Search Results

Search found 51569 results on 2063 pages for 'version number'.


  • CCUserDefault, iOS/Android and game updates

    - by Luke
    My game uses cocos2d-x and will be published on iOS first, later on Android. I save a lot of things with CCUserDefault (scores, which levels were completed, number of coins collected, etc.). But now I have a big doubt: what will happen when the game receives its first update? CCUserDefault uses an XML file stored somewhere in the app's storage space. This file is created and kept until the app is uninstalled. I am wondering what happens when the app is updated. Will the old XML file be preserved? Because if not, how should I handle app updates (updates in the sense that 2, 3 or more new level packages will be added, but the information about the old ones, like scores, which levels were finished and which were not, number of coins, etc., absolutely must not be lost)?
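    For what it's worth: on both iOS and Android an in-place update normally preserves the app's writable storage, so the CCUserDefault XML should survive updates and only be removed on uninstall. A belt-and-braces pattern is to store a schema-version key and migrate on first launch after an update. A minimal, language-agnostic sketch of that pattern in Python, with the key names invented for illustration (a dict stands in for the CCUserDefault key-value store):

        # Sketch of a defensive save-data versioning pattern (key names invented).
        CURRENT_SCHEMA = 2          # bump when an update changes what you store

        def migrate_save_data(prefs):
            stored = prefs.get("schema_version", 1)   # getIntegerForKey equivalent
            if stored < 2:
                # v2 added level packs: old per-level keys stay untouched,
                # new packs just get fresh keys with default values.
                prefs.setdefault("pack2_unlocked", False)
            prefs["schema_version"] = CURRENT_SCHEMA  # setIntegerForKey equivalent

        prefs = {"coins": 120, "level_3_done": True}  # data from the old version
        migrate_save_data(prefs)
        assert prefs["coins"] == 120                  # old progress preserved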

    Read the article

  • CodePlex Daily Summary for Sunday, November 28, 2010

    Popular Releases

    - MDownloader: MDownloader-0.15.25.7002: Fixed updater. Fixed FileServe. Fixed LetItBit.
    - Notepad.NET: Notepad.NET 0.7 Preview 1: What's New? * Optimized code generation, which means it will run significantly faster. * Preview of syntax highlighting: only VB.NET highlighting is supported; C# and Ruby will come in Preview 2. * Improved editing updates (when the line number, etc. updates) to be more graceful. * Recent Documents works! * Images can be inserted, but they're extremely large. Known Bugs: * The update process hangs: this is a bug apparently present since 0.5. It will be fixed in Preview 2. Until then, perform a fr...
    - Flickr Schedulr: Schedulr v2.2.3984: This is v2.2 of Flickr Schedulr, a Windows desktop application that automatically uploads pictures and videos to Flickr based on a schedule (e.g. to post a new picture every day at a certain time). What's new in this release? Videos are now supported as well. The application can now automatically add pictures to the queue when it starts up, from specific monitored folders. Sets and Groups can now be displayed as text only (without icons) to speed up the application. Fixed issue that ta...
    - Cropper: 1.9.4: Mostly fixes for issues, with a few feature requests. Fixed issues 2730, 3638, 14467, 11044, 11447, 11448, 11449, 14665. Implemented features 6123, 11581.
    - PFC: PFC for PB 11.5: This is just a migration from the 11.0 code. No changes have been made yet (and they are needed) for it to work properly with 11.5.
    - PDF Rider: PDF Rider 0.5: This release does not add any new feature for PDF manipulation, but enables automatic update checking, so it is recommended to install it in order to stay updated with future releases. Prerequisites: * Microsoft Windows operating systems (XP - Vista - 7) * Microsoft .NET Framework 3.5 runtime * A PDF rendering software (i.e. Adobe Reader) that can be opened inside Internet Explorer. Installation instructions: choose one of the following methods: 1. Download and run the "pdfRider0...
    - XamlQuery/WPF - The Write Less, Do More, WPF Library: XamlQuery-WPF v1.2 (Runtime, Source): This is the first release of the popular XamlQuery library for WPF. XamlQuery has already gained recognition among Silverlight developers.
    - Math.NET Numerics: Beta 1: First beta of Math.NET Numerics. Only contains the managed linear algebra provider. Beta 2 will include the native linear algebra providers along with better documentation and examples.
    - Minecraft GPS: Minecraft GPS 1.1: 1.1 release. New features: Compass! New style. Set opacity on main window to allow overlay of Minecraft.
    - Microsoft All-In-One Code Framework: Visual Studio 2010 Code Samples 2010-11-25: Code samples for Visual Studio 2010.
    - Wii Backup Fusion: Wii Backup Fusion 0.8.5 Beta: WBFS repair (default) options fixed. Transfer to image fixed. Settings UI widget names fixed. Some little bug fixes. You need to reset the settings! Delete WiiBaFu's config file or registry entries: Linux: ~/.config/WiiBaFu/wiibafu.conf; Windows: HKEY_CURRENT_USER\Software\WiiBaFu\wiibafu; Mac OS X: ~/Library/Preferences/com.wiibafu.wiibafu.plist. Caution: This is a BETA version! Errors, crashes and data loss are not impossible! Use in test environments only, not on productive syste...
    - Minemapper: Minemapper v0.1.3: Added process count and world size calculation progress to the status bar. Added View->'Status Bar' menu item to show/hide the status bar. Status bar is automatically shown when loading a world. Added a prompt, when loading a world, to use or clear cached images.
    - Sexy Select: sexy select v0.4: Changes in v0.4: Added method "elements", which returns all the option elements that are currently added to the select list. Added method "selectOption", which accepts two values: the element to be modified and the selected state (true/false).
    - Deep Zoom for WPF: First Release: This first release of the Deep Zoom control has the same source code, binaries and demos as the CodeProject article (http://www.codeproject.com/KB/WPF/DeepZoom.aspx).
    - BlogEngine.NET: BlogEngine.NET 2.0 RC: This is a Release Candidate version for BlogEngine.NET 2.0. The most current, stable version of BlogEngine.NET is version 1.6. Find out more about the BlogEngine.NET 2.0 RC here. If you want to extend or modify BlogEngine.NET, you should download the source code. To get started, be sure to check out our installation documentation and the installation screencast. If you are upgrading from a previous version, please take a look at the Upgrading to BlogEngine.NET 2.0 instructions. As this ...
    - NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel Template, version 1.0.1.156: The NodeXL Excel template displays a network graph using edge and vertex lists stored in an Excel 2007 or Excel 2010 workbook. What's new: this release adds a feature for aggregating the overall metrics in a folder full of NodeXL workbooks, adds geographical coordinates to the Twitter import features, and fixes a memory-related bug. See the complete NodeXL release history for details. Please note: there is a new option in the setup program to install for "Just Me" or "Everyone." Most people...
    - VFPX: FoxBarcode v.0.11: FoxBarcode v.0.11, released 2010.11.22. FoxBarcode is a 100% Visual FoxPro class that provides a tool for generating images with different bar code symbologies to be used in VFP forms and reports, or exported to other applications. Its use and distribution is free for the whole Visual FoxPro community. What's new: added a third parameter to the BarcodeImage() method; fixed some minor bugs. History: FoxBarcode v.0.10, released 2010.11.19, 85 downloads. Project page: FoxBarcode.
    - DotNetAge - a lightweight MVC jQuery CMS: DotNetAge 1.1.0.5: What is new in DotNetAge 1.1.0.5? Document library features and templates added. Resolved issues with templates. Improved publishing service performance. OPML support added. What is new in DotNetAge 1.1? D.N.A core updates: improved runtime performance, more stable. The DNA core objects model added. Personalization features added that allow users to create a personal website, manage their resources, and store personal data. DynamicUI: fixed the bug where the PageManager could not move a page node. ...
    - ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.3.1 and demos: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form and Pager. Tested on Mozilla, Safari, Chrome, Opera, IE 9b/8/7/6.
    - DotSpatial: DotSpatial 11-21-2010: This release introduces the following: Fixed bugs related to dispose, which caused issues when reordering layers in the legend. Fixed bugs related to assigning categories where NULL values are in the fields. New fast-acting resize using a bitmap "prediction" of what the final resized content will look like. ImageData.ReadBlock and ImageData.WriteBlock, which allow direct file access for reading or writing a rectangular window; bitmaps are used for holding the values. Removed the need to stor...

    New Projects

    - AC: Just a simple AC project.
    - Availability Manager Project (CMPE290): Availability Manager Project (CMPE290).
    - BCLExtensions: BCLExtensions is an extension method library for .NET 3.5 or higher which offers various extension methods for classes in the .NET Base Class Library (BCL).
    - Command Browser 2.0 for Visual Studio: The Command Browser is a Visual Studio add-in that lets you browse and execute Visual Studio commands on the fly. It's very handy for people who don't tend to use bindings on a daily basis and simply want to learn new commands out of interest.
    - EDM And T4: Generate a WCF service from your edmx.
    - Emperor of the Multiverse: Szoftverarchitekturak rocks :)
    - KINECT CALL: Kinect Call.
    - MultiSafepay for nopCommerce: This is the MultiSafepay shop plugin for nopCommerce. The current version supported is nopCommerce 1.60.
    - MyGadgebydeleted: MyGadgebydeleted.
    - NHashcash: NHashcash is a .NET implementation of the Hashcash mechanism for paying for e-mail with burnt CPU cycles.
    - OhYeah!: WP7 note demo application.
    - OMax - Online Market and Exchange for Real Estate: OMax Project, an online real estate management system. OMax stands for Online Market and eXchange and is a real estate management system focusing on both individuals (brokers, agents) and organizations (real estate companies) looking for online property management solutions.
    - Opus - Silverlight UI Framework: The Opus framework provides a simple way of creating UIs for Silverlight by leveraging your current MVVM structure.
    - Outliner, the edge detection utility: Outliner is a vectorizer of the edges in raster pictures. A new edge detection algorithm is used. It is developed in Visual C++ 2010 Express Edition, using WinAPI and STL.
    - SharpDropBox Client for .NET: SharpDropBox Client is a DropBox client for (at first) WP7. It caches files/directories in ISO storage, and also uses Sterling DB for metadata info so that it can easily do hash checks against DropBox, making it download only when there are changes. It can also be used offline.
    - SicunSimpleDiary: ???????(????)
    - Silverlight Navigation With Caliburn.Micro: Brings the power of the Caliburn.Micro MVVM framework to Silverlight 4 navigation applications. This sample application for CM shows how it can be used to create MVVM-friendly navigation applications which formerly required a view-first approach.
    - SilverTrace - a tracer and logger for Silverlight applications: SilverTrace, a tracer and logger for Silverlight applications.
    - StackDeck: StackDeck is a StackOverflow analysis tool written in WPF.
    - SVNHelper -- a small clean tool for Visual Studio: A small tool used to clean your bin directory and temporary directories for the whole solution. I usually use it before committing source to an SVN server.
    - System.Xml.Serialization: Expressions.Compiler is a lightweight XML serialization framework. It is an addition to the standard .NET XML serialization. It is designed to support, out of the box: * interface-driven XML serialization * class-driven XML serialization * runtime serialization
    - VoXign: Offers an introductory sign-language course as e-learning: self-managed scheduling and pacing, learning a "first necessity" vocabulary, and an introduction to Deaf culture and the linguistic specifics of LSF (French Sign Language).
    - Weyland: A .NET metaprogramming framework.

    Read the article

  • Is there a pedagogical game engine?

    - by K.G.
    I'm looking for a book, website, or other resource that gives modern 3D game engines the same treatment as Operating Systems: Design and Implementation gave operating systems. I have read Jason Gregory's Game Engine Architecture, which I enjoyed. However, by intent the author treated components of the architecture as atomic units, whereas what I'm interested in is the plumbing between those units that makes a coherent whole out of ideally loosely coupled parts. In books such as these, one usually reads that "that's academic," but that's the point! I have also read Julian Gold's Object-oriented Game Development, which likewise was good, but I feel is beginning to show its age. Since even mobile platforms these days are multicore and have fast video memory, those kinds of things (concurrency, display item buffering) would ideally be covered. There are other resources, such as the Doom 3 source code, which is highly instructive for its being a shipped product. The problem with those is as follows:

        float Q_rsqrt( float number )
        {
            long i;
            float x2, y;
            const float threehalfs = 1.5F;

            x2 = number * 0.5F;
            y  = number;
            i  = * ( long * ) &y;                      // evil floating point bit level hacking
            i  = 0x5f3759df - ( i >> 1 );              // what the f***?
            y  = * ( float * ) &i;
            y  = y * ( threehalfs - ( x2 * y * y ) );  // 1st iteration
        //  y  = y * ( threehalfs - ( x2 * y * y ) );  // 2nd iteration, this can be removed

            return y;
        }

    To wit, while brilliant, this kind of source requires more enlightenment than I can usually muster upon first read. In summary, here's my white whale:
    - For an adult reader with experience in programming. I wish I could save all the trees killed by every. Single. Game Programming book ever devoting the first two chapters to "Now just what is a variable anyway?"
    - In C or C++, very preferably C++. Languages that are more concise are fantastic for teaching, except for when what you want to learn is how to cope with a verbose language. There is also the benefit of the guardrails that C++ doesn't provide, such as garbage collection.
    - Platform agnostic. I'm sincerely afraid that this book is out there and it's Visual C++/DirectX oriented. I'm a Linux guy, and I'd do what it takes, but I would very much like to be able to use OpenGL.
    Thanks for everything! Before anyone gets on my case about it, Fast inverse square root was from Quake III Arena, not Doom 3!

    Read the article

  • Good Customer Service Example

    - by MightyZot
    Here’s another good customer service example for you! My wife purchased a Galaxy last week and she loves the phone.  She asked me to add it to our AT&T Microcell last night. I purchased the AT&T Microcell a couple of years ago, because cell signal out where I live sucks! Since microcells are managed on the AT&T web site, I went to the site and tried to sign in. Naturally, having not managed that microcell in a couple of years…and much to my chagrin…I discovered that I didn’t know my password OR my user ID. So, I decided to call and see if I could get my account reset that late in the day (we’re talking last night, so it was well after 7pm.) I called the technical support line, touched the appropriate numbers to navigate to microcell support, turned on my speaker phone, and prepared for the long wait. After about 45 seconds I was delighted to hear “Jeffrey” break in and ask what he could help me with. I explained that I have not managed my microcell for some time and had forgotten the user name and password.  “No problem”, he replied, and he asked me for the line I used to register the microcell. After confirming the last four digits of my IMEI number, he asked me for my wife’s number. I gave him my wife’s number and he said, “I’ve taken care of it Mr Pope. Just have her reboot her phone and you should see your microcell.” We rebooted her phone, it connected to the microcell, and voila, she was online! “Is there anything else I can help you with while I’ve got you on the line”, he said. “Nope”, I replied. “Ok, have a great night.” What made this a great customer service experience for me was that “Jeffrey” didn’t stop at giving me my user account and password, which I would probably forget anyway after setting up my wife’s new phone. Instead, he solved the real problem for me – adding my wife’s new phone to my microcell. Great job Jeffrey!

    Read the article

  • System does not detect USB pendrives

    - by cshubhamrao
    This USB thing is driving me crazy: 2 problems in a time span of 3 hours. I was already trying to cope with a "wrong fs type, bad option, bad superblock" error while mounting FAT drives when, to my amazement, I discovered that none of the USB storage devices showed up in the system. Useful output from tail /var/log/syslog:

        root@shubham-pc:~# tail /var/log/syslog
        Nov  7 21:41:47 shubham-pc colord: device removed: sysfs-HP-v250w
        Nov  7 21:41:51 shubham-pc kernel: [ 3441.529542] usb 1-1: USB disconnect, device number 11
        Nov  7 21:41:53 shubham-pc kernel: [ 3443.820029] usb 1-2: new high-speed USB device number 14 using ehci-pci
        Nov  7 21:41:54 shubham-pc kernel: [ 3443.952897] usb 1-2: New USB device found, idVendor=0781, idProduct=5530
        Nov  7 21:41:54 shubham-pc kernel: [ 3443.952905] usb 1-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
        Nov  7 21:41:54 shubham-pc kernel: [ 3443.952909] usb 1-2: Product: Cruzer
        Nov  7 21:41:54 shubham-pc kernel: [ 3443.952913] usb 1-2: Manufacturer: SanDisk
        Nov  7 21:41:54 shubham-pc kernel: [ 3443.952917] usb 1-2: SerialNumber: 20060876420EC6016847
        Nov  7 21:41:54 shubham-pc mtp-probe: checking bus 1, device 14: "/sys/devices/pci0000:00/0000:00:1d.7/usb1/1-2"
        Nov  7 21:41:54 shubham-pc mtp-probe: bus: 1, device: 14 was not an MTP device

    Read the article

  • Implementing a bit shift using AND, NOT, ADD [closed]

    - by fdart17
    I'm implementing a 16-bit left bit shift by r bits, and I only have access to AND, NOT and ADD. There are 3 condition codes, negative, zero and positive, which are set when you use any of these operations. How I went about it was: (1) AND the number with 1000 0000 0000 0000 to set the condition codes to positive if the most significant bit is 1. (2) Add the number to itself; this shifts the bits one to the left. (3) If the MSB was 1, add 1 to the result. (4) Loop through (1)-(3) r times. I'm wondering if anyone has hints for more efficient methods? Thanks!
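    One observation, sketched below: step (3) (re-inserting the old MSB at the bottom) actually makes this a left *rotate*; for a plain left shift you can drop steps (1) and (3) entirely, because on a 16-bit machine ADD simply discards the carry out of the top bit. A minimal model of both in Python (the AND mask emulates the 16-bit register; on real hardware it is implicit):

        MASK16 = 0xFFFF  # emulate a 16-bit register, since Python ints are unbounded

        def shl16(x, r):
            """Left shift by r using only ADD: x + x doubles, i.e. shifts by 1."""
            for _ in range(r):
                x = (x + x) & MASK16   # hardware ADD drops the carry for free
            return x

        def rol16(x, r):
            """Left rotate: AND tests the MSB, ADD doubles, then re-insert the bit."""
            for _ in range(r):
                msb = x & 0x8000       # step (1): AND with 1000 0000 0000 0000
                x = (x + x) & MASK16   # step (2): ADD the number to itself
                if msb:
                    x = x + 1          # step (3): old MSB wraps around to bit 0
            return x

        assert shl16(0b1010000000000001, 1) == 0b0100000000000010
        assert rol16(0b1010000000000001, 1) == 0b0100000000000011

    Either way the loop runs r times; with only AND, NOT and ADD there is no obvious way to beat one doubling per shifted bit for a variable r.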

    Read the article

  • My error with upgrading 4.0 to 4.2- What NOT to do...

    - by Steve Tunstall
    Last week, I was helping a client upgrade from the 2011.1.4.0 code to the newest 2011.1.4.2 code. We downloaded the 4.2 update from MOS, uploaded and unpacked it on both controllers, and upgraded one of the controllers in the cluster with no issues at all. As this was a brand-new system with no networking or pools made on it yet, there were no resources to fail back and forth between the controllers. Each controller had its own private management interface (igb0 and igb1) and that's it. So we took controller 1 as the passive controller and upgraded it first. The first controller came back up with no issues and was now on the 4.2 code. Great. We then did a takeover on controller 1, making it the active head (although there were no resources for it to take), and then proceeded to upgrade controller 2. Upon upgrading the second controller, we ran the health check with no issues. We then ran the update and it ran and rebooted normally. However, something strange then happened. It took longer than normal to come back up, and when it did, we got the "cluster controllers on different code" error message that one gets when the two controllers of a cluster are running different code. But we had just upgraded the second controller to 4.2, so they should have been the same, right??? Going into the Maintenance-->System screen of controller 2, we saw something very strange. The "current version" was still 4.0, and the 4.2 code was there but in the "previous" state with the rollback icon, as if it were the OLDER code and not the newer code. I have never seen this happen before. I would have thought it was a bad 4.2 code file, but it worked just fine with controller 1, so I don't think that was it. Other than the fact that the code did not update, there was nothing else going on with this system. It had no yellow lights, no errors in the Problems section, and no errors in any of the logs. It was just out of the box a few hours ago, and didn't even have a storage pool yet. So... we deleted the 4.2 code, uploaded it from scratch, ran the health check, and ran the upgrade again. Once again, it seemed to go great, rebooted, and came back up with the same issue, where it came up on 4.0 instead of 4.2. HERE IS WHERE I MADE A BIG MISTAKE. I SHOULD have instantly called support and opened a Sev 2 ticket. They could have done a shared shell and gotten the correct Fishworks engineer to look at the files and the code and determine what file was messed up and fix it. The system was up and working just fine; it was just on an older code version, not really a huge problem at all. Instead, I went ahead and clicked the "Rollback" icon, thinking that the system would roll back to the 4.2 code. Ouch... What happened was that the system said, "Fine, I will delete the 4.0 code and boot to your 4.2 code"... which was stupid on my part, because something was wrong with the 4.2 code file here and the 4.0 was just fine. So now the system could not boot at all, the 4.0 code was completely missing from the system, and even a high-level Fishworks engineer could not help us. I had messed it up good. We could only get to the ILOM, and I had to re-image the system from scratch using a hard-to-get-and-use FishStick USB drive. These are tightly controlled and difficult to get, almost always handcuffed to an engineer who will drive out to re-image a system. This took another day of my client's time. So...

    If you see a "previous version" of your system code which is actually a version higher than the current version... DO NOT ROLL IT BACK. It did not upgrade for a very good reason. In my case, after the system was re-imaged to a code level just 3 back, we once again tried the same 4.2 code update and it worked perfectly the first time and is now great and stable. Lesson learned.

    By the way, our buddy Ryan Matthews wanted to point out the best-practice, supported way of performing an upgrade of an active/active ZFSSA, where both controllers are doing some of the work. These steps would not have helped me with the above issue, but it's important to follow the correct procedure when doing an upgrade.

    1) Upload software to both controllers and wait for it to unpack.
    2) On controller "A" navigate to configuration/cluster and click "takeover".
    3) Wait for controller "B" to finish restarting, then log in to it, navigate to maintenance/system, and roll forward to the new software.
    4) Wait for controller "B" to apply the update and finish rebooting.
    5) Log in to controller "B", navigate to configuration/cluster and click "takeover".
    6) Wait for controller "A" to finish restarting, then log in to it, navigate to maintenance/system, and roll forward to the new software.
    7) Wait for controller "A" to apply the update and finish rebooting.
    8) Log in to controller "B", navigate to configuration/cluster and click "failback".

    Read the article

  • Why are so many questions closed? [closed]

    - by Kim Jong Woo
    Why are there so many questions on this Stack Exchange site closed? I mean far more than usual. Even very high-quality discussions are closed. Doesn't this high number of closed questions, with high view counts and good-quality content, suggest that the current policy governing what counts as an appropriate question might be going against the site's nature? It feels as if a lot of questions and discussions cover everything surrounding programmers and programming, and need not be objective or seek a definitive answer. It appears a lot of questions are inquisitive in nature, seeking insight into other programmers and finding common subjects of interest. Is it possible for the mods to relax a bit? A lot of great questions with the [closed] tag everywhere doesn't do them justice. This question in itself is a perfect example of what I am talking about, and it will be closed. But I think my point is clear.

    Read the article

  • statistics for checking imported data?

    - by user1936
    I'm working on a data migration of several hundred nodes from a Drupal 6 to a Drupal 7 site. I've got the data exported to the new site and I want to check it. Harkening back to my statistics classes, I recall that there is some way to figure out how many randomly chosen nodes to check to give me some level of confidence that the whole process was correct. Can anyone enlighten me as to this practical application of statistics? For any given number of units, how big must the sample be to achieve a given confidence interval?
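    For reference, one standard tool here is Cochran's sample-size formula with a finite-population correction, treating "this node migrated correctly" as a yes/no proportion. A rough sketch (the 500-node population and 5% margin are made-up example numbers):

        import math

        def sample_size(population, z=1.96, margin=0.05, p=0.5):
            """Cochran's formula with finite-population correction.
            z:      z-score for the confidence level (1.96 ~ 95%)
            margin: acceptable margin of error (0.05 = +/-5%)
            p:      expected error proportion; 0.5 is the worst case
            """
            n0 = (z ** 2) * p * (1 - p) / margin ** 2
            return math.ceil(n0 / (1 + (n0 - 1) / population))

        print(sample_size(500))   # -> 218 nodes to check, at 95% confidence +/-5%

    One caveat: this bounds the *proportion* of bad nodes within the margin; it cannot certify that every single node is correct, which only a full check can do.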

    Read the article

  • Upgrading to Code Based Migrations EF 4.3.1 with Connector/Net 6.6

    - by GABMARTINEZ
    Entity Framework 4.3.1 includes a new feature called code-first migrations. We are adding support for this feature in our upcoming 6.6 release of Connector/Net. In this walk-through we'll see the workflow of code-based migrations when you have an existing application and would like to upgrade to this EF 4.3.1 version and use this approach, so you can keep track of the changes you make to your database. The first thing we need to do is add the new Entity Framework 4.3.1 package to our application. This should be done via the NuGet package manager. You can read more about why EF is not part of the .NET Framework here.

    Adding EF 4.3.1 to our existing application

    Inside VS 2010 go to Tools -> Library Package Manager -> Package Manager Console. This will open the PowerShell host window where we can work with all the EF commands. In order to install this library in your existing application, type:

        Install-Package EntityFramework

    This will make some changes to your application, so let's check them. In your .config file you'll see a <configSections> element which contains the version of EntityFramework you have, and an <entityFramework> section was also added. This section is by default configured to use SQL Express, which won't be necessary in this case, so you can comment it out or leave it empty. Also please make sure you're using the Connector/Net 6.6.x version, which is the one that has this support. At this point we face one issue: in order to be able to work with migrations we need the __MigrationHistory table, which we don't have yet since our database was created with an older version. This table is used to keep track of the changes in our model, so we need to get it into our existing database.

    Getting a migration-history table into an existing database

    The first thing we need to do to enable migrations in our existing application is to create our configuration class, which will set up the MySqlClient provider as our SQL generator. We add it with the following code:

        using System.Data.Entity.Migrations;  // add this at the top of your .cs file

        // Make sure to use the name of your existing DbContext
        public class Configuration : DbMigrationsConfiguration<NameOfYourDbContext>
        {
            public Configuration()
            {
                // Set automatic migrations to false, since we'll be applying
                // the migrations manually in this case.
                this.AutomaticMigrationsEnabled = false;
                SetSqlGenerator("MySql.Data.MySqlClient", new MySql.Data.Entity.MySqlMigrationSqlGenerator());
            }
        }

    This code sets up the configuration that we'll be using when executing all the migrations for our application. Once we have done this we can build our application to check that everything is fine.

    Creating our initial migration

    Now let's add our initial migration. In the Package Manager Console, execute "add-migration InitialCreate". You can use any other name, but I like to set this as our initial create for future reference. After we run this command, some changes are made in our application: a new Migrations folder is created, and a new migration class called InitialCreate is added, which in most cases should have empty Up and Down methods as long as your database is up to date with your model. Since all your entities already exist, delete any duplicated code that would create an entity which already exists in your database. I find this easier when you don't have any pending updates to apply to your database.

    Now we have our empty migration that will make no changes in our database and represents how things stand at the beginning of our migrations. Finally, let's create our __MigrationHistory table. Optionally, you can add SQL code to delete the EdmMetadata table, which is not needed anymore:

        public override void Up()
        {
            // Just make sure that you used a 4.1 or later version
            Sql("DROP TABLE EdmMetadata");
        }

    From our Package Manager Console let's type:

        Update-Database

    If you would like to see the operations performed by each Update-Database command, you can add the -Verbose flag after Update-Database. This will make two important changes. It will execute the Up method in the initial migration, which makes no changes in the database. And second, and very important, it will create the __MigrationHistory table necessary to keep track of your changes. The next time you make a change to your database, it will compare the current model to the one stored in the Model column of this table.

    Conclusion

    The important thing about this walk-through is that we must create our initial migration before we start making any changes to our model. This way we'll be adding the necessary __MigrationHistory table to our existing database, so we can keep our database up to date with all the changes we make in our context model using migrations. Hope you have found this information useful. Please let us know if you have any questions or comments, and also please check our forums here, where we keep answering questions for the community. Happy MySQL/.Net coding!

    Read the article

  • Using dot To Access Object Attribute and Proper abstraction

    - by cobie
    I have been programming in Python and Java for quite a number of years, and one thing I find myself doing is using Java-style setters and getters in Python, even though a number of blogs seem to think that using dot notation for access is the Pythonic way. What I would like to know is whether using the dot to access attributes violates the abstraction principle. If, for example, I implement an attribute as a single object and use dot notation to access it, and I later want to change the code so that the attribute is represented by a list of objects, that would seem to require quite some heavy lifting, which violates the abstraction principle.
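    For the record, this is exactly the case Python's property decorator handles: call sites keep plain dot access while the representation behind the attribute changes. A minimal sketch of the scenario described (names invented):

        class Basket:
            def __init__(self):
                self._items = []               # representation is now a list...

            @property
            def item(self):
                # ...but old callers reading `basket.item` are unaffected:
                # attribute access is routed through this method.
                return self._items[0] if self._items else None

            @item.setter
            def item(self, value):
                self._items = [value]

        basket = Basket()
        basket.item = "apple"    # still plain dot access, no set_item() call
        print(basket.item)       # -> apple

    Because a plain attribute can be swapped for a property later without touching any caller, starting with dot access doesn't forfeit the abstraction; that is the usual argument for skipping Java-style getters in Python.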

    Read the article

  • Using HBase or Cassandra for a token server

    - by crippy
    I've been trying to figure out how to use HBase/Cassandra for a token system we're re-implementing. I can probably squeeze quite a lot more from MySQL, but it just seems we're clinging to the wrong tool for the task because we know it well, and eventually we will hit a wall (as happened to us in other areas). Naturally I started looking into possible NoSQL solutions. The prominent ones (at least in terms of buzz) are HBase and Cassandra. The story is more or less like this: A user can send a gift to other users. Each gift has a list of recipients, or is public, in which case it is limited by number or expiration date. For each gift sent we generate a token that uniquely identifies that gift. For each gift we track the list of potential recipients and their current status relating to that gift (accepted, declined, etc). A user can request to see all his currently pending gifts. A user can request a list of users he has sent a gift to today (used to limit the number of gifts sent). We require the ability to "dump" or "ignore" expired gifts (gifts x days old are considered expired). There are some other requirements, but I believe the above covers the essentials. How would I go about modeling that using HBase or Cassandra? Well, the wall was performance. A few tens of millions of records per day over 2 tables, kept for 2 weeks (I wish I could have kept them longer, but there was no way). The response times kept getting slower and slower until eventually we had to start cutting down the number of days we kept data. Caching helps here, but it's not an ideal solution since a big part of the ops are updates. Also, as I hinted in my original post, we use MySQL extensively. We know exactly what it can and can't do, both in naive implementations, followed by native partitioning, and finally by horizontally sharding our dataset at the application level to reside on multiple DB nodes. It can be done, but that's not really what I'm trying to get from this. I asked a very specific question about designing a solution using a NoSQL solution, since it's very hard to find examples of such designs out there. Brainlag, I'm not trying to come off as rude. I actually appreciate a lot that you are the only one who even bothered to respond, but I see it over and over again: people ask questions and others assume they have no idea what they're talking about and give an irrelevant answer. Ignore RDBMS please; the question is about NoSQL.
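    Not an authoritative schema, but the query-driven shape such designs usually take in Cassandra/HBase terms is one denormalized table per query, keyed by whatever you look up; modeled here with plain Python dicts (all table and field names invented). Day-bucketed keys make expiring old gifts a cheap whole-bucket delete:

        import uuid
        from datetime import date
        from collections import defaultdict

        gift_by_token = {}                        # token -> gift details
        pending_by_recipient = defaultdict(dict)  # user -> {token: status}
        sent_by_user_day = defaultdict(list)      # (sender, day) -> tokens

        def send_gift(sender, recipients, payload):
            token = uuid.uuid4().hex              # unique token per gift
            gift_by_token[token] = {"sender": sender, "payload": payload}
            for r in recipients:
                pending_by_recipient[r][token] = "pending"
            today = date.today().isoformat()
            sent_by_user_day[(sender, today)].append(token)
            return token

        def pending_gifts(user):                  # "all my pending gifts"
            return {t: s for t, s in pending_by_recipient[user].items()
                    if s == "pending"}

        def sent_today(sender):                   # enforce the daily limit
            return sent_by_user_day[(sender, date.today().isoformat())]

    In Cassandra each dict would roughly correspond to a column family, and the x-day expiry falls out of column TTLs or dropping whole day buckets, which sidesteps the delete-heavy workload that hurt in MySQL.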

    Read the article

  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved.

    Background

    Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options: shares, for proportional CPU allocation (if you have twice as many shares as me, and we are competing for CPU, you'll get about twice as many CPU cycles); dedicated CPU allocation, in which a number of CPUs are exclusively dedicated to an application's use (you can say that a zone or project "owns" 8 CPUs on a 32-CPU machine, for example); and capped CPU, in which you specify the upper bound, or cap, of how much CPU an application gets (for example, you can throttle an application to 0.125 of a CPU). (This isn't meant to be an exhaustive list of Solaris RM controls.)

    Workload management

    Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only 1 app per OS instance - and wastefully size every server for the peak workload it might experience), that's not really workload management. With resource management one controls the resources and hopes that's enough to meet application service objectives. In effect, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if it didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much: we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to - it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with it to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost": Me: "It won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound." User: "Please do it anyway." Me: "Oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.)

    Prior art

    There are some operating environments that take a stab at workload management (rather than resource management), but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to consume a target rate of service units per second. But this seems to be missing a key point: what is the relationship between artificial 'service units' and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so I am getting enough service units), but not enough of the resource that's needed to remove the bottleneck?

    Actual workload management

    That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided, and then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly. If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words - don't hold the number of CPU shares constant and watch the achievement of the service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or the amount of other resources like RAM or I/O bandwidth) in order to meet the objective.

    Instrumenting non-instrumented applications

    There's one little problem here: how do I measure application performance in a way that relates to a service level? I don't want to do it based on internal resources like the number of CPU seconds received per minute - we need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing the application enough to keep it from meeting its objectives. I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility lets me introspect on application behavior: in particular I can see events like "receive a web hit" and "respond to that web hit", so I can get transaction rate and response time. DTrace (and tools like prstat) lets me see where latency is being added to an application, so I know which resource to adjust.

    Summary

    After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resources constant and suffering variable levels of success in meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove the delays preventing service level attainment. I've done it by hand for a long time - I think that's what a computer should do for me.
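    The control loop at the heart of this is small; here is a hypothetical sketch of it in Python, where measure_response_time() and set_cpu_shares() stand in for the DTrace instrumentation and the resource-manager interface (both names invented):

        import time

        OBJECTIVE_MS = 200   # service level objective: response time
        STEP = 10            # shares to add or remove per adjustment
        DEADBAND = 0.10      # tolerate +/-10% before touching anything

        def workload_manager(wl, measure_response_time, set_cpu_shares,
                             shares=100, interval=30):
            """Hold the service level constant; vary the allocation."""
            while True:
                observed = measure_response_time(wl)
                if observed > OBJECTIVE_MS * (1 + DEADBAND):
                    shares += STEP                     # missing the SLO: add CPU
                elif observed < OBJECTIVE_MS * (1 - DEADBAND):
                    shares = max(STEP, shares - STEP)  # overshooting: give back
                set_cpu_shares(wl, shares)
                time.sleep(interval)

    The sketch adjusts only CPU shares; the point of the patent, as described above, is that the delay measurements tell you *which* resource (CPU, RAM, I/O) to adjust, rather than blindly turning one knob.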

    Read the article

  • A Knights Tale

    - by Phil Factor
    There are so many lessons to be learned from the story of Knight Capital losing nearly half a billion dollars as a result of a deployment gone wrong. The Knight Capital Group (KCG.N) was an American global financial services firm engaging in market making, electronic execution, and institutional sales and trading. According to the recent order (File No. 3.15570) against Knight Capital by the U.S. Securities and Exchange Commission, Knight had for many years used software which broke up incoming "parent" orders into smaller "child" orders that were then transmitted to various exchanges or trading venues for execution. A tracking 'cumulative quantity' function counted the number of 'child' orders and stopped the process once the total of child orders matched the 'parent', at which point the parent order was complete. Back in the mists of time, some code had been added which was executed only if a particular flag was set. It was called 'Power Peg' and seems to have had a similar design and purpose, but, one guesses, would have shared the same tracking function. This code had been abandoned in 2003, but never deleted. In 2005, the tracking function was moved to an earlier point in the main process. It would seem from the account that, from that point on, had that flag ever been set, the old 'Power Peg' would have been executed like Godzilla bursting from the ice, creating child orders without limit and without any tracking function. It wasn't, presumably because the software that set the flag was removed. In 2012, nearly a decade after 'Power Peg' was abandoned, Knight prepared a new module for their software to cope with the imminent Retail Liquidity Program (RLP) for the New York Stock Exchange. By this time, the flag had remained unused, and someone made the fateful decision to reuse it and replace the old 'Power Peg' code with the new RLP code. Had the two actions been done together in a single automated deployment, and the new deployment tested, all would have been well. It wasn't. To quote:

    "Beginning on July 27, 2012, Knight deployed the new RLP code in SMARS in stages by placing it on a limited number of servers in SMARS on successive days. During the deployment of the new code, however, one of Knight's technicians did not copy the new code to one of the eight SMARS computer servers. Knight did not have a second technician review this deployment and no one at Knight realized that the Power Peg code had not been removed from the eighth server, nor the new RLP code added. Knight had no written procedures that required such a review." (para 15)

    "On August 1, Knight received orders from broker-dealers whose customers were eligible to participate in the RLP. The seven servers that received the new code processed these orders correctly. However, orders sent with the repurposed flag to the eighth server triggered the defective Power Peg code still present on that server. As a result, this server began sending child orders to certain trading centers for execution. Because the cumulative quantity function had been moved, this server continuously sent child orders, in rapid sequence, for each incoming parent order without regard to the number of share executions Knight had already received from trading centers. Although one part of Knight's order handling system recognized that the parent orders had been filled, this information was not communicated to SMARS." (para 16)

    SMARS routed millions of orders into the market over a 45-minute period, and obtained over 4 million executions in 154 stocks for more than 397 million shares. By the time Knight stopped sending the orders, it had assumed a net long position in 80 stocks of approximately $3.5 billion and a net short position in 74 stocks of approximately $3.15 billion. Knight's shares dropped more than 20% after traders saw extreme volume spikes in a number of stocks, including preferred shares of Wells Fargo (JWF) and semiconductor company Spansion (CODE). Both stocks, which see roughly 100,000 trades per day, had changed hands more than 4 million times by late morning. Ultimately, Knight lost over $460 million from this wild 45 minutes of trading. Obviously, I'm interested in all this because, at one time, I used to write trading systems for the City of London. The US SEC is in a far better position than any of us to work out the failings of Knight's IT department, and the report makes for painful reading. I can't help observing, though, that even with the breathtaking mistakes all along the way, a robust automated deployment process that was 'all-or-nothing', and tested from soup to nuts, would have prevented the disaster. The report reads like a Greek tragedy. All the way along, one wants to shout 'No! Not that way!' and 'Aargh! Don't do it!'. As the tragedy unfolds, the audience weeps for the players, trapped by a cruel fate. All application development and deployment requires defense in depth. All IT goes wrong occasionally, but if there is a culture of defensive programming throughout, the consequences are usually containable. For financial systems, these defenses are required by statute, and ignored only by the foolish. Knight's mistakes weren't made by just one hapless sysadmin, but were progressive errors by an IT culture spanning at least ten years. One can spell these out, but I think they're obvious. One can only hope that the industry studies what happened in detail, learns from the mistakes, and draws the right conclusions.

    Read the article

  • How to get the correct battery status?

    - by GUI Junkie
    At this moment, ever since I installed Ubuntu on this machine, the battery status says: not present. Looking at this answer, however, I find that /proc/acpi/battery/BAT1/info (sometimes it's /proc/acpi/battery/BAT0/info; use tab completion to help) has the following info:

        present:                 yes
        design capacity:         4400 mAh
        last full capacity:      4400 mAh
        battery technology:      rechargeable
        design voltage:          11100 mV
        design capacity warning: 300 mAh
        design capacity low:     132 mAh
        cycle count:             0
        capacity granularity 1:  32 mAh
        capacity granularity 2:  32 mAh
        model number:            BAT1
        serial number:           11
        battery type:            11
        OEM info:                11

    In accordance with this answer, I've checked the /proc/acpi/battery/BAT1/state file:

        present:            yes
        capacity state:     ok
        charging state:     charged
        present rate:       unknown
        remaining capacity: unknown
        present voltage:    10000 mV

    The acpi -b command returns:

        Battery 0: Unknown, 0%, rate information unavailable

    Any suggestions on getting the battery info updated?

    Read the article

  • ATG Live Webcast Nov. 29th: Endeca "Evolutionizes" E-Business Suite

    - by Bill Sawyer
    If you have ever wanted any of the following within Oracle E-Business Suite:
    - Complete Data View
    - Advanced Searching Across Organizations and Flexfields
    - Advanced Visualization including Charts, Metrics, and Cross Tabs
    - Guided Navigation
    then you might want to attend this webcast to learn more about Oracle Endeca's integration with Oracle E-Business Suite. Oracle Endeca includes an unstructured data correlation and analytics engine, together with catalog search and guided navigation capabilities. This webcast focuses on the details behind Oracle Endeca's integration with Oracle E-Business Suite and demonstrates how you can extend the use of Oracle Endeca into other areas of Oracle E-Business Suite.

    Date: Thursday, November 29, 2012
    Time: 8:00 AM - 9:00 AM Pacific Standard Time
    Presenter: Osama Elkady, Senior Director
    Webcast Registration Link (preregistration is optional but encouraged)

    To hear the audio feed:
    Domestic Participant Dial-In Number: 877-697-8128
    International Participant Dial-In Number: 706-634-9568
    Additional International Dial-In Numbers Link
    Dial-In Passcode: 103192

    To see the presentation, the Direct Access Web Conference details are:
    Website URL: https://ouweb.webex.com
    Meeting Number: 595335921

    If you miss the webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay, and you can find our archive of past webcasts and training here. If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.

    Read the article

  • How to spot a good social media marketer

    - by fiftyeight
    It's a bit subjective but helpful and on-topic IMO; maybe it should be made community wiki. I've been in contact with some guest bloggers lately who are interested in publishing posts on my website's blog. So far I've run a regular weekly blog and had a guy marketing it, but he's too busy to do more work right now. Now I need someone to market these blog posts on social media. The last guy I got by recommendation from a friend, but now I need to find one myself. What criteria should I check when it comes to social media marketers? Do the number of followers and fans on their accounts and/or the number of votes they get for articles they market mean anything, or is it impossible to know whether it's spam? That's about the only criterion I could think of so far... Thanks to anyone who helps.

    Read the article

  • Is it a good idea to use a formula to balance a game's complexity, in order to keep players in constant flow?

    - by user1107412
    I've read a lot about flow theory and its applications to video games, and an idea has stuck in my mind. If weight values are applied to different parameters of a game level (e.g. the size of the level, the number of enemies, their overall strength, the variance in their behavior, etc.), then it should be technically possible to compute an overall complexity score for each level in the game. If a constant ratio of complexity increase were empirically defined, for instance 1.3333, or, say, the golden ratio, would it be a good idea to arrange the levels in such an order that the increase in overall complexity tends toward that ratio? Has somebody tried it?
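    As a sketch of what that might look like (the weights and the 4/3 target ratio are arbitrary example values):

        WEIGHTS = {"size": 0.2, "enemies": 1.0, "strength": 1.5, "variance": 0.8}
        TARGET_RATIO = 4 / 3     # desired complexity growth per level

        def complexity(level):
            """Weighted overall score over a level's tunable parameters."""
            return sum(WEIGHTS[k] * level[k] for k in WEIGHTS)

        def difficulty_curve(levels):
            """Order levels by score and report each step's growth ratio."""
            ordered = sorted(levels, key=complexity)
            ratios = [complexity(b) / complexity(a)
                      for a, b in zip(ordered, ordered[1:])]
            return ordered, ratios

        levels = [
            {"size": 10, "enemies": 5,  "strength": 2, "variance": 1},
            {"size": 12, "enemies": 8,  "strength": 3, "variance": 2},
            {"size": 20, "enemies": 12, "strength": 5, "variance": 3},
        ]
        _, ratios = difficulty_curve(levels)
        print(ratios)            # compare against TARGET_RATIO to spot spikes

    Steps whose ratio lands far from the target are candidates for re-tuning; whether players actually perceive a constant ratio as a constant difficulty step is exactly the empirical question the post raises.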

    Read the article

  • What is the replacement for the Web Intents HTML standard?

    - by Tom
    "Web Intents" were deprecated in Chrome 24 (November/2011) and are no longer supported in any browser: We also gathered a lot of valuable data and feedback from our experimental support for Web Intents and decided to disable the feature in today's Beta release. Is there an HTML5 standard that I can look into as an alternative to what Web Intents intended? I'm interested in how web services can be stitched together. For example, imagine a website that can import a image from any number of web-services, modify the image in some way, then push the file back to any number of other web-services, all via HTML5 standards.

    Read the article
