Search Results

Search found 46104 results on 1845 pages for 'run dialog'.


  • Java applet game design no keyboard focus

    - by Sri Harsha Chilakapati
    I'm making an applet game, and it is rendering, the game loop is running, and the animations are updating, but keyboard input is not working. (This is probably the wrong place; I also posted it on Stack Overflow.) Here's an SSCCE:

        public class Game extends JApplet implements Runnable {
            public void init() {
                // Initialize the game when called by the browser
                setFocusable(true);
                requestFocus();
                requestFocusInWindow(); // Always returning false
                GInput.install(this); // Install the input manager for this class
                new Thread(this).start();
            }

            public void run() {
                startGameLoop();
            }
        }

    And here's the GInput class:

        public class GInput implements KeyListener {
            public static void install(Component c) {
                new GInput(c);
            }

            public GInput(Component c) {
                c.addKeyListener(this);
            }

            public void keyPressed(KeyEvent e) {
                System.out.println("A key has been pressed");
            }
            ......
        }

    When run as an applet it doesn't work; when I add the Game class to a frame, it works properly. Thanks
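
    One likely culprit, given that requestFocusInWindow() keeps returning false: browsers typically don't hand an applet keyboard focus until the user clicks it. Swing key bindings sidestep this, because WHEN_IN_FOCUSED_WINDOW bindings fire while the window is focused even if the component itself never gains focus. A minimal sketch of that alternative (an assumed rework, not the poster's code):

        import javax.swing.*;
        import java.awt.event.ActionEvent;

        public class GInputBindings {
            public static void install(JComponent c) {
                // Delivered even when 'c' is not the focus owner
                c.getInputMap(JComponent.WHEN_IN_FOCUSED_WINDOW)
                 .put(KeyStroke.getKeyStroke("SPACE"), "space");
                c.getActionMap().put("space", new AbstractAction() {
                    public void actionPerformed(ActionEvent e) {
                        System.out.println("Space has been pressed");
                    }
                });
            }
        }

    In the applet, calling GInputBindings.install(getRootPane()) from init() would be the natural hook.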


  • Site Web Analytics not updating Sharepoint 2010

    - by Rohit Gupta
    If you are facing the issue that the Web Analytics reports in SharePoint 2010 Central Administration are not updating — when you go to your site > Site Settings > Site Web Analytics reports or Site Collection Analytics reports, you get old data, with the ribbon displaying "Data Last Updated: 12/13/2010 2:00:20 AM" — please ensure that the following things are covered: The Usage and Health Data Collection service is configured correctly. The Log Collection Schedule is configured correctly. The Microsoft SharePoint Foundation Usage Data Import and Microsoft SharePoint Foundation Usage Data Processing timer jobs are configured to run at regular intervals. One last important timer job is the Web Analytics Trigger Workflows Timer Job: ensure that it is enabled and scheduled to run at regular intervals (for each site that you need analytics for). After you have ensured that the Web Analytics service configuration is working fine, that the Usage Data Import job is importing the *.usage files from the ULS logs folder into the WSS_Logging database, and that all the required timer jobs are running as expected... wait for a day for the report to get updated. The report is refreshed automatically at 2:00 AM, and I could not find a way to control the schedule for this report update job. So be sure to wait a day before giving up :)
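
    To spot-check those timer jobs without clicking through Central Administration, the SharePoint 2010 Management Shell can list them. A sketch — the name filter is an assumption and may need widening for a given farm:

        Get-SPTimerJob |
            Where-Object { $_.Name -like "*usage*" -or $_.Name -like "*web analytics*" } |
            Select-Object Name, Schedule, LastRunTime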


  • High Performance SQL Views Using WITH(NOLOCK)

    - by gt0084e1
    Every now and then you find a simple way to make everything much faster. We often find customers creating data warehouses or OLAP cubes even though they have a relatively small amount of data (a few gigs) compared to their server memory. If you have more server memory than the size of your database or working set, nearly any aggregate query should run in a second or less. In some situations there may be high traffic from the transactional application, and SQL Server may wait for several other queries to run before giving you your results. The purpose of this is to make sure you don't get two versions of the truth. In an ATM system, you want to give the bank balance after the withdrawal, not before, or you may get a very unhappy customer. So by default, databases are rightly very conservative about this kind of thing. Unfortunately, this split-second precision comes at a cost: the performance of the query may not be acceptable by today's standards, because the database has to maintain locks on the server. Fortunately, SQL Server gives you a simple way to ask for the current version of the data without the pending transactions. To better facilitate reporting, you can create a view that includes these directives:

        CREATE VIEW CategoriesAndProducts AS
        SELECT *
        FROM dbo.Categories WITH(NOLOCK)
        INNER JOIN dbo.Products WITH(NOLOCK)
            ON dbo.Categories.CategoryID = dbo.Products.CategoryID

    In some cases, queries that were taking minutes end up taking seconds. Much easier than moving the data to a separate database, and it's still pretty much real time, give or take a few milliseconds. You've been warned not to use this for bank balances, though. More from Data Stream
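
    A related option, if you'd rather not repeat NOLOCK hints in every view (and want to avoid their quirks, such as occasionally reading a row twice or not at all during page splits): row versioning at the database level. A sketch, assuming you control the database settings — the database name is a placeholder:

        -- Readers see the last committed row version instead of blocking on writers
        ALTER DATABASE MySalesDb SET READ_COMMITTED_SNAPSHOT ON;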


  • Keeping Xv Overlay configuration throughout an X session.

    - by kriss
    After upgrading my Linux system from Ubuntu 9.04 to Ubuntu 10.10, I succeeded in correcting most problems (all related to Intel 82865G Integrated Graphics Adapter support; compiz is still not working, but that's another matter), but for one I only have a partial solution. Whenever I play a video, the colors are much too saturated. This is a real problem for skin tones, which appear reddish (everyone seems to be coming back from a ski vacation with deep sunburns). As this effect only occurs with videos, not with pictures, I finally figured out it was related to the Xv overlay configuration, and I can correct it by typing:

        xvattr -a XV_SATURATION -v 120

    This changes the default saturation value, which is 500 and much too high in my case; by eye, the correct value seems to be between 100 and 150. Now my problem is that I have to type the above command each time I run a video. If I type it before running the video, it has no effect; if I close the video and open a new one, I have to type it again, etc. I tried to put it in Xsession and (logically) it has no effect either. How can I get the correct setting whenever I run a video, without typing the above command every time?
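
    One workaround worth trying (an untested sketch): since the value only seems to stick while a player holds the Xv port, wrap the player in a script that applies the setting just after playback starts:

        #!/bin/sh
        # hypothetical wrapper around the video player
        mplayer "$@" &
        sleep 2                            # give the player time to grab the Xv port
        xvattr -a XV_SATURATION -v 120
        wait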


  • USB blocks suspend on a Gigabyte GA-890GPA-UD3H with ATI SB700/SB800

    - by poolie
    Following on from question 12397, I'd still like to get suspend working on my Phenom II X6 / GA-890GPA desktop machine running current Maverick. When I run pmi action suspend, the machine doesn't crash, but it also doesn't suspend. The kernel logs show:

        PM: Syncing filesystems ... done.
        PM: Preparing system for mem sleep
        Freezing user space processes ... (elapsed 0.02 seconds) done.
        Freezing remaining freezable tasks ... (elapsed 0.01 seconds) done.
        PM: Entering mem sleep
        Suspending console(s) (use no_console_suspend to debug)
        pm_op(): usb_dev_suspend+0x0/0x20 returns -2
        PM: Device usb8 failed to suspend async: error -2
        PM: Some devices failed to suspend
        PM: resume of devices complete after 0.430 msecs
        PM: resume devices took 0.000 seconds
        PM: Finishing wakeup.
        Restarting tasks ... done.
        PM: Syncing filesystems ...

    I've tried disconnecting all the USB devices and then connecting in to run pmi over ssh, and I get the same failure. With everything unplugged, I see the following USB devices:

        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    and lspci shows the physical devices are:

        00:12.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller
        00:12.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller
        00:13.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller
        00:13.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller
        00:14.5 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI2 Controller
        00:16.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller
        00:16.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller
        02:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03)

    Booting with no_console_suspend makes no difference.
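
    Since it's usb_dev_suspend failing on bus usb8, one workaround pattern from this era (a sketch, not a verified fix for this board) is a pm-utils hook that unbinds the offending controller before sleep and rebinds it on resume. The PCI address below is a guess — match it to whichever controller owns usb8, e.g. via ls -l /sys/bus/usb/devices/usb8:

        #!/bin/sh
        # /etc/pm/sleep.d/20_usb-unbind  (hypothetical hook)
        DEV=0000:02:00.0                 # assumed: the NEC USB 3.0 controller
        DRV=/sys/bus/pci/drivers/xhci_hcd
        case "$1" in
            suspend) echo -n "$DEV" > "$DRV/unbind" ;;
            resume)  echo -n "$DEV" > "$DRV/bind" ;;
        esac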


  • Can you add DoubleClick macros to existing ads

    - by picus
    Setup: A few weeks back I made some very simple HTML5 "ads" to run on a few of our partner sites. They weren't paid ads, as we also manage these sites; however, there are a few of them, so I made a modular solution that is hosted on one of our web servers and included on each page via JavaScript, which outputs an iframe. Each search (the ad has a search box) or click appends a URL param that we track using custom vars in Google Analytics. In essence, the ad is an HTML page served in an iframe via JavaScript. Problem: We have an opportunity to run these ads on a third-party site. I sent them a brief how-to for inserting them, and they came back saying: "The creative code doesn't contain the %u macro. We can't substitute the default click-through URL without it." I am somewhat familiar with DoubleClick from a web developer's POV; I have inserted DoubleClick DART tags before and have even implemented the ad tool for publishers. I have not, however, actually created an ad for the DoubleClick network before. I assume the publisher needs these tags to track clicks and hence charge us. However, they have not responded to my questions about this. Are macros something I can just add to, or replace, the existing links with, or do I need to completely set up the ad with DoubleClick — a big issue in the short term, given we do not have an advertiser's account set up with them. Thanks in advance
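
    For what it's worth, in DART-style creatives the %u macro sits where the click-through URL would go, so the ad server can substitute its tracking redirect (or the default click-through URL, per the publisher's message) at serve time. A rough sketch of what they may be expecting — a guess; confirm the exact token and placement with the publisher, since different DoubleClick products use different macros:

        <a href="%u" target="_blank">
          <img src="http://yourserver.example/ad-creative.png" alt="Ad">
        </a>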


  • Oracle Solaris Remote Lab (OSRL) Fact Sheet

    - by user13333379
    The Oracle Solaris Remote Lab allows independent software vendors (ISVs) to test and qualify their applications in a self-service Solaris cloud. ISVs who are Oracle Partner Network Gold members with a specialization in the Solaris knowledge zone can apply for free access in OPN. The lab offers the following features to its users:

    Lifetime of project: 45 days (extensions granted on demand)
    Up to 5 virtual machines in a private network
    Virtual machine technology: Solaris zones

    Resources per VM:
    Processor support: SPARC or x86
    OS version: Oracle Solaris 11.0
    4 GB physical memory
    4 GB swap space
    10 GB local filesystem storage
    10 GB network filesystem (NFS) mounted on all virtual machines

    Networking configuration:
    The only external network routes are to the partner's other virtual machines
    No network routing to the Internet
    The SMB (CIFS) sharing protocol is not available between virtual machines

    Device access:
    Applications that assume the existence of /devices will not run in a virtual machine
    Applications that use eeprom to modify SPARC eeprom settings will not run in a virtual machine
    The following utilities do not work properly in virtual machines: add_drv, disks, prtconf, prtdiag, rem_drv

    Access technology: Secure Global Desktop, file upload and download
    Root access within the VM

    Available VM templates (both processor architectures):
    Oracle Database 11g Release 2 (11.2.0.3) for Solaris with Oracle Enterprise Manager 11g
    WebLogic 12c
    SAMP: Apache HTTP server, PHP, MySQL, phpMyAdmin
    On all templates and images: Oracle Solaris Studio 12.3 for application development

    More resources:
    Online application for Oracle Solaris Remote Lab
    Developer webinar about the Oracle Solaris Remote Lab
    Everything an Oracle Solaris Developer needs...



  • CodePlex Daily Summary for Thursday, June 05, 2014

    Popular Releases

    51Degrees - Device Detection and Redirection: 3.1.2.3: Version 3.1 highlights: Device detection algorithm is over 100 times faster. Regular expressions and Levenshtein distance calculations are no longer used. The device detection algorithm performance is no longer limited by the number of device combinations contained in the dataset. Two modes of operation are available: Memory – the detection data set is loaded into memory and there is no continuous connection to the source data file. Slower initialisation time but faster detection performanc...

    CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.27.0: CodeMap now indicates the type name for all members. Implemented running scripts 'as administrator': just add '//css_npp asadmin' to the script and run it as usual. 'Prepare script for distribution' now aggregates script dependency assemblies. Various improvements in CodeSnippet, Autocompletion and MethodInfo interactions with each other. Added printing line number for the entries in CodeMap (subject of configuration value). Improved debugging step indication for classless scripts ...

    Load Runner - HTTP Pressure Test Tool: Load Runner 1.1: 1. Added support for measuring totals in time / hits / traffic / requests. 2. Abstracted log; default log to console / text file. 3. Refactored code.

    Kartris E-commerce: Kartris v2.6003: Bug fixes: Fixed issue where category could not be saved if parent categories not altered. Updated split string function to 500 chars, otherwise problems with attribute filtering in PowerPack. Improvements: If a user has group pricing below that available through qty discounts, that will show in place of the qty discount.

    MagicaVoxel: MagicaVoxel Renderer ver 0.01: MagicaVoxel Renderer (Win 0.01).

    ClosedXML - The easy way to OpenXML: ClosedXML 0.72.3: 70426e13c415 ClosedXML for .Net 4.0 now uses Open XML SDK 2.5; b9ef53a6654f Merge branch 'master' of https://git01.codeplex.com/forks/vbjay/closedxml; 727714e86416 Fix range.Merge(Boolean) for .Net 3.5; eb1ed478e50e Make public range.Merge(Boolean checkIntersects); 6284cf3c3991 More performance improvements when saving.

    MCE Controller: MCE Controller V1.8.6: BETA. Adds option to disable all internal commands. If the "Disable Internal Commands" checkbox in settings is checked, MCE Controller will disable all internally defined commands (e.g. VK key codes, single characters, mouse commands, etc...) and only respond to commands defined in the MCEControl.commands file. Attempts to fix inability for clients to reconnect after they forcibly close the connection. Build 1.8.6.37445.

    Visual F# Tools: Daily Builds Preview 06-04-2014: This preview is released for use under a proprietary license.

    SEToolbox: SEToolbox 01.032.018 Release 1: Added ability to merge/join two ships, regardless of origin. Added Language selection menu to set display text language (for SE resources only), and fixed inherent issues. Added full support for Dedicated Servers, allowing use on PC without Steam present, and only the Dedicated Server install. Added Browse button for user-specified save game selection in Load dialog. Installation of this version will replace the older version.

    Service monitor: Version 3.1: Added the ability of the tool to restart itself in 'Admin' mode without UAC prompt. Note that this only works after the tool has run in 'Admin' mode at least once before. This is only supported on Vista, Windows 7 or later. See my blog post on how it is done.

    DNN Blog: 06.00.07: Highlights: Enhancement: Option to show management panel on child modules. Fix: Changed SPROC and code to ensure the right people are notified of pending comments (CP-24018). Fix: Notification actions authentication now assumes the right module id, so these will work from the messaging center (CP-24019). Fix: Categories in a post would not save when no categories and no tags were selected.

    TEncoder: 4.0.0: Added: Video downloader. Added: Total progress will be updated more smoothly. Added: MP4Box progress will be shown. Added: A tool to create a GIF image from video. Added: An option to disable trimming. Added: Audio track option won't be used for MPEG sources by default. Fixed: Subtitle position wasn't used. Fixed: Duration info in the file list wasn't updated after trimming. Updated: FFMpeg.

    QuickMon: Version 3.14 (Pie release): This is unofficially the 'Pie' release. There are two big changes. 1. 'Presets' - basically templates. Future releases might build on this to allow users to add more presets. 2. The MSI installer now allows you to choose components (in case you don't want all collectors etc.). This means you don't have to download separate components anymore (AllAgents.zip still included in case you want to use them separately). Some other changes: 1. Added/changed default file extension for monitor packs to *.qmp (...

    VeraCrypt: VeraCrypt version 1.0d: Changes between 1.0c and 1.0d (03 June 2014): Correct issue while creating hidden operating system. Minor fixes (look at git history for more details).

    Keepass2Android: 0.9.4-pre1: Added plug-in support: see settings for how to get plug-ins! Published QR plug-in (scan passwords, display passwords as QR code, transfer entries to other KP2A devices). Published InputStick plugin (transfer credentials to your PC via Bluetooth - requires InputStick USB stick). Third-party apps can now simply implement querying KP2A for credentials. Are you a developer? Please add this to your app if suitable! Added TOTP support (compatible with KeeOTP and TrayTotp). App should no l...

    Microsoft Web Protection Library: AntiXss Library 4.3.0: Download from nuget or the Microsoft Download Center. This release finally addresses the over-zealous behaviour of the HTML Sanitizer, which should now function as expected once again. HTML encoding has been changed to safelist a few more characters for webforms compatibility. This will be the last version of AntiXSS that contains a sanitizer. Any new releases will be encoding libraries only. We recommend you explore other sanitizer options, for example AntiSamy https://www.owasp.org/index....

    Z SqlBulkCopy Extensions: SqlBulkCopy Extensions 1.0.0: SqlBulkCopy Extensions provide must-have methods with outstanding performance missing from the SqlBulkCopy class, like Delete, Update, Merge, Upsert. Compatible with .NET 2.0, SQL Server 2000, SQL Azure and more! Bulk methods: BulkDelete, BulkInsert, BulkMerge, BulkUpdate, BulkUpsert. Utility methods: GetSqlConnection, GetSqlTransaction. You like this library? Find out how and why you should support Z Project. Become a Member.

    Tweetinvi a friendly Twitter C# API: Tweetinvi 0.9.3.x: Timelines: Added all the parameters available from the Timeline endpoints in Tweetinvi. This is available for HomeTimeline, UserTimeline, MentionsTimeline.

        // Simple query
        var tweets = Timeline.GetHomeTimeline();
        // Create a parameter for queries with specific parameters
        var timelineParameter = Timeline.CreateHomeTimelineRequestParameter();
        timelineParameter.ExcludeReplies = true;
        timelineParameter.TrimUser = true;
        var tweets = Timeline.GetHomeTimeline(timelineParameter);

    Tweetinvi 0.9.3.1...

    Sandcastle Help File Builder: Help File Builder and Tools v2014.5.31.0: General information. IMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right-click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This release completes removal of the branding transformations and implements the new VS2013 presentation style that utilizes the new lightweight website format. Several breaking cha...

    Magick.NET: Magick.NET 6.8.9.101: Magick.NET linked with ImageMagick 6.8.9.1. Breaking changes: Int/short Set methods of WritablePixelCollection are now unsigned. The Q16 build no longer uses HDRI; switch to the new Q16-HDRI build if you need HDRI.

    New Projects

    12306helper: 12306 ????
    2112110044: dasda
    2112110202: asdfsdfsdfsd
    2112110298: TieuDan
    2112110315: vftgcdxfr
    Baidu PCS: BaiduPCS
    Excel2Sobek: Here the project Excel2Sobek is hosted. Excel2Sobek is a preprocessor in Excel's VBA for the SOBEK 2 hydrological and hydraulic simulation model by Deltares.
    Help Desk Pro: Manage a customer service business
    MARGYE: MARGYE CMS CMR E-Commerce
    MemberBS: MemberBS20140605
    MiKroTik API: An API for connecting your application to a MikroTik router.
    MiniProfiler + Log4Net: Use MiniProfiler and Log4Net in an ASP.NET project
    Store Apps Unity App Base for Prism: This simple library provides an Application Base for a Windows Store application that automatically connects Unity to provide injection as the container.
    SuperCaptcha for MVC: Custom captcha control for MVC
    TC760240: myappwithfailures
    Windows Phone 8 Dilbert Comic Reader: Basic WP8.1 app that displays comics from http://www.dilbert.com.
    x86 Proved: Prove what you run!


  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx

    What's the first thing you hear about NoSQL databases? That they lose your data? That there are no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain kind of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well! In the spirit of exploration, let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred. Let's look at the problem without the context of any database engine in mind. What would you do? How would you ensure that the amount transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options: 1. Prevent any change to A's account while the transfer is taking place. That boils down to locking. 2. Apply the change, and allow A's balance to go below zero. Charge person A some interest on the negative balance. Not friendly, but certainly a choice. 3. Do neither. Options 1 and 2 are difficult to attain in the NoSQL world. Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: a ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers. And entries in ledgers - as it turns out - don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry in the ledger stating an addition of said amount to B's account. For the sake of space-saving, this can happen in a single entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}. The implication of the original question - "how do you enforce the non-negative balance rule" - then boils down to: 1. Insert an entry in the ledger. 2. Run validation of recent entries. 3. Insert a reverse entry to roll back the transaction if validation failed. What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For the sake of efficiency, one can roll up transactions and "close the book" on them with a pseudo-entry stating the balance as of midnight or something. This lets you avoid doing math on the fly over too many transactions: you simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil.

    Back to some nagging questions, though: "But Mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more accurate to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. A competition between two concurrent updates is completely coherent, and the writes will be serialized; they will not scribble on the same document at the same time. In our case - in choosing a ledger approach - we're not even trying to "update" a document, we're simply adding a document to a collection. So there goes the "no transactions" issue. Now let's turn our attention to consistency. What you should know about Mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be a replication lag such that a reader going to one of the replicas still sees an "old" state of a collection or document. But in our ledger case, things fall nicely into place: run your validation against the writable instance. It is guaranteed to have a ledger either with (after) or without (before) the ledger entry that got written. No funky states. Again, writing to the ledger *adds* a document, so there's no inconsistent document state to be had either way. Next, we might worry about data loss. Here, Mongo offers several write concerns. A write concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile is to say you don't care. In that case, Mongo would just accept your write command and say back "thanks" with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but actually not written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes in a poll regarding how cute a furry animal is, but not so good for business. There are several other write concerns, varying from flushing the write to the disk of the writable instance, to flushing to disk on several members of the replica set, a majority of the replica set, or all of the members of a replica set. The first choice is the quickest, as no network coordination is required beyond the main writable instance. The others impose extra network and time cost. Depending on your tolerance for latency and read lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance. The record is on disk at that point. From that point on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is no different from a relational database at that point.

    Where does this leave us? Oh, yes. Eventual consistency. By now, we've ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update, and then gone to a replica and looked at the ledger there before the transaction replicated. Here are two options to deal with this. Similar to write concerns, Mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it burdens the one instance and doesn't make use of the other read-only servers. But this choice can be made on a query-by-query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then, if that same app is going to immediately ask "are we there yet?", we'll query that same writable instance. But B and anyone else in the world can just chill and read from a read-only instance. They have no basis to expect that the ledger has just been written to. So as far as they know, the transaction hasn't happened until they see it appear later. We can further relax the demand by creating application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that all databases be locked just to paint an "all done" on screen. People understand. They were trained by many online businesses already that placing an order does not mean the product is already outside your door waiting (yes, I know, large retailers are working on it... but we're not there yet). The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. The way that works is simply adding some logic such that the query against the ledger never nets a transaction newer than, say, 15 minutes old whose validation flag is not set. This buys us time in two ways: replication can catch up to all instances by then, and validation rules can run and determine whether this transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time as, or 1 ms after, the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change. The two transactions (attempted / reverted) would be visible, since we do actually account for the attempt. Hold on a second. There's a hole in the story: what if several transfers from A to some accounts are registered, and two independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient funds, even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is: don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes, etc.). We divide the account number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks. So where are we now with the nagging questions? "No joins": Huh? What are those for? "No transactions": You mean no cross-collection and no cross-document transactions? Granted - but you don't always need them either. "No hope for real applications": well... There are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.
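
    To make the ledger concrete, a minimal sketch in the mongo shell (collection and field names are invented for illustration):

        // One atomic document per transfer; never updated after insert.
        db.ledger.insert({ ts: new Date(), from: "A", to: "B", amount: 100, validated: false });

        // Validation: net A's credits against its debits.
        db.ledger.aggregate([
          { $match: { $or: [ { from: "A" }, { to: "A" } ] } },
          { $group: { _id: null,
                      balance: { $sum: { $cond: [ { $eq: [ "$from", "A" ] },
                                                  { $multiply: [ "$amount", -1 ] },
                                                  "$amount" ] } } } }
        ]);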


  • What does "fully supported" mean in context of Radeon Opensource Video Driver?

    - by stevecoh1
    UPDATE: This is not a request for support of my specific issue. Details of that issue are here: How to recover from bad upgrade to 13.04 (Unity very slow). I have "solved" that issue, for the time being anyway, by loading alternative lighter-weight desktops. This question was opened specifically to question the meaning of the documentation at https://help.ubuntu.com/community/RadeonDriver. END OF UPDATE There it is, in black and white: https://help.ubuntu.com/community/RadeonDriver "Fully Supported: All these Radeon (HD) cards and derivatives have good 3D acceleration support. This is not an exhaustive list: ... RV610/RV630 Radeon HD 2400/2600/2700/4200/4225/4250" Yet in my case (the HD 2400) this proves manifestly untrue, at least if "fully supported" means sufficient to run Unity in Ubuntu 13.04. It runs all the applications I can launch under Unity, but Unity itself is unbearably slow. It's quite striking, really. Click on the Dash - go get a cup of coffee. Type a key in the Unity search box, wait five seconds for it to appear. Type Alt-Tab and wait five seconds for the screen to finish painting. None of these issues appear outside of Unity components. As you all know, there are complaints about slow Unity performance all over the Internet. Shouldn't this page somehow address this issue, especially if "fully supported" doesn't mean sufficient to run the default modern Ubuntu release? What does "fully supported" mean?
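
    As an aside, two quick checks separate "the driver provides 3D" from "the GPU can drive Unity" (paths are from this Ubuntu era and may have moved since):

        glxinfo | grep "direct rendering"      # "Yes" means 3D acceleration is active
        /usr/lib/nux/unity_support_test -p     # Unity's own capability self-test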


  • Can not login Dashboard / Unable to find the server at mykeystoneurl

    - by neo0
    I installed the Dashboard following this guide: http://wiki.openstack.org/OpenStackDashboard Everything went fine, but when I run the server, I cannot log in with the username and password from the DATABASES config in local_settings.py. Here's my config:

        DATABASES = {
            'default': {
                'ENGINE': 'django.db.backends.mysql',
                'NAME': 'dashboarddb',
                'USER': 'nova',
                'PASSWORD': 'nova',
                'HOST': 'localhost',
                'default-character-set': 'utf8'
            },
        }

    When I run the Dashboard server and enter the username + password, it returns this error in the browser:

        Unable to find the server at mykeystoneurl (HTTP 400)

    And on the command line:

        DEBUG:openstack_dashboard.settings:Running in debug mode without debug_toolbar.
        DEBUG:openstack_dashboard.settings:Running in debug mode without debug_toolbar.
        Validating models...
        0 errors found
        Django version 1.3.1, using settings 'openstack_dashboard.settings'
        Development server is running at http://0.0.0.0:8888/
        Quit the server with CONTROL-C.
        Request returned failure status.
        Traceback (most recent call last):
          File "/home/us/horizon/.venv/src/python-keystoneclient/keystoneclient/client.py", line 121, in request
            body = json.loads(body)
          File "/usr/lib/python2.7/json/__init__.py", line 326, in loads
            return _default_decoder.decode(s)
          File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
            obj, end = self.raw_decode(s, idx=_w(s, 0).end())
          File "/usr/lib/python2.7/json/decoder.py", line 384, in raw_decode
            raise ValueError("No JSON object could be decoded")
        ValueError: No JSON object could be decoded
        [06/Mar/2012 15:20:03] "POST /auth/login/ HTTP/1.1" 200 3735

    I also tried logging in as "admin" with password "password" or "secrete", but it didn't work. What's wrong? Thank you!
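
    The "mykeystoneurl" in the error suggests the Keystone endpoint placeholder was never replaced: the DATABASES block only backs Django sessions, while logins are checked against Keystone. A sketch of the relevant local_settings.py entries (host and port are assumptions for a typical single-node install):

        OPENSTACK_HOST = "127.0.0.1"
        OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
        OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"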


  • Disk Space Full

    - by Loki
    Setting up an Ubuntu 10.04 server: the / filesystem shows full under df, but du does not account for the space used. This machine has several mounts to Gluster filesystems. I have tried a forced fsck, to no avail.

        ~# df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/md0              141G  132G     0 100% /
        none                  3.0G  224K  3.0G   1% /dev
        none                  3.0G     0  3.0G   0% /dev/shm
        none                  3.0G   76K  3.0G   1% /var/run
        none                  3.0G     0  3.0G   0% /var/lock
        none                  3.0G     0  3.0G   0% /lib/init/rw
        /dev/sdb1             9.0T  7.1T  1.9T  80% /brick1
        /dev/sdb2             9.0T  7.9T  1.1T  88% /brick2
        localhost:/sanvol09   385T  330T   56T  86% /mnt/sanvol09   <- GlusterFS uses local software to contact the DFS

    I've attempted a tune2fs and the same issue arises.

        # du -h --max-depth=1 --one-file-system /
        4.0K  /selinux
        0     /proc
        47M   /boot
        31M   /mnt
        8.0K  /brick1
        8.0K  /brick2
        391M  /lib
        4.0K  /opt
        7.4M  /bin
        0     /sys
        379M  /var
        5.6M  /etc
        16K   /lost+found
        43M   /root
        4.0K  /srv
        5.7M  /home
        4.0K  /media
        7.0M  /sbin
        0     /dev
        4.0K  /tmp
        4.0K  /cdrom
        631M  /usr
        1.6G  /

    More info:

        # df -ih
        Filesystem           Inodes IUsed IFree IUse% Mounted on
        /dev/md0               9.0M   91K  8.9M    1% /
        none                   746K   770  745K    1% /dev
        none                   747K     1  747K    1% /dev/shm
        none                   747K    32  747K    1% /var/run
        none                   747K     1  747K    1% /var/lock
        none                   747K     3  747K    1% /lib/init/rw
        /dev/sdb1              583M  1.8M  581M    1% /brick1
        /dev/sdb2              583M  1.9M  581M    1% /brick2
        localhost:/sanvol09     25G   76M   25G    1% /mnt/sanvol09

    The final question: df shows 100% used, and it's not. Any other known fixes?
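
    Two classic causes of df and du disagreeing this badly (suggestions, not a diagnosis): space pinned by deleted-but-still-open files, which du cannot see, and files written into a mount-point directory before the filesystem was mounted over it. A quick check for the first (needs root):

        lsof +L1                              # open files with link count 0, i.e. deleted
        lsof 2>/dev/null | grep '(deleted)'   # alternative phrasing of the same check

    For the second, stopping the Gluster daemon, unmounting /brick1, /brick2 and /mnt/sanvol09, and re-running du against / would reveal anything hiding underneath the mount points.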


  • Moving a Cube from a GUI texture on iOS [on hold]

    - by London2423
    I really hope someone can help me with this, since I have been working on it for two days without any result. What I am trying to achieve is to move a GameObject when a GUITexture is touched on an iPhone. The GameObject to be moved is named Cube. The Cube has a script named "Left"; supposedly, when it is "called" from the GUITexture, the Cube should move left. I hope that's clear: I want to "activate" the script on the GameObject from the GUITexture. I tried to use SendMessage without any joy as well, so I am using GetComponent. This is the script "inside" the GUITexture, using Unity and C#:

        // script on the GUITexture, meant to make the Cube move left when touched
        void Awake() {
            left = Cube.GetComponent<Left>().enable = true;
        }

        void Start() {
            Cube = GameObject.Find("Cube");
        }

        void Update() {
            // loop through all the touches on the screen
            for (int i = 0; i < Input.touchCount; i++) {
                // execute this code for current touch (i) on the screen
                if (this.guiTexture.HitTest(Input.GetTouch(i).position)) {
                    // if current touch hits our guiTexture, run this code
                    if (Input.GetTouch(i).phase == TouchPhase.Began)
                        Cube.GetComponent<Left>(); // move the cube object
                    if (Input.GetTouch(i).phase == TouchPhase.Ended)
                        return;
                    if (Input.GetTouch(i).phase == TouchPhase.Stationary)
                        Cube.GetComponent<Left>(); // if current finger is stationary, run this code
                }
            }
        }

    This is the script on the GameObject named "Cube" that is activated from the GUITexture; when activated, it should allow the Cube to move left:

        public class Left : MonoBehaviour {
            // Use this for initialization
            void Start() {
            }

            // Update is called once per frame
            void OnMousedown() {
                transform.position += Vector3.left * Time.deltaTime;
            }
        }

    Before writing here I searched all the documentation, tutorial videos, and forums, but I still don't understand where my mistake is. May someone please help me? Thanks, CL
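
    For what it's worth, a minimal working arrangement (a sketch, not the poster's code): GetComponent only fetches the component, it doesn't run anything, so one option is to put the movement in Left.Update(), keep the component disabled, and let the touch toggle it:

        // Left.cs - attached to the Cube, with the component unchecked (disabled) in the Inspector
        using UnityEngine;

        public class Left : MonoBehaviour {
            void Update() {
                // runs every frame while the component is enabled
                transform.position += Vector3.left * Time.deltaTime;
            }
        }

        // MoveButton.cs - attached to the GUITexture (hypothetical name)
        using UnityEngine;

        public class MoveButton : MonoBehaviour {
            GameObject cube;

            void Start() { cube = GameObject.Find("Cube"); }

            void Update() {
                for (int i = 0; i < Input.touchCount; i++) {
                    if (guiTexture.HitTest(Input.GetTouch(i).position)) {
                        // enabled while the finger is down, off when it lifts
                        cube.GetComponent<Left>().enabled =
                            Input.GetTouch(i).phase != TouchPhase.Ended;
                    }
                }
            }
        }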


  • Where to place the R code for R+Sweave+LaTeX workflow

    - by claytontstanley
    I spent the last week learning 3 new tools: R, Sweave, and LaTeX. One question that came to my mind though when working through my first project: Where do I place the majority of the R code? The tutorials that I read online placed the majority of the R code in the LaTeX .Rnw file. However, I find having a bunch of R calculations in the LaTeX file distracting. What I do find extremely helpful (of course) is to call out to R code in the LaTeX file and embed the result. So the workflow I've been using is to place 99% of my R code in my .R file. I run that file first, save a bunch of calculations as objects, and output the .Rout file once finished (to save the work). Then when running Sweave, I load up that .Rout file, so that I have the majority of my calculations already completed and in the Sweave R session. Then my LaTeX callouts to R are quite simple: Just give me the XTable stored in 'res.table', or give me the result of an already-computed calculation stored in the variable 'res'. So I push towards the minimal amount of R code in the LaTex file possible, to achieve the desired result (embedding stats results in the LaTeX writeup). Does anyone have any experience with this approach? I'm just worried I might run into trouble further down the line, when I start really trying to load up and leverage this workflow.
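
    For readers unfamiliar with the mechanics, the split described above can be expressed with a couple of minimal chunks in the .Rnw file (a sketch; file and object names are placeholders, and it assumes the standalone .R script saved its objects with save() rather than relying on the .Rout listing):

        <<setup, echo=FALSE>>=
        load("analysis.RData")   # objects saved by the standalone .R script
        library(xtable)
        @

        <<results=tex, echo=FALSE>>=
        print(xtable(res.table, caption = "Main results"))
        @

        The mean effect was \Sexpr{round(res, 2)}.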


  • Advantages of relational databases over VSAM, ISAM and hierarchical data stores

    - by llaszews
    When migrating companies from legacy environments to the cloud, you invariably run into older hierarchical, flat-file, VSAM, ISAM and other legacy data stores. There are many advantages to moving these databases into a relational database structure. The most important is that most cloud providers run on relational database models; AWS, for example, supports Oracle, SQL Server, and MySQL. The top three other reasons for moving to a relational database are: 1. Data access - thousands of database access tools, from query creation to business intelligence. 2. Management and monitoring - hundreds of tools for management and monitoring of the database. 3. Leverage all the free tools from relational database vendors. Free Oracle database tools include: Application Express - WYSIWYG browser-based application development and deployment; SQL Developer - SQL and PL/SQL development, plus database object maintenance. What is interesting is that Big Data NoSQL databases and XML databases are taking us back to the days of VSAM (key-value stores, now NoSQL) and IMS (hierarchical, now XML databases).


  • libgdx actors and instant actions

    - by vaati
    I'm having trouble with actors and actions. I have a list of actors; each has either no action or one sequence action. This sequence action contains either a couple of actions (some instant, some with duration 0), or a couple of actions followed by a parallel action. My problem is the following: some of the instant actions are used to set the position and the alpha of the actor. So when one of the actions is "move to x,y and set alpha to 0", the actor is visible for one frame at 0,0, moves instantly to x,y for the next frame, and then disappears. Though this behaviour is to be expected, I want to avoid it. How can I achieve that? I tried to intercept the actions before I put the actors in the stage, but I need the stage width/height for some actions. So something like:

        Action actionSequence = actor.getActions().get(0);
        Array<Action> actions = ((SequenceAction) actionSequence).getActions();
        for (Action act : actions) {
            if (act.act(0))
                System.out.println("action " + act.toString() + " successfully run");
            else
                System.out.println("action " + act.toString() + " wasn't instant");
        }

    won't work. It gets even more complicated when an actor can have a repeat action instead of the sequence action (because you then have to run the duration-0 actions once, without repeating them, and only then start the repeat). Any help is appreciated.
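
    One way around the one-frame flash (a sketch using the standard scene2d Actions helpers; adapt the names and values to your setup): apply the "instant" state directly to the actor before its first draw, and only queue the timed part as actions:

        // set the instant state up front, before the first stage.draw()
        actor.setPosition(x, y);
        actor.getColor().a = 0f;   // start fully transparent

        // then schedule only the timed behaviour
        actor.addAction(Actions.sequence(
                Actions.delay(0.5f),
                Actions.parallel(
                        Actions.fadeIn(1f),
                        Actions.moveBy(50f, 0f, 1f))));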


  • Performance of 8-bit operations on a 64-bit architecture

    - by wobbily_col
    I am usually a Python / database programmer, and I am considering using C for a problem. I have a set of sequences, 8 characters long, with 4 possible characters. My problem involves combining sets of these sequences and filtering which sets match a criterion. The combinations of 5 run into billions of rows and take around an hour to run. So I can represent each sequence as 2 bytes. If I am working on a 64-bit architecture, will I gain any advantage by keeping these data structures as 2 bytes when I generate the combinations, or would I be as well off storing them as 8 bytes / a double? (64 bits = 8 x 8.) If I am on a 64-bit architecture, all registers will be 64-bit, so in terms of operations that shouldn't be any faster (please correct me if I am wrong). Will I gain anything from the smaller storage requirements - can I fit more combinations in memory, or will they all take up 64 bits anyway? And finally, am I likely to gain anything coding it in C? I have a first version, which stores each sequence as a small int in a MySQL database. It then self-joins the table to itself a number of times in order to generate all the possible combinations. The performance is acceptable, depending on how many combinations are generated. I assume the database must involve some overhead.
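
    For scale: 4 possible characters means 2 bits per character, so an 8-character sequence packs into exactly 16 bits. A sketch in C (the alphabet mapping is an assumption; any fixed 2-bit encoding works):

        #include <stdint.h>
        #include <string.h>

        /* Pack an 8-character sequence over a 4-letter alphabet into 16 bits.
           Assumes every seq[i] actually appears in alphabet[]. */
        uint16_t pack_sequence(const char seq[8], const char alphabet[4]) {
            uint16_t packed = 0;
            for (int i = 0; i < 8; i++) {
                const char *p = memchr(alphabet, seq[i], 4);
                packed = (uint16_t)((packed << 2) | (uint16_t)(p - alphabet));
            }
            return packed;
        }

    The register width matters less than memory traffic: at 2 bytes per sequence, four times as many values fit in each cache line (and in RAM) than at 8 bytes, which is usually where the win shows up when enumerating billions of combinations.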


  • Is there a secure way to add a database troubleshooting page to an application?

    - by Josh Yeager
    My team makes a product (business management software) that our customers install on their own servers. The product uses a SQL database for data storage and app configuration. There have been quite a few cases where something strange happened in the customer's database (caused by bugs in our app and also sometimes admins who mess with the database). To figure out what is wrong with the data, we have to send SQL scripts to the customer and tell them how to run them on the database server. Then, once we know how to fix it, we have to send another script to repair the data. Is there a secure way to add a page in our application that allows an application admin to enter SQL scripts that read and write directly to the database? Our support team could use that to help customers run these scripts, without needing direct access to the SQL server. My big concerns are that someone might abuse this power to get data they shouldn't have and maybe to erase or modify data that they shouldn't be able to modify. I'm not worried about system admins, because they could find another way to do the same thing. But what if someone else got access to the form? Is there any way to do this kind of thing securely?


  • Need to re-build an application - how?

    - by Tom
    For our main system, we have a small monitor application that sits outside our network and periodically tries to log in to verify the system still works. We have a problem with the monitor though in that the communications component set (Asta 3 inside Delphi applications) doesn't always connect through. Overall, I'd say it's about 95% reliable, but that other 5% kills the monitor since it will try to log in and hang on the connection attempt (no timeout in the component). This really isn't an issue on the client side of the system since the clients don't disconnect and reconnect repeatedly on the same application instance, but I need a way to make sure the monitor stays up and continues working even when the component fails on a run. I have a few ideas as to which way to have the program run, the main idea being to put the communications inside a threaded data module so that if one thread crashes then another thread can test later and the program keep going. Does this sound like a valid way to go? Any other ideas how to ensure a reliable monitoring application with a less than 100% reliable component? Thanks. P.S. Not sure these tags are the most appropriate. Tried including "system-reliability" as one, but not high enough rep to create.


  • Wireless drops on HP ENVY dv6 with RT3290 wireless, worked without problem prior to upgrading to Ubuntu 13.10, can it be fixed?

    - by Tim
    I have an HP ENVY dv6 Notebook PC with an AMD A10 quad core and RT3290 wireless. Since I upgraded from Ubuntu 13.04 to 13.10, the wireless connects but then drops after a few minutes or longer, whether or not I am running openconnect to get through a VPN. If I attempt to run a remote X client (e.g. a remote xterm), it drops. If I don't run an X client, it disconnects after a while, requiring a reload of the driver and a reconnect. Wireless info:

        sudo lshw -c network
        *-network
             description: Wireless interface
             product: RT3290 Wireless 802.11n 1T/1R PCIe
             vendor: Ralink corp.
             physical id: 0
             bus info: pci@0000:02:00.0
             logical name: wlan0
             version: 00
             serial: 68:94:23:a7:09:cb
             width: 32 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=rt2800pci driverversion=3.11.0-12-generic firmware=0.37 ip=192.168.1.115 latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn
             resources: irq:55 memory:f0210000-f021ffff

    I successfully built and installed the MediaTek driver, with no luck connecting; then the system hangs on reboot and I have to recover/undo the changes to boot successfully.
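
    One low-risk thing to try first for rt2800pci drops (a suggestion, not a confirmed fix for this chipset): turn off wireless power management, which some 13.10 setups enable by default.

        sudo iwconfig wlan0 power off     # takes effect immediately, lasts until reboot

    If the connection then stays up, making it persistent via a script in /etc/pm/power.d/ (or a NetworkManager powersave setting) would be the follow-up.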


  • Compiling GCC or Clang for thumb drive on OSX

    - by user105524
    I have a MacBook on which I don't have admin rights, and I would like to be able to use either GCC or Clang. Since I lack admin rights, I can't install binutils or a compiler into /usr. My plan is to install both of these (using an old MacBook that I do have admin rights for) onto a flash drive and then run the compiler from there. How would one go about building GCC or Clang so that it can run entirely off a thumb drive? I've tried both but haven't had any success. I've tried defining as many of the directories as possible through configure, but have not been able to build successfully. My current configure script for gcc-4.8.1 is (where USB20FD is the thumb drive):

        ../gcc-4.8.1/configure --prefix=/Volumes/USB20FD/usr \
            --with-local-prefix=/Volumes/USB20FD/usr/local \
            --with-native-system-header-dir=/Volumes/USB20FD/usr/include \
            --with-as=/Volumes/USB20FD/usr/bin/as \
            --enable-languages=c,c++,fortran \
            --with-ld=/Volumes/USB20FD/usr/bin/ld \
            --with-build-time-tools=/Volumes/USB20FD/usr/bin \
            AR=/Volumes/USB20FD/usr/bin/ar \
            AS=/Volumes/USB20FD/usr/bin/as \
            RANLIB=/Volumes/USB20FD/usr/bin/ranlib \
            LD=/Volumes/USB20FD/usr/bin/ld \
            NM=/Volumes/USB20FD/usr/bin/nm \
            LIPO=/Volumes/USB20FD/usr/bin/lipo \
            AR_FOR_TARGET=/Volumes/USB20FD/usr/bin/ar \
            AS_FOR_TARGET=/Volumes/USB20FD/usr/bin/as \
            RANLIB_FOR_TARGET=/Volumes/USB20FD/usr/bin/ranlib \
            LD_FOR_TARGET=/Volumes/USB20FD/usr/bin/ld \
            NM_FOR_TARGET=/Volumes/USB20FD/usr/bin/nm \
            LIPO_FOR_TARGET=/Volumes/USB20FD/usr/bin/lipo \
            CFLAGS=" -nodefaultlibs -nostdlib -B/Volumes/USB20FD/bin -isystem/Volumes/USB20FD/usr/include -static-libgcc -v -L/Volumes/USB20FD/usr/lib " \
            LDFLAGS=" -Z -lc -nodefaultlibs -nostdlib -L/Volumes/USB20FD/usr/lib -lgcc -syslibroot /Volumes/USB20FD/usr/lib/crt1.10.6.o "

    Any obvious ideas about which of these options need to be turned on to install the appropriate files on the thumb drive during installation? What other magic occurs during the Xcode installation that isn't occurring here? Thanks for any suggestions.
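
    It may be worth trying the plain prefixed build before hand-wiring every tool path: configure bakes --prefix into the driver's search paths, and the assembler and linker can come from the build machine's Xcode tools. A sketch, assuming the drive mounts at the same path on both Macs:

        mkdir build && cd build
        ../gcc-4.8.1/configure --prefix=/Volumes/USB20FD/usr --enable-languages=c,c++,fortran
        make && make install

        # on the machine without admin rights:
        export PATH=/Volumes/USB20FD/usr/bin:$PATH

    The main caveat (an assumption to verify): gcc still shells out to the system assembler and linker at compile time, so the target machine needs Xcode's command-line tools present even though gcc itself lives on the drive.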


  • Loading splash screen takes priority over terminal or window manager while running elsa

    - by schonjones
    I recently installed e17 and was trying to set up defaults to use elsa and ecomorph instead of standard compiz, which constantly crashes since updating to 12.04. With elsa installed, the loading screen hangs and never reaches the login; however, I can get to a terminal or the e17 login instead of the standard GDM that usually shows up, and within a second the screen goes back to the loading screen. I can still type and log in, as well as run commands in the terminal, but all I see is the loading screen. Switching between terminals, I can confirm my commands before it switches back to the loading screen. If I remove elsa, the loading screen hangs, but I can get to a terminal login and run lightdm to start my session with no problems. I have multiple DEs installed and am unsure which loading screen is coming up; I think it's the KDE screen. GRUB comes up with a Debian background, if that helps. I'm not sure if I can switch the loading screen and resolve this issue, or if I'm just going to have to scrap using elsa and get lightdm to load on boot again. Elsa would be my preference. I don't have the space to back up my files for a complete reinstall. Please help!
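
    A note that may help with identification: the boot-time loading screen on Ubuntu is normally Plymouth rather than any desktop's splash, so one way to get it out of the picture while debugging (a sketch; back up the file first) is to boot without it:

        # /etc/default/grub - drop "splash" so the login manager isn't hidden behind Plymouth
        GRUB_CMDLINE_LINUX_DEFAULT="quiet"

        sudo update-grub    # regenerate the GRUB config, then reboot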


  • Best Method of function parameter validation

    - by Aglystas
    I've been dabbling with the idea of creating my own CMS for the experience, and because it would be fun to run my website off my own code base. One of the decisions I keep coming back to is how best to validate incoming parameters for functions. This is mostly in reference to simple data types, since object validation would be quite a bit more complex. At first I debated creating a naming convention that would encode what each parameter should be (int, string, bool, etc.); then I also figured I could create options to validate against. But then every function still needs to run some sort of parameter validation that parses the parameter names to determine what the values can be, and then validates against them. Granted, this could be handled by passing the list of parameters to a helper function, but that call still has to happen, and one of my goals is to remove parameter validation from the function itself, so that the function contains only the code that accomplishes its task, without the additional validation code. Is there any good way of handling this, or is it so low-level that parameter validation is typically just done at the start of the function anyway, so I should stick with doing that?
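
    One common pattern for lifting validation out of the function body is a declarative wrapper that checks a parameter specification before the real code runs. A sketch of the idea in Python (names are illustrative, not CMS code):

        def validated(**specs):
            """Check declared parameter types before the wrapped function runs."""
            def wrap(fn):
                def inner(**kwargs):
                    for name, expected in specs.items():
                        if not isinstance(kwargs.get(name), expected):
                            raise TypeError("%s must be %s" % (name, expected.__name__))
                    return fn(**kwargs)
                return inner
            return wrap

        @validated(post_id=int, title=str)
        def update_post(post_id, title):
            # only the real work lives here
            ...

    Calling update_post(post_id=3, title="Hello") passes; update_post(post_id="3", title="Hello") raises TypeError before the body ever runs.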


  • Erlang node acts like it connects, but doesn't [migrated]

    - by Malfist
    I'm trying to set up a distributed network of nodes across a few firewalls, and it's not going well. My application is structured like this: a central server always runs a node ([email protected]), and my co-workers' laptops connect to it on startup. This works if we're all in the office, but if someone is at home, they can connect to the master node but fail to connect to the other nodes in the swarm; i.e., the node connections fail to propagate. To correct this, I've changed epmd's port number and changed the inet_dist_listen ports to known open ports (1755 and 7070, respectively). However, something fishy is going on. I can run net_adm:world() and it reports that it connects to the master node, but when I run nodes() I get an empty list. Same with net_adm:ping('[email protected]'). See:

        Eshell V5.9  (abort with ^G)
        ([email protected])1> net_adm:world().
        ['[email protected]']
        ([email protected])2> nodes().
        []
        ([email protected])3> net_adm:ping('[email protected]').
        pong
        ([email protected])4> nodes().
        []
        ([email protected])5>

    What's going on, and how can I fix it?
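
    For reference, a sketch of pinning the distribution port on every node so a single firewall rule covers it (these are standard kernel parameters; epmd's port, 4369 by default unless overridden, must also be reachable):

        erl -name [email protected] \
            -kernel inet_dist_listen_min 7070 \
            -kernel inet_dist_listen_max 7070

    One thing to keep in mind (a hypothesis consistent with the symptoms, not a confirmed diagnosis): distributed Erlang builds a full mesh, so after connecting to the master, each laptop also tries to reach every other known node directly. If those peer-to-peer connections are blocked by home NATs or firewalls, the connection can be torn down again, which would leave nodes() empty even though ping returned pong.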

