Search Results

Search found 27932 results on 1118 pages for 'finite state machine'.


  • YouTube: Promotional AgroSense Movie

    - by Geertjan
    Here's a cool YouTube promotional movie on AgroSense created by Ordina in the Netherlands. AgroSense is an open source Java system for the precision agriculture industry, which won the IT Environment Award in the Netherlands last week. If your understanding of Dutch limits your appreciation of the movie, here's a rough translation, together with the names of the speakers in the movie: Precision agriculture, an innovative form of agriculture in which local variations in soil, crop, and atmosphere are taken into account, is the high-tech sustainable agriculture of tomorrow. The use of fertilizer, water, and energy can in this way be significantly reduced. "If, ten or twenty years from now, we are to continue having our agricultural industry in good shape, and in a continuing state of health, we'll need to register and work with data, because if we want to enable crops to provide higher value, we'll need to create higher levels of transparency throughout the agriculture chain." - Lenus Hamster, farmer in Nieuwolda, Groningen. "Industry is becoming increasingly data intensive. By combining pragmatic usefulness with innovative sustainability, AgroSense offers the Netherlands the possibility to continue being a leading player in the agrofood sector." - Art Lighthart, Architect at Ordina. AgroSense offers an open source solution in which all services for precision agriculture are brought together. In 2012, co-operation is being sought with organizations to make AgroSense available to around 10,000 Dutch farmers in the arable crop sector. By the way, the last sentence above implies the NetBeans Platform will be used by around 10,000 Dutch farmers.

    Read the article

  • Dell Alps Touchpad not working

    - by ppls
    I have a Dell 17R SE with Ubuntu 13.04. The touchpad is recognized as a PS/2 mouse out of the box, giving just normal touchpad behaviour, but no tap-to-click, no scrolling, etc. Most answers related to that issue suggest trying an ALPS driver from dahetral.com: http://www.dahetral.com/public-download For the installation I followed the steps on this page: https://www.linuxwind.org/html/dell-touchpad-driver-for-ubuntu-13-04.html Now my xinput looks like this:

        Virtual core pointer                    id=2  [master pointer (3)]
           ↳ Virtual core XTEST pointer         id=4  [slave pointer (2)]
           ↳ PS/2 Mouse                         id=13 [slave pointer (2)]
           ↳ AlpsPS/2 ALPS GlidePoint           id=14 [slave pointer (2)]
           ↳ Logitech Bluetooth Mouse M555b     id=16 [slave pointer (2)]

    My touchpad isn't working at all now; only the two hardware buttons for left and right click work. Interestingly, tap-to-click also works, but only in the address bar of Nautilus, nowhere (!) else. What can I do? I would even switch back to the initial state, where at least the basic touchpad functionality worked, if I knew how to get there.

    Read the article

  • Advice: How to convince my newly anointed team lead against writing the code base from scratch

    - by shan23
    I work in a pretty renowned MNC, and the module that I work in has been assigned to a new "lead". The code base is pretty huge (~130K lines or more, with interdependencies on other modules), but stable - some parts have grown ugly over the years, but it's provably in a working state. (Our products have been running on it for years, even new ones.) The problem is, our lead wants to rewrite the code from scratch, to encompass "finer granularity and a proactive design". I know in my gut that's not a very good idea, but how do I convince him/the rest of the team (who are pretty much more senior than me in terms of years of experience), without sounding too pedantic myself (Thou shalt not rewrite, as Joel et al. have clear articles prohibiting it)? I have a good working relationship with the person concerned, and don't want to ruin it, but neither do I want to be party to a decision which would surely plague us for years to come! Any suggestions for a milder, yet effective approach? Even accounts of how you have tackled such a situation to your liking would help me a lot! EDIT: The code base I'm talking about is not a product/GUI, but at kernel level with all the critical functionalities for our product. I hope now you know why I sound so apprehensive!

    Read the article

  • Weird behavior when debugging ASP.NET Web application: cookie expires (1/1/0001 12:00AM) by itself on next breakpoint hit.

    - by evovision
    I'm working on an ajaxified (Telerik AJAX Manager) ASP.NET application using Visual Studio 2010 (running with admin privileges) and IIS 7.5. Basically, everything on the page is inside update panels. As for cookies, I have a custom encrypted "settings" cookie which is added to the Response if it's not there on session start. The application runs smoothly; the problem arose when I started debugging it. Actions: no breakpoints set, F5 - the application starts in debug mode, the browser window loads. I log in to the site, click on controls, all is fine. Next I set *any* breakpoint somewhere in code, break on it, then let it continue running. But once I break again (immediately after the first break) and check the cookie, it has the expired date 1/1/0001 12:00AM and no data in its value property. I was storing the current language there, which was used inside the Page's InitializeCulture event, and obviously an exception was being raised. I spent several hours trying to delete the browser cache, temporary ASP.NET files, etc.; nothing seemed to work. The same application has been tested on exactly the same environment on another PC with no debugging problems there. After all that, I found the solution: Visual Studio generates an additional .suo file for every solution, where additional settings are stored, like UI state, breakpoint info, etc. I deleted it, loaded the project again, and tried debugging - everything is OK now.

    Read the article

  • What's wrong with my ext4 partition?

    - by bumbling fool
    What is wrong with this picture? Top is output from "df -h", bottom is gparted. I suspect I'm missing a lot of free space. No problems other than that (yet). Can somebody suggest the best (non-destructive) way to correct this? sudo dumpe2fs -h /dev/sda3: (source http://pastebin.com/nAvrdT4E) Filesystem volume name: <none> Last mounted on: / Filesystem UUID: 9f6eff64-60d7-4eec-81d5-1e8acd818b38 Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: (none) Filesystem state: clean Errors behavior: Continue Filesystem OS type: Linux Inode count: 1602496 Block count: 6406144 Reserved block count: 320306 Free blocks: 4842284 Free inodes: 1361222 First block: 0 Block size: 4096 Fragment size: 4096 Reserved GDT blocks: 1022 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 8176 Inode blocks per group: 511 RAID stride: 32692 Flex block group size: 16 Filesystem created: Sun Nov 8 18:18:13 2009 Last mount time: Tue Mar 1 01:04:27 2011 Last write time: Mon Feb 28 04:27:34 2011 Mount count: 16 Maximum mount count: 28 Last checked: Thu Feb 24 06:23:39 2011 Check interval: 15552000 (6 months) Next check after: Tue Aug 23 07:23:39 2011 Lifetime writes: 227 GB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 First orphan inode: 268015 Default directory hash: half_md4 Directory Hash Seed: cc101517-e617-482b-a883-a72919419c84 Journal backup: inode blocks Journal features: journal_incompat_revoke Journal size: 128M Journal length: 32768 Journal sequence: 0x001d3000 Journal start: 7787 fdisk and parted output per requests: http://pastebin.com/EGVH7Ken

    Read the article

  • C++ Iterator lifetime and detecting invalidation

    - by DK.
    Based on what's considered idiomatic in C++11:

    1. Should an iterator into a custom container survive the container itself being destroyed?
    2. Should it be possible to detect when an iterator becomes invalidated?
    3. Are the above conditional on "debug builds" in practice?

    Details: I've recently been brushing up on my C++ and learning my way around C++11. As part of that, I've been writing an idiomatic wrapper around the uriparser library. Part of this is wrapping the linked-list representation of parsed path components. I'm looking for advice on what's idiomatic for containers. One thing that worries me, coming most recently from garbage-collected languages, is ensuring that random objects don't just go disappearing on users if they make a mistake regarding lifetimes. To account for this, both the PathList container and its iterators keep a shared_ptr to the actual internal state object. This ensures that as long as anything pointing into that data exists, so does the data. However, looking at the STL (and lots of searching), it doesn't look like C++ containers guarantee this. I have this horrible suspicion that the expectation is to just let containers be destroyed, invalidating any iterators along with them. std::vector certainly seems to let iterators get invalidated and still (incorrectly) function. What I want to know is: what is expected from "good"/idiomatic C++11 code? Given the shiny new smart pointers, it seems kind of strange that the STL allows you to easily blow your legs off by accidentally leaking an iterator. Is using shared_ptr to the backing data an unnecessary inefficiency, a good idea for debugging, or something expected that the STL just doesn't do? (I'm hoping that grounding this in "idiomatic C++11" avoids charges of subjectivity...)
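
    For concreteness, here is a minimal sketch of the shared_ptr-keepalive approach the question describes (PathList is the poster's class; the internals here are hypothetical). Each iterator co-owns the internal state, so dereferencing stays safe even after the container is destroyed - which is exactly the guarantee std::vector does not give:

        #include <memory>
        #include <string>
        #include <vector>

        class PathList {
            std::shared_ptr<std::vector<std::string>> state_ =
                std::make_shared<std::vector<std::string>>();
        public:
            class iterator {
                std::shared_ptr<std::vector<std::string>> state_; // keeps the data alive
                std::size_t index_ = 0;
            public:
                iterator(std::shared_ptr<std::vector<std::string>> s, std::size_t i)
                    : state_(std::move(s)), index_(i) {}
                const std::string& operator*() const { return (*state_)[index_]; }
                iterator& operator++() { ++index_; return *this; }
                bool operator!=(const iterator& o) const { return index_ != o.index_; }
            };
            void push_back(std::string v) { state_->push_back(std::move(v)); }
            iterator begin() const { return iterator(state_, 0); }
            iterator end()   const { return iterator(state_, state_->size()); }
        };

    With this layout, an iterator obtained from a PathList that has since gone out of scope still dereferences safely; the cost is one extra shared_ptr copy per iterator, which is what the "unnecessary inefficiency" question is weighing.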

    Read the article

  • Finish feature reverted commits from develop

    - by marco-fiset
    I am using git as a version control system, and git-flow as the branching model. I started a feature branch some weeks ago in order to keep the system in a clean state while developing that feature. The main development continued on the develop branch, and changes from develop were merged periodically into the feature, to keep it as up to date as possible. However, the time came when the feature was finished, and I used git-flow's finish feature to merge the feature back into develop. The merge completed successfully, but then I found out that some of the commits I made on develop were reverted by the merge commit! Nowhere in develop or in the feature branch were these changes reverted; I can't see any commit that overwrote them. I just can't find anything. The only theory I have for the moment is that git is failing on me, but that would be extremely unlikely. Maybe I made some kind of wrong manipulation that brought this situation about? I can trace back in the history to when the commit was made. I can see that the changes from that commit were reverted by the merge commit. Nowhere in the branch do I see a commit that reverts those changes. Yet they were reverted. How is this even possible?
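
    One thing worth checking is the merge commit's own conflict resolution - a so-called "evil merge" can drop changes without any revert commit existing anywhere. A few commands that make that visible (the merge hash and path are placeholders):

        # Show the merge with a diff against each parent; a change that vanishes
        # here was dropped in the conflict resolution itself.
        git log -m -p <merge-commit> -- path/to/file

        # Or diff the merge result against each parent explicitly:
        git diff <merge-commit>^1 <merge-commit> -- path/to/file
        git diff <merge-commit>^2 <merge-commit> -- path/to/file

        # List every commit that touched the file, on any branch:
        git log --all --oneline -- path/to/file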

    Read the article

  • Connecting Clinical and Administrative Processes: Oracle SOA Suite for Healthcare Integration

    - by Mala Ramakrishnan
    One of the biggest IT challenges facing today’s health care industry is the difficulty finding reliable, secure, and cost-effective ways to exchange information. Payers and providers need versatile platforms for enterprise-wide information sharing. Clinicians require accurate information to provide quality care to patients while administrators need integrated information for all facets of the business operation. Both sides of the organization must be able to access information from research and development systems, practice management systems, claims systems, financial systems, and many others. Externally, these organizations must share claims data, patient records, pharmaceutical data, lab reports, and diagnostic information among third party entities—all while complying with emerging standards for formatting, processing, and storing electronic health records (EHR). Service-oriented architecture (SOA) enables developers to integrate many types of software applications, databases and computing platforms within a particular health network as well as with community, state, and national health information exchanges. The Oracle SOA Suite for healthcare integration is designed to provide healthcare organizations with comprehensive integration capabilities within a unified middleware platform, as well as with healthcare libraries and templates for streamlining healthcare IT projects. It reduces the need for specialized skills and enforces an enterprise-wide view of critical healthcare data.  Here is a new white paper that details more about this offering: Oracle SOA Suite for Healthcare Integration

    Read the article

  • Fixed timestep with interpolation in AS3

    - by Jim Sreven
    I'm trying to implement Glenn Fiedler's popular fixed timestep system, as documented here: http://gafferongames.com/game-physics/fix-your-timestep/, in Flash. I'm fairly sure that I've got it set up correctly, along with state interpolation. The result is that if my character is supposed to move at 6 pixels per frame, 35 frames per second = 210 pixels a second, it does exactly that, even if the framerate climbs or falls. The problem is it looks awful. The movement is very stuttery and just doesn't look good. I find that the amount of time between ENTER_FRAME events, which I'm adding onto my accumulator, averages out to 28.5ms (1000/35) just as it should, but individual frame times vary wildly: sometimes an ENTER_FRAME event will come 16ms after the last, sometimes 42ms. This means that at each graphical redraw the character graphic moves by a different amount, because a different amount of time has passed since the last draw. In theory it should look smooth, but it doesn't at all. In contrast, if I just use the ultra-simple system of moving the character 6px every frame, it looks completely smooth, even with these large variances in frame times. How can this be possible? I'm using getTimer() to measure these time differences - are they even reliable?
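
    For reference, here is the shape of the loop with interpolation as an AS3 sketch (names like prevX, currX and sprite are assumed, not from the question). The detail that absorbs frame-time jitter is that rendering blends between the two most recent simulated states, rather than snapping to the newest one:

        import flash.display.Sprite;
        import flash.events.Event;
        import flash.utils.getTimer;

        var sprite:Sprite = new Sprite();   // stand-in for the character graphic
        addChild(sprite);

        var STEP:Number = 1000 / 35;        // fixed simulation step, in ms
        var accumulator:Number = 0;
        var lastTime:int = getTimer();
        var prevX:Number = 0;
        var currX:Number = 0;

        addEventListener(Event.ENTER_FRAME, onFrame);

        function onFrame(e:Event):void {
            var now:int = getTimer();
            accumulator += now - lastTime;
            lastTime = now;

            while (accumulator >= STEP) {
                prevX = currX;              // keep the previous simulated state
                currX += 6;                 // one fixed step: 6 px per tick
                accumulator -= STEP;
            }

            // Render between the two last states, weighted by the leftover time.
            var alpha:Number = accumulator / STEP;
            sprite.x = prevX + (currX - prevX) * alpha;
        }

    If the stutter persists with this in place, getTimer()'s millisecond resolution is one suspect worth measuring, since a 1ms error at a 28.5ms step is a visible fraction of a pixel-per-frame.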

    Read the article

  • How to subtract 1 from an original count in an ASP.NET gridview

    - by SAMIR BHOGAYTA
    I have a gridview that contains a count (which is Quantity) where I have a button that adds a row under the original row, and I need the subrow's count (Quantity) to subtract one from the original row's Quantity. EX: Before button click: original row = 3. After click: original row = 2, subrow = 1. Code (ASP.NET):

        // FUNCTION : Adds a new subrow
        protected void gvParent_RowCommand(object sender, GridViewCommandEventArgs e)
        {
            if (e.CommandName.Equals("btn_AddRow", StringComparison.OrdinalIgnoreCase))
            {
                // Get the row that was clicked (index 0. Meaning that 0 is 1, 1 is 2 and so on)
                // Objects can be null, Int32s cannot.
                // Int16 = 2 bytes long (short)
                // Int32 = 4 bytes long (int)
                // Int64 = 8 bytes long (long)
                int i = Convert.ToInt32(e.CommandArgument);

                // Create a DataTable based off the view state
                DataTable dataTable = (DataTable)ViewState["gvParent"];

                for (int part = 0; part < dataTable.Rows.Count; part++)
                {
                    int oldQuantitySubtract = Convert.ToInt32(dataTable.Rows[part]["Quantity"]);
                    if (part == i && oldQuantitySubtract > 1)
                    {
                        dataTable.Rows[part]["Quantity"] = oldQuantitySubtract - 1;

                        // Insert a new row at a specific index
                        DataRow dtAdd = dataTable.NewRow();
                        for (int k = 0; k < dataTable.Columns.Count; k++)
                            dtAdd[k] = dataTable.Rows[part][k];
                        dataTable.Rows.InsertAt(dtAdd, i + 1);
                        break;
                        //dataTable.Rows.Add(dtAdd);
                    }
                }

                // Rebind the data
                gvParent.DataSource = dataTable;
                gvParent.DataBind();
            }
        }

    Read the article

  • BI&EPM in Focus - November 2011

    - by Mike.Hallett(at)Oracle-BI&EPM
    Enterprise Performance Management
    - A Thing of Beauty, by Alison Weiss: Avon's enterprise performance management system delivers accurate information and critical insight to managers at every level of the organization.
    - Oracle Crystal Ball Helps Managers Guard Against Volatility, by Alison Weiss
    - The Insight Game, by Aaron Lazenby: Enterprise performance management can deliver insights crucial to navigating the volatility of the global economy - and that's no game of checkers.
    - KPI vs. the Bottom Line, by Edward Roske: For managers, is tracking the key metrics for their departments enough to ensure success for the entire business? The CEO of Oracle partner interRel shares his opinion.
    - Deep Integration, by Aaron Lazenby: The synthesis of Oracle Hyperion applications and core Oracle technologies can deliver deep benefits to analytics-driven businesses.
    - Oracle Crystal Ball: Oracle's #1 solution for risk management
    - Follow EPM documentation at Hyperion EPM Info for news about EPM documentation releases and updates (twitter | facebook | LinkedIn)
    - Whitepaper: Integrating XBRL Into Your Financial Reporting Process (Oracle Hyperion Disclosure Management)
    - Customer stories: StealthGas Inc. Saves 12 Accountant Days Yearly, Validates XBRL-Compliant Financial Filing Data in One Day; Sherwin-Williams Argentina I.C.S.A. Accelerates Budget Preparation Process by 75%; BBDO Germany GmbH Consolidates Financial and Planning Processes for More Than 50 Agencies

    Business Intelligence
    - Webcast Replay: Oracle Data Mining & BI EE - Predictive Analytics (Part 2)
    - Innovation Award Winners - BI/EPM: HealthSouth, State of MD, Clorox Company, Telenor and Dunkin Brands
    - Leeds Teaching Hospitals National Health Service Trust Builds Budget Reports Six Times Faster, Achieves 100% ROI in 12 Months with Oracle Business Intelligence
    - Home Credit Group Consolidates Reporting and Saves Time across All Business Units with Oracle Essbase & OBIEE
    - Autoglass Improves Business Visibility and Services to Customers and Partners with Oracle Business Intelligence

    Events
    - Download Oracle OpenWorld Oct 2011 presentations (select Middleware - BI, or Applications - Hyperion)
    - Oracle Business Analytics Summits: learn about the latest trends, best practices, and innovations in business intelligence, analytics applications, and data warehousing
    - Webcast Nov 15, 9am PST: Running the Last Mile, Beyond Financial Consolidations - Streamlining the Close and Addressing the SEC's XBRL Mandate
    - Webcast Dec 13, 1pm PST: Defining Your Mobile BI Strategy (BICG)
    - New training available: Oracle BI Publisher 11g R1: Fundamentals
    - Webcast Replay: How to Expand the Usage of Analytics in your Organization while Driving Down IT Spend
    - Webcast Replay: Real-Time Decisions (RTD) Updated Use Cases for Ecommerce Personalization in Financial Services & Retail

    Read the article

  • MVC and individual elements of the model under a common base class

    - by Stewart
    Admittedly my experience of using the MVC pattern is limited. It might be argued that I don't really separate the V from the C, though I keep the M separate from the VC to the extent I can manage. I'm considering the scenario in which the application's model includes a number of elements that have a common base class. For example, enemy characters in a video game, or shape types in a vector graphics app. The view wants to render these elements. Of course, the different subclasses call for different rendering. The problem is that the elements are part of the model. Rendering them is conceptually part of the view. But how they are to be rendered depends on parameters of both: Attributes and state of the element are parameters of the model User settings are parameters of the view - and to support multiple platforms and/or view modes, different views may be used What's your preferred way of dealing with this? Put the rendering code in the model classes, passing in any view parameters? Put the rendering code in the view, using a switch or similar to select the right rendering for the model element type? Have some intermediate classes as a model-view interface, of which the model will create objects on demand and the view will then render them? Something else?
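
    For the intermediate model-view interface option listed above, a bare-bones sketch (illustrative names only, not from any framework): the model emits plain render descriptions, and each view interprets them with its own settings, so neither side owns the other's parameters:

        #include <vector>

        // Plain data emitted by the model; the view decides how to draw it.
        struct RenderItem {
            enum Kind { Enemy, Shape } kind;
            float x, y;        // element state from the model
            int variant;       // subclass-specific parameter
        };

        class Element {        // common model base class
        public:
            virtual ~Element() = default;
            virtual RenderItem describe() const = 0;   // model parameters only
        };

        class View {           // one implementation per platform / view mode
        public:
            virtual ~View() = default;
            virtual void render(const RenderItem& item) = 0;  // view parameters live here
        };

        void drawAll(const std::vector<Element*>& model, View& view) {
            for (const Element* e : model)
                view.render(e->describe());
        }

    The trade-off is an extra layer of plain-data objects per frame, in exchange for the model never seeing user settings and the view never switching on concrete model types.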

    Read the article

  • KVM not installed?

    - by NJRandy
    When I run virt-manager and click on the icon to create a new virtual machine, I get an error that KVM is not installed or is not loaded. I use Ubuntu GNOME 14.04. All qemu packages are version 2.0.0+dfsg-2ubuntu1; qemu-kvm and many other qemu packages are installed. libvirt packages: 1.2.2-0ubuntu13.1 (libvirt0, libvirt-bin, libvirt-doc, python-libvirt); virt-manager 0.9.5-1ubuntu3. When I open a terminal and enter lsmod | grep kvm I get nothing returned - no lines showing kvm or kvm_amd, and no error of any kind. Hardware: Tyan S2877 with dual Opteron 285s. I have the latest BIOS and don't see any setting in there to turn virtualization on or off. When I run sudo apt-get -s install qemu-kvm, here are the results: Reading package lists... Done Building dependency tree Reading state information... Done qemu-kvm is already the newest version. The following packages were automatically installed and are no longer required: kde-l10n-engb libgtk2-gladexml-perl libqt4-test libvncserver0 Use 'apt-get autoremove' to remove them. 0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded. @jobin: the problem was my hardware. I just bought it a few months ago, although obviously used LOL
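
    For anyone landing here with the same symptom, a quick way to confirm whether the CPU supports hardware virtualization at all (which matches the poster's own conclusion that the hardware was the problem):

        # 0 means the CPU advertises neither AMD-V (svm) nor VT-x (vmx),
        # so the kvm modules have nothing to load against:
        egrep -c '(vmx|svm)' /proc/cpuinfo

        # Or try loading the module by hand and read the kernel's complaint:
        sudo modprobe kvm_amd
        dmesg | tail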

    Read the article

  • Which Open Source Licenses can address concerns for an Open Source Game Engine?

    - by Chris
    I am on a team that is looking to open source an engine we are building. It's intended as an engine for Online RPG style games. We're writing it to work on both desktops and android platforms. I've been over to the OSI http://opensource.org/licenses/category to check out the most common licenses. However, this will be my first time going into an open source project and I wanted to know if the community had some insight into which licenses might be best suited. Key licensing concerns: Removing or limiting our liability (most already seem to cover this, but stating for completeness). We want other developers to be able to take part or all of our project and use it in their own projects with proper accreditation to our project. Licensing should not hinder someone's ability to quickly use the engine. They should be able to download a release and start using it without needing to wait on licensing issues. Game content (gfx, sound, etc.) that is not part of the engine should be allowed to be licensed separately. If someone is using our engine, they can retain full copy right of their content, including engine generated data. Our primary goal is exposure, which is why we're going open source to start with. Both for the project and for the individuals developing it. Are there any licenses that can require accreditation visible to players? While I'd put our primary goal as exposure, for licensing the accreditation is less of a concern. From what I've read through (and have been able to understand) it doesn't seem like any of the licenses cover anything that is produced by the licensed software. Are there any that state this specifically, or does simply not mentioning it leave it open for other licensing? Are there any other concerns that we should consider? Has anyone had any issues using any of these licenses?

    Read the article

  • How to manage drawing loop when changing render targets

    - by George Duckett
    I'm managing my game state by having a base GameScreen class with a Draw method. I then have (basically) a stack of GameScreens that I render. I render the bottom one first, as screens above might not completely cover the ones below. I now have a problem where one GameScreen changes render targets while doing its rendering. Anything the previous screens have drawn to the backbuffer is lost (as XNA emulates what happens on the Xbox). I don't want to just set the backbuffer to preserve its contents, as I want this to work on the Xbox as well as the PC. How should I manage this problem? A few ideas I've had:

    1. Render every GameScreen to its own render target, then render them all to the backbuffer.
    2. Create some kind of RenderAction queue, where a game screen (and anything else, I guess) could queue something to be rendered to the back buffer. They'd render whatever they wanted to any render target as normal, but if they wanted to render to the backbuffer they'd stick that in a queue, which would get processed once all render-target rendering was done.
    3. Abstract away from render targets and backbuffers, have some way of representing how graphics flow and transform between render targets, and have something manage/work out the correct rendering order (and render targets) given what each rendering process needs as input and what it produces as output.

    I think each of my ideas has pros and cons, and there are probably several other ways of approaching this general problem, so I'm interested in finding out what solutions are out there.
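
    For idea 1, a rough sketch of the composition pass, assuming XNA 4-style APIs, that this runs inside Game.Draw, and that each screen owns a RenderTarget2D (screen.Target is an assumed name):

        foreach (GameScreen screen in screens)             // bottom-most first
        {
            GraphicsDevice.SetRenderTarget(screen.Target); // per-screen RenderTarget2D
            GraphicsDevice.Clear(Color.Transparent);
            screen.Draw(gameTime);                         // free to switch targets mid-draw,
                                                           // as long as it ends on its own
        }

        GraphicsDevice.SetRenderTarget(null);              // back to the backbuffer
        spriteBatch.Begin();                               // default AlphaBlend composites overlays
        foreach (GameScreen screen in screens)             // composite in the same order
            spriteBatch.Draw(screen.Target, Vector2.Zero, Color.White);
        spriteBatch.End();

    Since every target is fully redrawn each frame, the discard-on-set behavior that XNA emulates for the Xbox never bites: nothing relies on a target's contents surviving between frames.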

    Read the article

  • Running multiple Box2D world objects on a server

    - by CharbelAbdo
    I'm creating a multiplayer game using LibGdx (with Box2d) and Kryonet. Since this is the first time I've worked on multiplayer games, I read a bit about server-client implementations, and it turns out that the server should handle important tasks like collision detection, hits, characters dying, etc. Based on some articles (like the excellent Gabriel Gambetta "Fast-Paced Multiplayer" series), I also know that the client should work in parallel to avoid lag while the server responds to commands. Physics-wise, each game will have 2 players, plus any projectiles fired. What I'm thinking of doing is the following:

    1. Create a physics world on the client.
    2. When the game is signaled to start, create the same physics world on the server (without any rendering, obviously).
    3. Whenever the player issues a command (move or fire), send the command to the server and immediately start processing it on the client.
    4. When the server receives the command, it applies it to the server's world (set velocity, etc.).
    5. Every 100ms, the server sends the new state to the client, which corrects what was calculated locally.
    6. Any critical action (hit, death, level up) is calculated only on the server and sent to the client.

    Essentially, I would have a Box2d World object running on the server for each game in progress, in sync with the worlds running on the clients. The alternative would be to do my own calculations on the server instead of relying on Box2D to do them for me, but I'm trying to avoid that. My question is: is it wise to have, for example, 1000 instances of the World object running and executing steps on the server? Tomcat used around 750 MB of memory when trying it without any objects added to the world. Has anybody tried that before? If not, is there any alternative? Google did not help me; are there any guidelines to use when you want physics on both the client and the server? Thanks for any help.
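
    On the world-per-match idea, the stepping loop itself is straightforward; a rough Java sketch, assuming one World per match (GameSession is an illustrative holder for a match's world and its time accumulator, not a LibGdx class):

        private static final float STEP = 1 / 60f;     // fixed simulation step

        void serverTick(float elapsedSeconds) {
            for (GameSession s : activeSessions) {     // one Box2D World per match
                s.accumulator += elapsedSeconds;
                while (s.accumulator >= STEP) {
                    s.world.step(STEP, 6, 2);          // velocity/position iteration counts
                    s.accumulator -= STEP;
                }
            }
        }

    With only two bodies plus projectiles per world, the per-step cost is tiny; the memory per empty World is the main thing to measure, and LibGdx's headless backend (if your version ships it) lets the server run Box2D without any graphics dependency.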

    Read the article

  • Can't update kernel to 2.6.35.27

    - by Uri Herrera
    When I try to update I get the message below; I'm guessing I'm missing something here?

    Filesystem     Type      Size  Used  Avail  Use%  Mounted on
    /dev/sdb6      ext4       43G  7.7G    33G   20%  /
    none           devtmpfs  1.6G  349k   1.6G    1%  /dev
    none           tmpfs     1.6G  5.9M   1.6G    1%  /dev/shm
    none           tmpfs     1.6G  218k   1.6G    1%  /var/run
    none           tmpfs     1.6G     0   1.6G    0%  /var/lock
    /dev/sdb2      fuseblk   258G  198G    60G   77%  /media/Backup
    /dev/sda1      fuseblk   321G  175G   146G   55%  /media/Media
    /dev/sdb1      ext4       96M   84M   6.7M   93%  /boot
    /dev/sdb7      ext4      175G   81G    86G   49%  /home

    Here's the output: Reading package lists... Done Building dependency tree Reading state information... Done The following packages will be REMOVED: linux-image-2.6.35-22-generic 0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded. 5 not fully installed or removed. After this operation, 107MB disk space will be freed. Do you want to continue [Y/n]? y (Reading database ... 282211 files and directories currently installed.) Removing linux-image-2.6.35-22-generic ... Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 2.6.35-22-generic /boot/vmlinuz-2.6.35-22-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 2.6.35-22-generic /boot/vmlinuz-2.6.35-22-generic /etc/default/grub: 23: Syntax error: newline unexpected run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 2 Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-2.6.35-22-generic.postrm line 328. dpkg: error processing linux-image-2.6.35-22-generic (--remove): subprocess installed post-removal script returned error exit status 1 Errors were encountered while processing: linux-image-2.6.35-22-generic E: Sub-process /usr/bin/dpkg returned an error code (1)
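
    The key line is "/etc/default/grub: 23: Syntax error: newline unexpected" - the kernel removal only fails because update-grub chokes on that file. Something along these lines should unblock it:

        # Syntax-check the file without executing it; it names the offending line:
        sudo sh -n /etc/default/grub

        # Fix line 23, then let dpkg finish what it started:
        sudoedit /etc/default/grub
        sudo dpkg --configure -a
        sudo apt-get -f install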

    Read the article

  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx What's the first thing you hear about NoSQL databases? That they lose your data? That there are no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain kind of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well! In the spirit of exploration let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred. Let's look at the problem without any database engine in mind. What would you do? How would you ensure that the amount transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options:

    1. Prevent any change to A's account while the transfer is taking place. That boils down to locking.
    2. Apply the change, and allow A's balance to go below zero. Charge person A some interest on the negative balance. Not friendly, but certainly a choice.
    3. Don't do either.

    Options 1 and 2 are difficult to attain in the NoSQL world. Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: a ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers. And entries in ledgers - as it turns out - don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry in the ledger stating an addition of said amount to B's account. For the sake of space-saving, that can happen using one entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}. The implication of the original question - "how do you enforce a non-negative balance rule" - then boils down to:

    1. Insert an entry in the ledger.
    2. Run validation of recent entries.
    3. Insert a reverse entry to roll back the transaction if validation failed.

    What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For the sake of efficiency, one can roll up transactions and "close the book" with a pseudo entry stating the balance as of midnight or something. This lets you avoid doing math on the fly over too many transactions; you simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil. Back to some nagging questions though: "But Mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more descriptive to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. A competition between two concurrent updates is completely coherent and the writes will be serialized. They will not scribble on the same document at the same time.
    In our case - in choosing a ledger approach - we're not even trying to "update" a document; we're simply adding a document to a collection. So there goes the "no transactions" issue. Now let's turn our attention to consistency. What you should know about Mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be replication lag such that a reader going to one of the replicas still sees the "old" state of a collection or document. But in our ledger case, things fall nicely into place: run your validation against the writable instance. It is guaranteed to have a ledger either with (after) or without (before) the ledger entry written. No funky states. Again, the ledger writing *adds* a document, so there's no inconsistent document state to be had either way. Next, we might worry about data loss. Here, Mongo offers several write-concerns. A write-concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile is to say you don't care. In that case, Mongo would just accept your write command and say back "thanks" with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but actually not written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes on a poll regarding how cute a furry animal is, but not so good for business. There are several other write-concerns, varying from flushing the write to the disk of the writable instance, to flushing to disk on several members of the replica set, a majority of the replica set, or all of the members of a replica set. The first choice is the quickest, as no network coordination is required besides the main writable instance. The others impose extra network and time cost. Depending on your tolerance for latency and read-lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance. The record is on disk at that point. From that point on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is no different from a relational database at that point. Where does this leave us? Oh, yes. Eventual consistency. By now, we've ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update, and then gone to a replica and looked at the ledger there before the transaction replicated. Here are 2 options to deal with this. Similar to write-concerns, Mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it burdens the one instance and doesn't make use of the other read-only servers. But this choice can be made on a query-by-query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then if that same app is going to immediately ask "are we there yet?" we'll query that same writable instance. But B and anyone else in the world can just chill and read from the read-only instance. They have no basis to expect that the ledger has just been written to.
    So as far as they know, the transaction hasn't happened until they see it appear later. We can further relax the demand by creating application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that all databases be locked just to paint an "all done" on screen. People understand. They've been trained by many online businesses already that placing an order does not mean the product is already outside your door waiting (yes, I know, large retailers are working on it... but we're not there yet). The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. The way that works is simply adding some logic such that the query against the ledger never nets a transaction newer than, say, 15 minutes whose validation flag is not set. This buys us time in 2 ways: replication can catch up to all instances by then, and validation rules can run and determine whether this transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time or 1ms after the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change. The 2 transactions (attempted/reverted) would remain visible, since we do actually account for the attempt. Hold on a second. There's a hole in the story: what if several transfers from A to some accounts are registered, and 2 independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient-funds even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is: don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes, etc.). We divide the account number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks. So where are we now with the nagging questions? "No joins": Huh? What are those for? "No transactions": You mean no cross-collection and no cross-document transactions? Granted - but you don't always need them either. "No hope for real applications": well... There are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.
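
    To make the ledger flow concrete, here is an illustrative pymongo sketch (collection and field names are invented for the example; real code would seed accounts with an opening-deposit entry and push the summing into an aggregation):

        from datetime import datetime, timezone
        from pymongo import MongoClient

        db = MongoClient().bank

        def transfer(from_id, to_id, amount):
            # 1. Append the ledger entry - a single-document, atomic write.
            entry = {"ts": datetime.now(timezone.utc),
                     "from": from_id, "to": to_id, "amount": amount}
            db.ledger.insert_one(entry)

            # 2. Validate: A's deposits minus A's withdrawals must stay >= 0.
            credits = sum(e["amount"] for e in db.ledger.find({"to": from_id}))
            debits = sum(e["amount"] for e in db.ledger.find({"from": from_id}))

            # 3. Roll back with a compensating entry - never an update or delete.
            if credits - debits < 0:
                db.ledger.insert_one({"ts": entry["ts"], "from": to_id,
                                      "to": from_id, "amount": amount,
                                      "compensates": entry["_id"]})

    Note that steps 2 and 3 are not atomic as written - which is exactly the race the single-validator-per-account-range scheme above is meant to neutralize.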

    Read the article

  • 'Xojo' is the only application that I can't install

    - by Gichan
    I can't install Xojo. When I click install in the Software Center, it's not progressing. In the terminal it's stuck at: gichan02@gichan02-Latitude-D520:~$ sudo apt-get install xojo [sudo] password for gichan02: Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: xojo-bin The following NEW packages will be installed: xojo xojo-bin 0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded. Need to get 209 MB/209 MB of archives. After this operation, 596 MB of additional disk space will be used. Do you want to continue? [Y/n] Y 0% [Working] Then, after waiting an hour for progress, it says: Failed to fetch https://private-ppa.launchpad.net/commercial-ppa-uploaders/xojo/ubuntu/pool/main/x/xojo/xojo-bin_2013.41-0ubuntu1_i386.deb Could not resolve host: private-ppa.launchpad.net So I added an apt repository for the private PPA: deb https://ging-giana:[email protected]/commercial-ppa-uploaders/xojo/ubuntu trusty main Then when I try apt-get update I get: GPG error: https://private-ppa.launchpad.net trusty Release: The following signatures were invalid: NODATA 2 Then I noticed something in Software Sources, on the "Other software" tab: "Added by software-center; credentials stored in /etc/apt/auth.conf https://private-ppa.launchpad.net/commercial-ppa-uploaders/xojo/ubuntu". So I went to /etc/apt/auth.conf, but it cannot be opened and it is not a keyserver. I unchecked the entry added by software-center, and the GPG error was gone. But then again I found myself at the beginning of the problem: stuck at 0% [Working]. Xojo is the only application that I can't install. Any explanation why it is like that?

    Read the article

  • Broken package for libavcodec54 & libx264-123 in Ubuntu 14.04 LTS

    - by Kachavarapu Ajay
    $ sudo apt-get install -f [sudo] password for ajay: Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: libx264-123 The following NEW packages will be installed: libx264-123 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded. 2 not fully installed or removed. Need to get 0 B/345 kB of archives. After this operation, 1,005 kB of additional disk space will be used. Do you want to continue? [Y/n] y (Reading database ... 166965 files and directories currently installed.) Preparing to unpack .../libx264-123_0.123.2189+git35cf912-1ubuntu4_amd64.deb ... Unpacking libx264-123:amd64 (2:0.123.2189+git35cf912-1ubuntu4) ... dpkg-deb (subprocess): decompressing archive member: lzma error: compressed data is corrupt dpkg-deb: error: subprocess <decompress> returned error exit status 2 dpkg: error processing archive /var/cache/apt/archives/libx264-123_0.123.2189+git35cf912-1ubuntu4_amd64.deb (--unpack): cannot copy extracted data for './usr/lib/x86_64-linux-gnu/libx264.so.123' to '/usr/lib/x86_64-linux-gnu/libx264.so.123.dpkg-new': unexpected end of file or stream Errors were encountered while processing: /var/cache/apt/archives/libx264-123_0.123.2189+git35cf912-1ubuntu4_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1)
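
    The "lzma error: compressed data is corrupt" message points at a damaged file in the apt cache rather than a real packaging problem; the standard remedy is to throw the cached .deb away and fetch it again:

        sudo rm /var/cache/apt/archives/libx264-123_*.deb
        sudo apt-get clean       # or clean the whole cache instead of the rm above
        sudo apt-get update
        sudo apt-get install -f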

    Read the article

  • Cleaning your BizTalk Build Server

    - by Michael Stephenson
    Just a little note for myself, this one. At one of my customers, where it is still BizTalk 2006, one of the build servers is intermittently getting issues, so I wanted to run a script periodically to clean things up a little. The script below is an example of how you can stop Cruise Control and all of the BizTalk services, then clean the BizTalk databases and reset the backup process, and then kick everything off again. This should keep the server a little cleaner and reduce the number of builds that occasionally fail for ad hoc environmental issues.

    REM Server Clean Script
    REM ===================
    REM This script is run to move the build server back to a clean state

    echo Stop Cruise Control
    net stop CCService

    echo Stop IIS
    iisreset /stop

    echo Stop BizTalk Services
    net stop BTSSvc$<Name of BizTalk Host>
    <Repeat for other BizTalk services>

    echo Stop SSO
    net stop ENTSSO

    echo Stop SQL Job Agent
    net stop SQLSERVERAGENT

    echo Clean Message Box
    sqlcmd -E -d BizTalkMsgBoxDB -Q "Exec bts_CleanupMsgbox"
    sqlcmd -E -d BizTalkMsgBoxDB -Q "Exec bts_PurgeSubscriptions"

    echo Clean Tracking Database
    sqlcmd -E -d BizTalkDTADb -Q "Exec dtasp_CleanHMData"

    echo Reset TDDS Stream Status
    sqlcmd -E -d BizTalkDTADb -Q "Update TDDS_StreamStatus Set lastSeqNum = 0"

    echo Force Full Backup
    sqlcmd -E -d BizTalkMgmtDB -Q "Exec sp_ForceFullBackup"

    echo Clean Backup Directory
    del E:\BtsBackups\*.* /q

    echo Start SSO
    net start ENTSSO

    echo Start SQL Job Agent
    net start SQLSERVERAGENT

    echo Start BizTalk Services
    net start BTSSvc$<Name of BizTalk Host>
    <Repeat for other BizTalk services>

    echo Start IIS
    iisreset /start

    echo Start Cruise Control
    net start CCService

    Read the article

  • Ubuntu 12.04 upgrade and thunderbird

    - by Dcm1405
    After applying the suggested updates (179), an error message at the very end of the process suggested that I run apt-get install -f. Since it is a fairly new Ubuntu install (x86), I haven't set up anything in Thunderbird yet. Different error messages (see details) were generated by the -f process: ~$ sudo apt-get install -f Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: thunderbird Suggested packages: latex-xft-fonts The following packages will be upgraded: thunderbird 1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2 not fully installed or removed. Need to get 0 B/20.8 MB of archives. After this operation, 594 kB of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 170457 files and directories currently installed.) Preparing to replace thunderbird 11.0.1+build1-0ubuntu2 (using .../thunderbird_12.0.1+build1-0ubuntu0.12.04.1_i386.deb) ... Unpacking replacement thunderbird ... dpkg-deb (subprocess): data: internal gzip read error: '<fd:4>: invalid code lengths set' dpkg-deb: error: subprocess <decompress> returned error exit status 2 dpkg: error processing /var/cache/apt/archives/thunderbird_12.0.1+build1-0ubuntu0.12.04.1_i386.deb (--unpack): short read on buffer copy for backend dpkg-deb during `./usr/lib/thunderbird/libxul.so' Errors were encountered while processing: /var/cache/apt/archives/thunderbird_12.0.1+build1-0ubuntu0.12.04.1_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • SQL SERVER – Using RAND() in User Defined Functions (UDF)

    - by pinaldave
    Here is the question I received in email. "Pinal, I am writing a function where we need to generate a random password. While writing the T-SQL I faced the following issue: every time I tried to use the RAND() function in my User Defined Function I got the following error:

    Msg 443, Level 16, State 1, Procedure RandFn, Line 7
    Invalid use of a side-effecting operator 'rand' within a function.

    Here is the simplified T-SQL code of the function which I am using:

    CREATE FUNCTION RandFn()
    RETURNS INT
    AS
    BEGIN
    DECLARE @rndValue INT
    SET @rndValue = RAND()
    RETURN @rndValue
    END
    GO

    I must use a UDF, so is there any workaround to use the RAND function in a UDF?" Here is the workaround for how RAND() can be used in a UDF. The scope of this blog post is not to discuss the advantages or disadvantages of the function, or of random functions in general, but just to show how the RAND() function can be used in a UDF. The RAND() function is not directly allowed in a UDF, so we have to find an alternate way to use it. This can be achieved by creating a VIEW which uses the RAND() function, and then using that VIEW in the UDF. Here are the step-by-step instructions. Create a VIEW using the RAND function:

    CREATE VIEW rndView
    AS
    SELECT RAND() rndResult
    GO

    Create a UDF using the same VIEW:

    CREATE FUNCTION RandFn()
    RETURNS DECIMAL(18,18)
    AS
    BEGIN
    DECLARE @rndValue DECIMAL(18,18)
    SELECT @rndValue = rndResult
    FROM rndView
    RETURN @rndValue
    END
    GO

    Now execute the UDF and it will work fine and return a random result:

    SELECT dbo.RandFn()
    GO

    In the T-SQL world, I have noticed that there is more than one solution to every problem. Is there any better solution to this question? Please post it as a comment and I will include it with due credit. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Why is glVertexAttribDivisor crashing?

    - by 2am
    I am trying to render some trees with instancing. This is rather weird: before going to sleep last night I checked the code and it was in a running state, but when I got up this morning, it crashes when I call glVertexAttribDivisor. I haven't changed any code since yesterday. Here is how I am sending data to the GPU for instancing:

        glGenBuffers(1, &iVBO);
        glBindBuffer(GL_ARRAY_BUFFER, iVBO);
        glBufferData(GL_ARRAY_BUFFER, (ml_instance->i_positions.size()*sizeof(glm::vec4)), NULL, GL_STATIC_DRAW);
        glBufferSubData(GL_ARRAY_BUFFER, 0, (ml_instance->i_positions.size()*sizeof(glm::vec4)), &ml_instance->i_positions[0]);

    And then in the vertex specification:

        glBindBuffer(GL_ARRAY_BUFFER, iVBO);
        glVertexAttribPointer(i_positions, 4, GL_FLOAT, GL_FALSE, 0, 0);
        glEnableVertexAttribArray(i_positions);
        glVertexAttribDivisor(i_positions, 1); // **THIS IS WHERE THE PROGRAM CRASHES**
        glDrawElementsInstanced(GL_TRIANGLES, indices.size(), GL_UNSIGNED_INT, 0, TREES_INSTANCE_COUNT);

    I have checked ml_instance->i_positions; it has all the data that needs to render. I have checked the value of i_positions in the vertex shader; it is the same as whatever I have defined there. I am a little out of ideas here - everything looks pretty much fine. What am I missing?
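
    Two cheap sanity checks worth dropping in next to the vertex-specification code above: a hard crash at that call (rather than a GL error) usually means either a bad attribute index or an unresolved GL 3.3 entry point. This assumes a GLEW-style loader and that `program` is the linked shader program used with this draw:

        GLint loc = glGetAttribLocation(program, "i_positions");
        if (loc < 0)
            fprintf(stderr, "i_positions is not an active attribute\n");

        // With extension loaders, the symbol is a function pointer that stays
        // null if it was never resolved for the current context.
        if (glVertexAttribDivisor == NULL)
            fprintf(stderr, "glVertexAttribDivisor was not loaded\n");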

    Read the article

  • Data Structure for Small Number of Agents in a Relatively Big 2D World

    - by Seçkin Savasçi
    I'm working on a project where we will implement a kind of world simulation: there is a square 2D world, agents live in this world, and they make decisions like moving or replicating themselves based on their neighbor cells (the world is a grid) and some extra parameters (which are not based on the state of the world). I'm looking for a data structure to implement such a project. My concerns are: I will implement this 3 times - sequential, using OpenMP, and using MPI - so if I can use the same structure for all three, that would be quite good. The first thing that comes up is keeping a 2D array for the world and storing agent references in it, then simulating the world for each time slice by checking every cell in each iteration and doing further processing if an agent is found in the cell. The downside is: what if I have a 1000x1000 world and only 5 agents in it? It would be overkill for both the sequential and parallel versions to check each cell and look for possible agents in it. I could use a quadtree and store agents in it, but then how can I get the information about neighbor cells? Please let me know if I should elaborate more.
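
    One sketch of the sparse direction (illustrative names): keep the agents in a flat list and index occupied cells in a hash map, so each tick costs O(agents) rather than O(cells), while neighbor lookups stay O(1). The same structure ports to the OpenMP and MPI versions by partitioning the agent list or the key space:

        #include <cstdint>
        #include <unordered_map>
        #include <vector>

        struct Agent { int x, y; };                      // plus decision state

        using CellKey = std::uint64_t;
        inline CellKey key(int x, int y) {               // pack (x, y) into one key
            return (static_cast<CellKey>(static_cast<std::uint32_t>(x)) << 32)
                 | static_cast<std::uint32_t>(y);
        }

        struct World {
            std::vector<Agent> agents;                   // iterate these, not the grid
            std::unordered_map<CellKey, std::size_t> occupied;  // cell -> agent index

            bool neighborOccupied(int x, int y) const {
                for (int dx = -1; dx <= 1; ++dx)
                    for (int dy = -1; dy <= 1; ++dy)
                        if ((dx || dy) && occupied.count(key(x + dx, y + dy)))
                            return true;
                return false;
            }
        };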

    Read the article
