Search Results

Search found 2456 results on 99 pages for 'atomic swap'.

Page 3/99 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Swap implication in Linux and way to increase it

    - by vimalnath
    I ran the top command on a Linux box and got this output:

        [root@localhost ~]# top
        top - 23:38:38 up 361 days, 12:16, 2 users, load average: 0.09, 0.06, 0.01
        Tasks: 129 total, 2 running, 126 sleeping, 1 stopped, 0 zombie
        Cpu(s): 0.0% us, 0.2% sy, 0.0% ni, 96.5% id, 3.4% wa, 0.0% hi, 0.0% si
        Mem: 2074712k total, 1996948k used, 77764k free, 16632k buffers
        Swap: 1052248k total, 1052248k used, 0k free, 331540k cached

    I am not sure what "Swap: ... 0k free" in the last line means. Is it normal for a Linux box to have 0k of free swap? Thanks

    Read the article

  • Atomic UPSERT in SQL Server 2005

    - by rabidpebble
    What is the correct pattern for doing an atomic "UPSERT" (UPDATE where exists, INSERT otherwise) in SQL Server 2005? I see a lot of code on SO (e.g. see http://stackoverflow.com/questions/639854/tsql-check-if-a-row-exists-otherwise-insert) with the following two-part pattern:

        UPDATE ... FROM ... WHERE <condition>  -- race condition risk here
        IF @@ROWCOUNT = 0
            INSERT ...

    or

        IF (SELECT COUNT(*) FROM ... WHERE <condition>) = 0  -- race condition risk here
            INSERT ...
        ELSE
            UPDATE ...

    where the condition is an evaluation of natural keys. None of the above approaches seem to deal well with concurrency. If I cannot have two rows with the same natural key, it seems like all of the above risk inserting rows with the same natural keys in race condition scenarios. I have been using the following approach, but I'm surprised not to see it anywhere in people's responses, so I'm wondering what is wrong with it:

        INSERT INTO <table>
        SELECT <natural keys>, <other stuff...>
        FROM <table>
        WHERE NOT EXISTS  -- race condition risk here?
            ( SELECT 1 FROM <table> WHERE <natural keys> )

        UPDATE ... WHERE <natural keys>

    (Note: I'm assuming that rows will not be deleted from this table, although it would be nice to discuss how to handle the case where they can be deleted -- are transactions the only option? Which level of isolation?) Is this atomic? I can't locate where this would be documented in the SQL Server documentation.

    Read the article

  • Atomic Instructions and Variable Update visibility

    - by dsimcha
    On most common platforms (the most important being x86; I understand that some platforms have extremely difficult memory models that provide almost no guarantees useful for multithreading, but I don't care about rare counter-examples), is the following code safe?

    Thread 1:

        someVariable = doStuff();
        atomicSet(stuffDoneFlag, 1);

    Thread 2:

        while(!atomicRead(stuffDoneFlag)) {}  // Wait for stuffDoneFlag to be set.
        doMoreStuff(someVariable);

    Assuming standard, reasonable implementations of atomic ops: Is Thread 1's assignment to someVariable guaranteed to complete before atomicSet() is called? Is Thread 2 guaranteed to see the assignment to someVariable before calling doMoreStuff(), provided it reads stuffDoneFlag atomically?

    Edits: The implementation of atomic ops I'm using contains the x86 LOCK instruction in each operation, if that helps. Assume stuffDoneFlag is properly cleared somehow; how isn't important. This is a very simplified example. I created it this way so that you wouldn't have to understand the whole context of the problem to answer it. I know it's not efficient.
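    A minimal C++11 sketch of the same publication pattern using std::atomic, which makes the release/acquire pairing the question is reaching for explicit; doStuff()/doMoreStuff() are replaced by placeholder assignments:

        #include <atomic>
        #include <thread>

        int someVariable = 0;
        std::atomic<int> stuffDoneFlag{0};

        void thread1() {
            someVariable = 42;                                  // stands in for doStuff()
            stuffDoneFlag.store(1, std::memory_order_release);  // the plain write above cannot sink below this store
        }

        void thread2() {
            while (stuffDoneFlag.load(std::memory_order_acquire) == 0) {
                // spin: this acquire load pairs with the release store in thread1
            }
            int v = someVariable;  // guaranteed to observe 42 here
            (void)v;               // stands in for doMoreStuff(someVariable)
        }

        int main() {
            std::thread a(thread1), b(thread2);
            a.join();
            b.join();
            return 0;
        }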

    Read the article

  • "pseudo-atomic" operations in C++

    - by dan
    So I'm aware that nothing is atomic in C++. But I'm trying to figure out if there are any "pseudo-atomic" assumptions I can make. The reason is that I want to avoid using mutexes in some simple situations where I only need very weak guarantees.

    1) Suppose I have a globally defined volatile bool b, which I initially set to true. Then I launch a thread which executes a loop

        while(b) doSomething();

    Meanwhile, in another thread, I execute b=true. Can I assume that the first thread will continue to execute? In other words, if b starts out as true, and the first thread checks the value of b at the same time as the second thread assigns b=true, can I assume that the first thread will read the value of b as true? Or is it possible that at some intermediate point of the assignment b=true, the value of b might be read as false?

    2) Now suppose that b is initially false. Then the first thread executes

        bool b1=b;
        bool b2=b;
        if(b1 && !b2) bad();

    while the second thread executes b=true. Can I assume that bad() never gets called?

    3) What about an int or other built-in types: suppose I have volatile int i, which is initially (say) 7, and then I assign i=7. Can I assume that, at any time during this operation, from any thread, the value of i will be equal to 7?

    4) I have volatile int i=7, and then I execute i++ from some thread, and all other threads only read the value of i. Can I assume that i never has any value, in any thread, except for either 7 or 8?

    5) I have volatile int i; from one thread I execute i=7, and from another I execute i=8. Afterwards, is i guaranteed to be either 7 or 8 (or whatever two values I have chosen to assign)?
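    For reference, C++11's std::atomic provides these guarantees explicitly, without relying on volatile; a minimal sketch covering cases 1, 3 and 4 from the list above:

        #include <atomic>
        #include <thread>

        std::atomic<bool> b{true};
        std::atomic<int>  i{7};

        void worker() {
            // Relaxed atomic loads never observe a torn or intermediate value,
            // which is the guarantee cases 1-3 are after.
            while (b.load(std::memory_order_relaxed)) {
                // doSomething();
            }
        }

        int main() {
            std::thread t(worker);
            b.store(true, std::memory_order_relaxed);   // storing the value it already holds is harmless (cases 1 and 3)
            i.fetch_add(1, std::memory_order_relaxed);  // readers see exactly 7 or 8, nothing in between (case 4)
            b.store(false);                             // let the worker exit
            t.join();
            return 0;
        }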

    Read the article

  • How to disable Mac OS X from using swap when there still is "Inactive" memory?

    - by Motin
    A common phenomenon in my day-to-day usage of OS X (and several others', according to various posts throughout the internet) is that the system seems to become slow whenever there is no more "Free" memory available. Supposedly, this is due to swapping, since heavy disk activity is apparent and vm_stat reports many pageouts. (Correct me if I'm wrong.) However, the amount of "Inactive" RAM is typically around 12.5%-25% of all available memory (^1.) when swapping starts/occurs/ends.

    According to http://support.apple.com/kb/ht1342 : "Inactive memory: This information in memory is not actively being used, but was recently used. For example, if you've been using Mail and then quit it, the RAM that Mail was using is marked as Inactive memory. This Inactive memory is available for use by another application, just like Free memory. However, if you open Mail before its Inactive memory is used by a different application, Mail will open quicker because its Inactive memory is converted to Active memory, instead of loading Mail from the slower hard disk."

    And according to http://developer.apple.com/library/mac/#documentation/Performance/Conceptual/ManagingMemory/Articles/AboutMemory.html : "The inactive list contains pages that are currently resident in physical memory but have not been accessed recently. These pages contain valid data but may be released from memory at any time."

    So, basically: when a program has quit, its memory becomes marked as Inactive and should be claimable at any time. Still, OS X prefers to start swapping memory out to the swap file instead of just claiming this memory whenever "Free" memory gets too low. Why? What is the advantage of this behavior over, say, instantly releasing Inactive memory and not even touching the swap file? Some sources (^2.) indicate that OS X would page out the "Inactive" memory to swap before releasing it, but that doesn't make sense if the memory may be released at any time. Swapping is expensive, releasing is cheap, right? Can this behavior be changed using some preference or known hack? (Preferably one that doesn't involve disabling swap/dynamic_pager altogether and restarting...)

    I do appreciate the purge command, as well as the concept of repairing disk permissions to force some Free memory, but those are ways to painfully force more Free memory rather than actually fixing the swap/release decision logic...

    Btw, a similar question was asked here: http://forums.macnn.com/90/mac-os-x/434650/why-does-os-x-swap-when/ and here: http://hintsforums.macworld.com/showthread.php?t=87688 but even though the OPs re-asked the core question, none of the replies addresses an answer to it...

    ^1. UPDATE 17-mar-2012: Since I first posted this question, I have gone from 4GB to 8GB of installed RAM, and the problem remains. The amount of "Inactive" RAM was 0.5GB-1.0GB before and is now typically around 1.0-2.0GB when swapping starts/occurs/ends, i.e. it seems that around 12.5%-25% of the RAM is preserved as Inactive by OS X kernel logic.

    ^2. For instance http://apple.stackexchange.com/questions/4288/what-does-it-mean-if-i-have-lots-of-inactive-memory-at-the-end-of-a-work-day : "Once all your memory is used (free memory is 0), the OS will write out inactive memory to the swapfile to make more room in active memory."

    UPDATE 17-mar-2012: Here is a round-up of the methods that have been suggested to help so far:

    - The purge command: "Used to approximate initial boot conditions with a cold disk buffer cache for performance analysis. It does not affect anonymous memory that has been allocated through malloc, vm_allocate, etc." This is useful to prevent OS X from swapping out the disk cache (which is ridiculous that OS X actually does in the first place), but with the downside that the disk cache is released, meaning that if the disk cache was not about to be swapped out, one would simply end up with a cold disk buffer cache, probably affecting performance negatively.

    - The FreeMemory app and/or repairing disk permissions to force some Free memory: doesn't help release any memory, only moves some gigabytes of memory contents from RAM to the HD. In the end, this causes lots of swap-ins when I attempt to use the applications that were open while freeing memory, as a lot of their VM is now on swap.

    - Speeding up swap allocation using dynamicpagerwrapper: seems a good thing to do in order to speed up swap usage, but does not address the problem of OS X swapping in the first place while there is still Inactive memory.

    - Disabling swap by disabling dynamicpager and restarting: this will force OS X not to use swap, at the price of the system hanging when all memory is used. Not a viable alternative...

    - Disabling swap using a hacked dynamicpager: similar to disabling dynamicpager above; some excerpts from the comments to the blog post indicate that this is not a viable solution: "The Inactive Memory is high as usual", "when your system is running out of memory, the whole os hangs...", "if you consume the whole amount of memory of the mac, the machine will likely hang".

    To sum up, I am still unaware of a way of keeping Mac OS X from using swap while there is still "Inactive" memory. If it isn't possible, maybe at least there is an explanation somewhere of why OS X prefers to swap out memory that may be released at any time?

    Read the article

  • How do I get 12.04 to recognize swap partition so that I can hibernate?

    - by Kayla
    I just installed 12.04 and used GParted to erase and enlarge my swap partition. When I rebooted, GParted said that the file system of the swap partition was unknown. GParted doesn't let me change the file system to "linux-swap". It does let me change it to NTFS, but when I reboot, it goes back to "unknown". Thanks in advance for your help.

    Output from sudo swapon -s:

        Filename                 Type        Size     Used  Priority
        /dev/mapper/cryptswap1   partition   9025532  0     -1

    Output from sudo fdisk -l:

        Disk /dev/sda: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x9d63ac84

           Device Boot      Start        End     Blocks  Id  System
        /dev/sda1   *        2048    2459647    1228800   7  HPFS/NTFS/exFAT
        /dev/sda2         2459648  197836472   97688412+  7  HPFS/NTFS/exFAT
        /dev/sda3       466890752  488395119   10752184   7  HPFS/NTFS/exFAT
        /dev/sda4       197836798  466890751  134526977   5  Extended
        /dev/sda5       197836800  448837631  125500416  83  Linux
        /dev/sda6       448839680  466890751    9025536  82  Linux swap / Solaris

        Partition table entries are not in disk order

        Disk /dev/mapper/cryptswap1: 9242 MB, 9242148864 bytes
        255 heads, 63 sectors/track, 1123 cylinders, total 18051072 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x951b7f53

        Disk /dev/mapper/cryptswap1 doesn't contain a valid partition table

    Read the article

  • atomic operation cost

    - by osgx
    Hello. What is the cost of an atomic operation? How many cycles does it consume? Will it pause other processors on SMP or NUMA systems, or will it block memory accesses? Will it flush the reorder buffer in an out-of-order CPU? What effects will it have on the cache? Thanks.
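    One way to put a rough number on this for a given machine is to time an uncontended plain increment against a lock-prefixed one (std::atomic::fetch_add typically compiles to "lock add" on x86); a single-threaded sketch, so it says nothing about SMP/NUMA contention or cache-line effects:

        #include <atomic>
        #include <chrono>
        #include <cstdio>

        int main() {
            const long iters = 100000000;   // 1e8 iterations; adjust for your machine
            volatile long plain = 0;        // volatile so the loop isn't optimised away
            std::atomic<long> locked{0};

            auto t0 = std::chrono::steady_clock::now();
            for (long n = 0; n < iters; ++n) plain = plain + 1;
            auto t1 = std::chrono::steady_clock::now();
            for (long n = 0; n < iters; ++n) locked.fetch_add(1, std::memory_order_relaxed);
            auto t2 = std::chrono::steady_clock::now();

            using std::chrono::duration_cast;
            using std::chrono::nanoseconds;
            std::printf("plain increment : %lld ns total\n", (long long)duration_cast<nanoseconds>(t1 - t0).count());
            std::printf("locked increment: %lld ns total\n", (long long)duration_cast<nanoseconds>(t2 - t1).count());
            return 0;
        }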

    Read the article

  • AtomicSwap instead of AtomicCompareAndSwap ?

    - by anon
    I know that on MacOSX / POSIX systems, there is atomic compare-and-swap for C/C++ code via g++. However, I don't need the compare -- I just want to atomically swap two values. Is there an atomic swap operation available? [Everything I can find is atomic_compare_and_swap ... and I just want to do the swap, without comparing.] Thanks!
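    Assuming the g++ toolchain mentioned in the question and a value that fits in a machine word, here is a sketch of two ways to get an unconditional swap with no compare:

        #include <atomic>
        #include <cstdio>

        int main() {
            // C++11: exchange() is an unconditional atomic swap.
            std::atomic<int> a{1};
            int old = a.exchange(2);                 // a becomes 2; old receives the previous value (1)
            std::printf("old=%d a=%d\n", old, a.load());

            // Pre-C++11 GCC builtin: stores the new value and returns the old one.
            // (Documented as an acquire barrier only, not a full barrier.)
            int b = 1;
            int prev = __sync_lock_test_and_set(&b, 2);
            std::printf("prev=%d b=%d\n", prev, b);
            return 0;
        }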

    Read the article

  • Moving the OS X swap file to a faster drive

    - by Milky Joe
    I have a new Mac Mini that's running the latest version of Snow Leopard. The internal drive is a bit of a slouch. I'd like to move the swap file (or whatever it's called in OS X) to my faster external drive (FireWire 800, permanently connected). Is this possible? I've read that the old solutions don't work in 10.6. My Mac has 2GB of RAM, so the swap file is used quite a bit when I'm doing intensive work (Photoshop etc.).

    Read the article

  • Limiting memory usage and mimimizing swap thrashing on Unix / Linux

    - by camelccc
    I have a few machines that I use for running large numbers of jobs, where I try to limit the number of jobs so as not to exceed the available RAM of the machine. Occasionally I mis-estimate how much memory some of the jobs will take, and the machine starts thrashing the swap file. I resolve this by sending kill -s STOP to one of the jobs so that it can get swapped out. Does anyone know of a utility that will monitor a server for processes with a specific name, and then pause the one with the smallest memory footprint if the total memory consumption reaches a desired threshold, so that the larger ones can run and complete with a minimum of swap-file thrashing? Paused processes then need to be resumed once some existing processes have completed.

    Read the article

  • sequentially-consistent atomic load on x86

    - by axe
    Hello all, I'm interested in the sequentially-consistent load operation on x86. As far as I can see from the assembler listing generated by the compiler, it is implemented as a plain load on x86; however, plain loads, as far as I know, are guaranteed to have acquire semantics, while plain stores are guaranteed to have release semantics. A sequentially-consistent store is implemented as a locked xchg, while a load is a plain load. That sounds strange to me; could you please explain this in detail? Added: I just found on the internet that a sequentially-consistent atomic load can be done as a simple mov as long as the store is done with a locked xchg, but there was no proof and no links to documentation. Do you know where I can read about that? Thanks in advance.
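    A small C++11 illustration of the asymmetry being described; the comments reflect typical x86-64 code generation (e.g. from GCC or Clang), not a guarantee of any particular compiler:

        #include <atomic>

        std::atomic<int> x{0};

        int seq_cst_load() {
            // Typically compiles to a plain "mov": ordinary x86 loads already have acquire
            // semantics, and sequential consistency is paid for on the store side instead.
            return x.load(std::memory_order_seq_cst);
        }

        void seq_cst_store(int v) {
            // Typically compiles to "xchg" (implicitly locked) or "mov" plus "mfence",
            // which is what makes the plain-mov load sufficient for seq_cst.
            x.store(v, std::memory_order_seq_cst);
        }

        int main() {
            seq_cst_store(1);
            return seq_cst_load();
        }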

    Read the article

  • Clarification of atomic memory access for different OSs

    - by murrekatt
    I'm currently porting a Windows C++ library to MacOS as a hobby project and a learning experience. I stumbled across some code using the Win Interlocked* functions, and so I've been trying to read up on the subject in general. Reading related questions here on SO, I understand there are different ways to do these operations depending on the OS: Interlocked* on Windows, OSAtomic* on MacOS, and I also found that compilers have builtin (intrinsic) operations for this. After reading about gcc builtin atomic memory access, I'm left wondering what the difference is between the intrinsics and the OSAtomic* or Interlocked* ones. I mean, can I not choose between OSAtomic* and the gcc builtins if I'm on MacOS using gcc? The same if I'd be on Windows using gcc. I also read that on Windows the Interlocked* functions come as both inline and intrinsic versions. What should I consider when choosing between intrinsic or inline? In general, are there multiple options on each OS for what to use? Or is this again "it depends"? If so, what does it depend on? Thanks!
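    To make the relationship concrete, here is one operation (a 32-bit atomic increment) expressed through each family; the wrapper function and its name are hypothetical, but the underlying calls are the real APIs being discussed:

        #include <stdint.h>

        #if defined(_WIN32)
          #include <windows.h>
          inline int32_t atomic_increment(volatile int32_t* p) {
              return InterlockedIncrement(reinterpret_cast<volatile LONG*>(p));  // Windows Interlocked* API
          }
        #elif defined(__APPLE__)
          #include <libkern/OSAtomic.h>
          inline int32_t atomic_increment(volatile int32_t* p) {
              return OSAtomicIncrement32Barrier(p);  // Mac OS X OSAtomic* primitive (pre-C++11 era)
          }
        #else
          inline int32_t atomic_increment(volatile int32_t* p) {
              return __sync_add_and_fetch(p, 1);     // GCC builtin, available on any GCC target
          }
        #endif

        int main() {
            volatile int32_t counter = 0;
            return atomic_increment(&counter);  // returns 1
        }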

    Read the article

  • Disadvantages of using a swap file/partition on an SSD, even when swappiness is set to 0

    - by pjv
    What are the disadvantages of using a swap file/partition on an SSD, even when swappiness is set to 0? I'm particularly interested in the /proc/sys/vm/swappiness=0 case. How many writes are still done, in practice, to that swap file, and does it have a negative impact on the SSD or any other disadvantage? Or is it nearly comparable to not having a swap file at all? I am pretty aware of what swappiness=0 means, just not of what it amounts to in practice. My question stems from a problem I am experiencing without a swap: http://stackoverflow.com/questions/4567972/error-executing-aapt-all-of-the-sudden. There are similar questions regarding SSDs and swap, but they don't go in-depth into the swappiness=0 case: Disadvantages of not having a swap partition, Should I keep my swap file on an SSD drive?

    Read the article

  • How to recover data from NTFS partition that was made into a Swap partition?

    - by Raghav Mehta
    I have extremely important stuff on my Windows partition. During the Ubuntu 10.10 installation, when it said that I should create something called swap space, I selected that partition to be the swap space (without even knowing what it actually meant). GRUB 2 doesn't show up, so I don't get a choice to boot Ubuntu or Windows. I don't get my Windows partition as a removable device in Ubuntu either. When I go to Disk Utility, select sda2 (i.e. my Windows partition), click Edit Partition, select HPFS/NTFS for the type, tick Bootable and click OK, the small processing sign keeps rotating at the bottom right of sda2 in the chart, and after about 10 to 15 minutes it gives an unknown error; thus, I am still unable to use Windows. I am even worse than a beginner who doesn't know a thing about Ubuntu, so please be patient and help me out.

    Read the article

  • How large of a swap partition is needed to hibernate?

    - by Closure Cowboy
    I've read this question, but it doesn't definitively answer my question. If I want my computer to be able to hibernate, do I need to have a swap partition as large as my RAM, or will Ubuntu wisely be able to hibernate if the swap partition can fit the currently-in-use RAM? I'm about to install Ubuntu on a computer with a lot of RAM, and a relatively small hard drive, so I don't want to use more hard drive space than necessary. I wanted to avoid giving my actual specifications to keep this question more general, though I'll give them if necessary.

    Read the article

  • Atomic operations on several transactionless external systems

    - by simendsjo
    Say you have an application connecting to 3 different external systems. You need to update something in all 3. In case of a failure, you need to roll back the operations. This is not a hard thing to implement, but say operation 3 fails, and when rolling back, the rollback for operation 1 fails! Now the first external system is in an invalid state... I'm thinking a possible solution is to shut down the application and force a manual fix of the external system, but then again... It might already have used this information (and perhaps that's why it failed), or we might not have sufficient access. Or it might not even be a good way to roll back the action! Are there good ways of handling such cases? EDIT: Some application details... It's a multi-user web application. Most of the work is done with scheduled jobs (through Quartz.Net), so most operations run in their own threads. Some user actions should trigger jobs that update several systems, though. The external systems are somewhat unstable. I was thinking of changing the application to use the Command and Unit of Work patterns.
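    A rough sketch (in C++ for illustration) of the compensating-command idea the question ends with: each external-system update is a Command with an explicit undo, run by a coordinator that compensates in reverse order on failure. The interface and names are hypothetical, and it deliberately does not solve the hard part, namely what to do when an Undo() itself fails:

        #include <exception>
        #include <memory>
        #include <vector>

        struct Command {
            virtual ~Command() = default;
            virtual void Execute() = 0;  // forward action against one external system
            virtual void Undo() = 0;     // best-effort compensation; may itself throw
        };

        bool RunAllOrCompensate(std::vector<std::unique_ptr<Command>>& commands) {
            std::vector<Command*> done;
            try {
                for (auto& c : commands) { c->Execute(); done.push_back(c.get()); }
                return true;  // every external system was updated
            } catch (const std::exception&) {
                // Compensate in reverse order; a failure here is exactly the
                // "first system left in an invalid state" problem and must be
                // logged and escalated for manual repair.
                for (auto it = done.rbegin(); it != done.rend(); ++it) {
                    try { (*it)->Undo(); } catch (...) { /* log and flag for manual fix */ }
                }
                return false;
            }
        }

        struct NoOpCommand : Command {   // trivial stand-in for a real external-system call
            void Execute() override {}
            void Undo() override {}
        };

        int main() {
            std::vector<std::unique_ptr<Command>> cmds;
            cmds.emplace_back(new NoOpCommand());
            return RunAllOrCompensate(cmds) ? 0 : 1;
        }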

    Read the article

  • c++11 atomic ordering: extended total order memory_order_seq_cst for locks

    - by itaj
    There's this note in C++11 29.3 p3: [ Note: Although it is not explicitly required that S include locks, it can always be extended to an order that does include lock and unlock operations, since the ordering between those is already included in the "happens before" ordering. - end note ] What does it mean by "always"? I can understand that any particular implementation can be designed to support such an extended S, but in some general implementation that wasn't designed for it, I don't see how S can be extended this way. I sent this question to comp.std.c++ but got no answers there: http://groups.google.com/group/comp.std.c++/browse_frm/thread/5242fa70d0594d1b#

    Read the article

  • Why do I get swap space related errors when I still have lots of free memory in Solaris 10?

    - by Tom Duckering
    I am seeing a few of my services suffering/crashing with errors along the lines of "Error allocating memory" or "Can't create new process" etc. I'm slightly confused by this, since logs show that at the time the system had lots of free memory available (around 26GB in one case) and was not particularly stressed in any other way. After noting a JVM crash with a similar error plus the added query "Out of swap space?", I dug a little deeper. It turns out that someone has configured our zone with a 2GB swap file. Our zone doesn't have capped memory and currently has access to as much of the 128GB of RAM as it needs. Our SAs are planning to cap this at 32GB when they get the chance. My current thinking is that whilst there is memory aplenty for the OS to allocate, the swap space seems grossly undersized (based on other answers here). It seems as though Solaris wants to make sure there's enough swap space in case things have to swap out (i.e. it's reserving the swap space). Is this thinking right, or is there some other reason that I get memory allocation errors with this large amount of free memory and seemingly undersized swap space?

    Read the article

  • Should I completely turn off swap for linux webserver?

    - by Poma
    Recently my friend told me that it is a good idea to turn off swap on Linux webservers with enough memory. My server has 12GB and currently uses 4GB (not counting cache and buffers) under peak load. His argument was that in a normal situation the server will never use all of its RAM, so the only way it can encounter an out-of-memory situation is due to some bug/DDoS/etc. If swap is turned off, the system will run out of memory, which will eventually crash the program hogging memory (most likely the web server process) and probably some other processes. If swap is turned on, it will eat both RAM and swap and eventually result in the same crash, but before that it will offload crucial processes like sshd to swap and start doing a lot of swap operations, resulting in a major slowdown. This way, when under DDoS, the system may go into a completely unusable state due to huge lags, and I probably will not be able to log in and kill the webserver process or deny all incoming traffic (all but ssh). Is this right? Am I missing something (like the fact that a swap partition is very useful in some way even if I have enough RAM)? Should I turn it off?

    Read the article

  • Swap function for a char*

    - by Martin
    I have the simple function below which swaps two characters of an array of characters (s). However, I am getting an "Unhandled exception at 0x01151cd7 in Bla.exe: 0xC0000005: Access violation writing location 0x011557a4." error. The two indexes (left and right) are within the limits of the array. What am I doing wrong?

        void swap(char* s, int left, int right)
        {
            char tmp = s[left];
            s[left] = s[right];
            s[right] = tmp;
        }

        swap("ABC", 0, 1);

    I am using VS2010 with unmanaged C/C++. Thanks!
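    The crash is consistent with writing into a string literal, which lives in read-only storage; a minimal sketch of the same swap performed on a writable buffer (the function is renamed only to avoid confusion with std::swap):

        #include <cstdio>

        void swap_chars(char* s, int left, int right) {
            char tmp = s[left];
            s[left] = s[right];
            s[right] = tmp;
        }

        int main() {
            char s[] = "ABC";       // a writable array copy of the literal
            swap_chars(s, 0, 1);    // swap_chars("ABC", 0, 1) would write into read-only memory
            std::puts(s);           // prints "BAC"
            return 0;
        }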

    Read the article

  • Windows Swap (Page File): Enable or Disable?

    - by d03boy
    From my personal experience I've noticed that disabling the page file in Windows XP has given me, in general, the biggest speed gain of any software change I can make. Obviously this has to be done when a significant amount of RAM is available. Typically I find that it works nicely with 2GB+ of RAM. The only issues I've ever really had were loading up Adobe Photoshop. Is this really a speed improvement or am I imagining it? Note: In order to actually turn it off, you must not just set it to 0MB, but disable it. Otherwise Windows will just expand it when it needs to in order to meet its needs.

    Read the article

  • Oracle DB on solaris utilizing swap memory when free RAM available

    - by Ara
    Hi, we have a weird instance where we noticed that our Oracle database server's swap utilization was 100%, and were surprised to see that the system had free memory available during that period. To my knowledge, swap utilization starts once the system runs out of free RAM (please correct me if I'm wrong). I'm not sure what could have caused this unusual activity. Has anyone else experienced such behaviour? Regards,

    Read the article

  • What happens to my Swap space if I choose to replace Windows with Ubuntu?

    - by Ramandeep
    When I install Ubuntu 12.04, I'll be presented with three options: install Ubuntu alongside Windows, replace Windows with Ubuntu, and 'Something else'. If I choose 'replace Windows', then I cannot make a swap space. So what then? I have 1GB of RAM. Will only my C drive get replaced, or what will happen? If only the C drive is affected, will the data on the other 2 drives be preserved? If yes, how can I access it (on which drive in Ubuntu) after installing Ubuntu?

    Read the article

  • What of my Swap space if I choose to replace windows with ubuntu?

    - by Ramandeep
    When I install Ubuntu 12.04, I'll be presented with three options: install Ubuntu alongside Windows, replace Windows with Ubuntu, and 'something else'. If I choose 'replace Windows', then I will not be able to make a swap space. So what then? I have 1GB of RAM. And if I choose 'replace Windows', will only my C drive get replaced, or what will happen? If only the C drive is affected, will the data on the other 2 drives be preserved? Again, if yes, how can I access it (on which drive in Ubuntu) after installing Ubuntu?

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >