Search Results

Search found 6587 results on 264 pages for 'slow motion'.


  • "postgres blocked for more than 120 seconds" - is my db still consistent?

    - by nn4l
    I am using an iSCSI volume on an Open-E storage system for several virtual machines running on a XenServer host. Occasionally, when there is very high disk I/O load on the virtual machines (and therefore also on the storage system), I get error messages like these on the VM consoles:

      [2594520.161701] INFO: task kjournald:117 blocked for more than 120 seconds.
      [2594520.161787] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
      [2594520.162194] INFO: task flush-202:0:229 blocked for more than 120 seconds.
      [2594520.162274] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
      [2594520.162801] INFO: task postgres:1567 blocked for more than 120 seconds.
      [2594520.162882] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

    I understand that the kernel prints these messages to report that the named processes haven't run for 120 seconds, most likely because a disk access to the storage system has not yet completed. But what is the effect on the processes? For example, will the postgres process eventually write its data once the storage system is idle again a few minutes later, so that all data remains consistent? Or will it abort the write, leaving some tables in an inconsistent state? I certainly expect the former: if disk access is slow, postgres (or any other affected process) should simply wait as long as it takes. I can live with the application hanging for a few minutes, but if there is any chance of data corruption then these errors are really bad news. Please advise what to do here.
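    As an aside, a minimal sketch of inspecting what those blocked tasks are actually waiting on (assumes root access; the SysRq line only works if the magic SysRq interface is enabled):

      # current warning threshold in seconds; setting it to 0 only silences the warning
      cat /proc/sys/kernel/hung_task_timeout_secs

      # list processes currently stuck in uninterruptible (D-state) I/O wait
      ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /D/'

      # dump kernel stacks of all blocked tasks into the kernel log, then read them back
      echo w > /proc/sysrq-trigger
      dmesg | tail -n 50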

    Read the article

  • Terminal server performance over high latency links

    - by holz
    Our datacenter and head office are currently in Brisbane, Australia, and we have a branch office in the UK. We have a private WAN with a 768k link to our UK office and the latency is about 350ms. The terminal server performance is really bad. Applications that don't have much animation or any images seem to be okay, but as soon as they do, the session is almost unusable. PowerPoint and Internet Explorer are good examples of apps that make it run slowly, and if there is an image in your email signature, Outlook will hang for about 10 seconds each time a new line is inserted, while the image gets moved down a few pixels. We are currently running Server 2003. I have tried Server 2008 R2 RDS, and also a third-party solution called Blaze by a company called Ericom, but it is still not much better. We currently have a five-level dynamic class of service with priority in the following order: VoIP, video, terminal services, printing, everything else. When testing the terminal server performance, I monitored the link using NetFlow, and we have plenty of bandwidth available, so I believe it is a latency issue rather than a bandwidth one. Is there anything that can be done to improve performance? Would Citrix help at all?

    Read the article

  • What benchmark tool to use to benchmark hardware for VM server?

    - by Mark0978
    We are setting up a new piece of hardware to virtualize several of our servers on. The choices are RAID 5, RAID 6, and RAID 0+1. We want to benchmark all three before we go live with the machine, but I'm not sure how to test the speed. Since we will be using it to host VMs, what will the actual disk traffic look like? What can I use to see if RAID 6 is too slow? Short of setting the system up with all the VMs on it, running it that way, and then redoing all the work, I'm not sure how to test it; it then becomes more of a subjective test than an objective one. I'm worried that RAID 6 will have too much overhead, that RAID 5 will be too fragile with 3TB drives, and I've never worked with 0+1 at all. So in short, I'd like to set up the base machine (which will be running Linux) and then test the underlying software RAID for speed. What kind of tool exists to simulate this kind of load? Barring a specific tool, how about a generic filesystem testing tool that will simulate different loads?
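    For reference, one generic load generator that can approximate mixed VM-style disk traffic is fio; a sketch of an invocation along those lines (the parameters are purely illustrative, not tuned for any particular array, and /mnt/raidtest is a placeholder for the filesystem under test):

      fio --name=vmhost-sim --filename=/mnt/raidtest/fio.dat --size=10G \
          --rw=randrw --rwmixread=70 --bs=4k --ioengine=libaio --direct=1 \
          --iodepth=32 --numjobs=4 --runtime=300 --time_based --group_reporting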

    Read the article

  • Software distribution from web server to client using PHP/FTP

    - by Jenolan
    I develop and maintain a number of add-ons and utilities for various widgets (mainly aMember), which generally means I need to install PHP-based code onto other people's systems. While I have a VPS with access to rsync and all sorts of yummy tools, most of the people I deal with have basic FTP access and that's all, folks. Uploading from my local system is also a problem, as I am satellite-based (two-way), so it is fairly slow and expensive, and in any case the files are already on my server. So there is no rsync, FXP or SSH, and I can't really install anything, as it is obviously not my system; they would be justifiably miffed if I started installing file managers or other things onto their sites. What I have been trying to find is a utility that I can run on my server from the web, preferably PHP-based, that is like a file manager but a bit different: two panels, with the left-hand side showing the local server (pretty much like a standard file-manager application) and the right-hand side giving the ability to log in via FTP to the client's system. Then I can fiddle as required. The closest thing I have found is net2ftp, but it doesn't have that kind of GUI. At the moment I simply SSH into my server, fire up ncftp and work that way, but something easier to use would be mucho niceness. Thanks in advance! Larry
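    For what it's worth, a sketch of the sort of server-side push this amounts to when no GUI is available (credentials, host and paths are placeholders; assumes curl is present on the VPS, and lftp for the recursive variant):

      # push a single file straight from the VPS to the client's FTP account
      curl -T addon.zip "ftp://USER:PASS@client.example.com/public_html/amember/"

      # or mirror a whole directory up to the client, if lftp is installed
      lftp -u USER,PASS client.example.com -e "mirror -R ./addon /public_html/amember/addon; quit"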

    Read the article

  • Logging hurts MySQL performance - but, why?

    - by jimbo
    I'm quite surprised that I can't see an answer to this anywhere on the site already, nor in the MySQL documentation (section 5.2 seems to have logging otherwise well covered!). If I enable binlogs, I see a small performance hit (subjectively), which is to be expected with a little extra IO -- but when I enable the general query log, I see an enormous performance hit (double the time to run queries, or worse), way in excess of what I see with binlogs. Of course I'm now logging every SELECT as well as every UPDATE/INSERT, but other daemons (Apache, Exim) record their every request without grinding to a halt. Am I just seeing the effects of being close to a performance "tipping point" when it comes to IO, or is there something fundamentally difficult about logging queries that causes this to happen? I'd love to be able to log all queries to make development easier, but I can't justify the kind of hardware it feels like we'd need to get performance back up with general query logging on. I do, of course, log slow queries, and there's negligible improvement in general usage if I disable this. (All of this is on Ubuntu 10.04 LTS, mysqld 5.1.49, but research suggests this is a fairly universal issue.)
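    A sketch of one way to confine the cost to short debugging windows, since MySQL 5.1 exposes the general log as dynamic variables that can be toggled at runtime (requires the SUPER privilege; logging to a table rather than a file is optional):

      # turn the general log on only while reproducing the behaviour under study
      mysql -u root -p -e "SET GLOBAL log_output='TABLE'; SET GLOBAL general_log='ON';"

      # ... exercise the application ...

      # inspect the captured statements, then switch the log back off
      mysql -u root -p -e "SELECT event_time, argument FROM mysql.general_log ORDER BY event_time DESC LIMIT 20;"
      mysql -u root -p -e "SET GLOBAL general_log='OFF';"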

    Read the article

  • Rails/Mongo across multiple different geo-regions

    - by wmarbut
    I have a system that by necessity requires physical presence in three or more different locations, and I need advice on structuring it in such a way that my database stays replicated in a timely manner without horrible latency. I've seen MySQL access and replication be incredibly slow when the application server was trying to talk to a node that wasn't physically co-located. In this case I am using MongoDB. The stack is Linux/Passenger/Ruby/Rails/MongoDB. The database is write-heavy and read-light. The infrastructure is Amazon EC2. The application layer must be physically located in three or more different locations; I can't justify this requirement further than that it is a requirement. The database, however, needn't be located in more than one location if it can be written to quickly from the other locations. From reading Mongo's documentation, replication seems like more of a candidate than sharding because my datastore is not huge. However, I don't see anything that addresses the issue of speed for servers communicating across large distances with potentially high latency.

    Read the article

  • Strange performance differences in read/write from/to USB flash drive

    - by Mario De Schaepmeester
    When copying files from my 8GB USB 2.0 flash drive to a traditional hard drive with Windows 7, the average speed is between 25 and 30 MB/s. When doing the reverse, copying to the USB drive, the speed is 5 MB/s on average. I have tested this with about 4.5GB of files, a mixture of smaller and larger ones. The observations were the same with both FAT32 and exFAT file systems on the USB drive, and NTFS on the internal hard disk. I don't think I can be mistaken in saying that flash memory has much higher performance than a spinning hard drive in terms of both reading and writing, and for both memory types reading should be faster than writing. Now I wonder: how can copying files from a fast-reading memory to a faster-writing memory actually be slower than copying files from a fast-reading memory to a slow-writing memory? I think the files are stored in RAM before being copied over, and there's caching as well, but I don't see how even that could tip the balance. It could only work to the advantage of writing to the USB drive, since it is "closer" to the SATA system than the USB port and will receive data from the internal SATA HDD faster. Perhaps my way of thinking is all wrong, or it just depends on the manufacturer of the USB stick. But I am curious.

    Read the article

  • Packet drop measured by ethtool, tcpdump and ifconfig

    - by Rayne
    Hi all, I have a question regarding packet drops. I am running a test to determine when packet drops occur. I'm using a Spirent TestCenter through a switch (necessary to aggregate Ethernet traffic from 5 ports onto one optical link) to a server using a Myricom card. While running my test, if the input rate is below a certain value, ethtool does not report any drops (except dropped_multicast_filtered, which increments at a very slow rate). However, tcpdump reports X number of packets "dropped by kernel". Then, if I increase the input rate, ethtool reports drops but "ifconfig eth2" does not; in fact, ifconfig doesn't seem to report any packet drops at all. Do they all measure packet drops at different "levels", i.e. ethtool at the NIC level, tcpdump at the kernel level, etc.? And am I right to say that in the journey of an incoming packet, the NIC is the first level, then the kernel, then the user application? So any packet drop is likely to happen first at the NIC, then the kernel, then the user application? And if there is no packet drop at the NIC, but there is a packet drop at the kernel, then the bottleneck is not at the NIC? Thank you. Regards, Rayne
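    For reference, a sketch of reading the counters at each of those levels side by side while traffic is flowing (eth2 as in the setup above; the softnet_stat line is an assumption about where per-CPU backlog drops show up):

      ethtool -S eth2 | grep -i drop            # NIC/driver statistics
      ifconfig eth2 | grep -iE 'drop|overrun'   # kernel interface statistics
      cat /proc/net/softnet_stat                # per-CPU backlog: 2nd column (hex) counts drops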

    Read the article

  • How to compare old CPU to new CPU?

    - by Lasse V. Karlsen
    I hope this question doesn't get closed at once :) I have an old laptop, a Compaq NC4200, which is doing its final laps around the track these days. The battery is dead, and everything runs kind of slow. It also has only 1GB of memory, and even though I don't know if it can take more, I probably wouldn't be able to get hold of any that matches without having to special-order it. The size, however, has been ideal for my usage pattern, so I'm looking to replace it with a similarly sized laptop, at least in the same size category. However, it's been a while since I tried keeping track of CPUs, so I have a question. The old laptop has an Intel Pentium M 760 1.86GHz processor. One laptop I found online has an Intel Pentium SU4100 1.3GHz dual-core. This type of processor seems to be quite common in the price and size range I've been looking at. What kind of relative performance boost could I expect from the old one to the new one? I am not expecting "about 7.45x the speed", but some indication would be nice. For instance, dual-core tells me it might be akin to 2.6GHz, but I assume I can't simply compare 1.86GHz to 2.6GHz and expect the new one to run about 1.4x as fast; I expect more from newer CPUs these days. Or is that unrealistic for this kind of processor? Do I need to up my price range and go for a 2+ GHz processor?

    Read the article

  • Querying a CSV file

    - by sheepsimulator
    Does anyone know of a simple tool that will open up a CSV file and let you do basic, SQL-esque queries on it? Like a graphical tool of sorts, one that is easy to use. I know I could write a small script to import the CSV into an SQLite database, but since I imagine someone else thought of this before me, I just wanted to ask whether one exists. What's prompting this question is that I am getting frustrated with Excel's limited filtering capabilities. Perhaps some other data visualization/manipulation tool would provide similar functionality. Free or OSS is preferred, but I'm open to any suggestions. EDIT: I really would prefer some clear tutorials on how to do this instead of just "make your sheet an ODBC entry" or "write programs using ODBC files", or more ideas on apps to use. Note: I cannot use MS Access. Yet another EDIT: I'm still open to solutions using SQLite. My platform is a semi-ancient Win2k laptop with a P4 in it. It's quite slow, so a resource-light solution is ideal and would likely get the win.
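    For reference, a sketch of the SQLite route mentioned above, since it is only a few commands (data.csv and mytable are placeholders; recent sqlite3 builds create the table from the header row, older ones need the table created first):

      # load the CSV into a table
      printf '.mode csv\n.import data.csv mytable\n' | sqlite3 mydata.db

      # then query it like any other SQL table
      sqlite3 -header -column mydata.db "SELECT col1, COUNT(*) FROM mytable GROUP BY col1 ORDER BY 2 DESC LIMIT 10;"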

    Read the article

  • Apache2, Tomcat6, and proxy redirects

    - by Randal Hale
    So here is my question - go easy and slow. I'm a GIS consultant and a general hack with Linux. I inherited this volunteer job essentially because I knew more than the rest of the team - or the rest of the team isn't as stubborn as I am... That said, a number of people had been mucking around in the server before I got involved, so I've been cleaning up a lot of things. The domain names have been changed to protect the innocent. I have a server running Apache2 (port 80) and Tomcat6 (8080) on Ubuntu Server 10.04. There is a virtual host on Apache2 called "Runner" (the domain is runner.org). I have mod_proxy loaded. I am trying to redirect everyone that visits runner.org to http://some.ip.address:8080/openrunner-webapp/ So far I've got runner.org assigned to the Apache2 server. Someone set up a redirect in the httpd.conf file, but I believe it needs to go into the virtual host. I tried setting the redirect in the virtual host as:

      ProxyPass / http://localhost:8080/openrunner-webapp

    All that does is show me the root of the Apache webserver. Anyway, I'm stuck.
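    For reference, a sketch of what the vhost-level proxying might look like (assuming mod_proxy and mod_proxy_http are both loaded; note the trailing slashes on both sides and the ProxyPassReverse line, which rewrites the redirects Tomcat sends back so browsers stay on runner.org):

      <VirtualHost *:80>
          ServerName runner.org

          ProxyPass        / http://localhost:8080/openrunner-webapp/
          ProxyPassReverse / http://localhost:8080/openrunner-webapp/
      </VirtualHost>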

    Read the article

  • Locating memory leak in Apache httpd process, PHP/Doctrine-based application

    - by Sam
    I have a PHP application using these components: Apache 2.2.3-31 on CentOS 5.4, PHP 5.2.10, Xdebug 2.0.5 with remote debugging enabled, APC 3.0.19, Doctrine ORM for PHP 1.2.1 using query caching and result caching via APC, and MySQL 5.0.77 using query caching. I've noticed that when I start up Apache, I eventually end up with 10 child processes. As time goes on, each process grows in memory until each one approaches 10% of available memory, which begins to slow the server to a crawl since together they grow to take up 100% of memory. Here is a snapshot of my top output:

      PID   USER    PR  NI  VIRT   RES  SHR S %CPU %MEM   TIME+  COMMAND
      1471  apache  16   0  626m  201m  18m S  0.0 10.2  1:11.02 httpd
      1470  apache  16   0  622m  198m  18m S  0.0 10.1  1:14.49 httpd
      1469  apache  16   0  619m  197m  18m S  0.0 10.0  1:11.98 httpd
      1462  apache  18   0  622m  197m  18m S  0.0 10.0  1:11.27 httpd
      1460  apache  15   0  622m  195m  18m S  0.0 10.0  1:12.73 httpd
      1459  apache  16   0  618m  191m  18m S  0.0  9.7  1:13.00 httpd
      1461  apache  18   0  616m  190m  18m S  0.0  9.7  1:14.09 httpd
      1468  apache  18   0  613m  190m  18m S  0.0  9.7  1:12.67 httpd
      7919  apache  18   0  116m   75m  15m S  0.0  3.8  0:19.86 httpd
      9486  apache  16   0 97.7m   56m  14m S  0.0  2.9  0:13.51 httpd

    I have no long-running scripts (they all terminate eventually, the longest being maybe 2 minutes long), and I am working under the assumption that once each script terminates, the memory it uses gets deallocated (maybe someone can correct me on that). My hunch is that it could be APC, since it stores data between requests, but at the same time it seems weird that it would store data inside the httpd process. How can I track down which part of my app is causing the memory leak? What tools can I use to see how the memory usage is growing inside the httpd process and what is contributing to it?
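    For reference, a sketch of the kind of per-process measurement that can narrow this down (the PID is taken from the top output above; pmap, PHP's memory_get_peak_usage and Apache's MaxRequestsPerChild are suggestions, not anything the setup above is known to use):

      # watch a single child's writable/private memory grow between requests
      watch -n5 'pmap -d 1471 | tail -n 1'

      # log PHP's own peak allocation per request, e.g. from a shutdown function:
      #   error_log('peak: ' . memory_get_peak_usage(true));

      # and as a stopgap, Apache can recycle children before they get this big:
      #   MaxRequestsPerChild 500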

    Read the article

  • How to get an ARM CPU clock speed in Linux?

    - by MiKy
    I have an ARM-based embedded machine built around an S3C2416 board. According to the specifications I have available, it should have a 533 MHz ARM9 (an ARM926EJ-S according to /proc/cpuinfo), however the software running on it "feels" slow compared to the same software on my Android phone with a 528MHz ARM CPU. /proc/cpuinfo tells me that BogoMIPS is 266.24. I know that I should not trust BogoMIPS regarding performance ("Bogo" = bogus), but I would like to get a measurement of the actual CPU speed. On x86, I could use the rdtsc instruction to read the time stamp counter, wait a second (sleep(1)), read the counter again to get an approximation of the CPU speed, and in my experience this value was close enough to the real CPU speed. How can I find the actual CPU speed of a given ARM processor? Update: I found a simple Pi calculator, which I compiled both for my Android phone and for the ARM board. The results are as follows:

      S3C2416
      # cat /proc/cpuinfo
      Processor : ARM926EJ-S rev 5 (v5l)
      BogoMIPS  : 266.24
      Features  : swp half fastmult edsp java
      ...
      # ./pi_arm 10000
      Calculation of PI using FFT and AGM, ver. LG1.1.2-MP1.5.2a.memsave
      ...
      8.50 sec. (real time)

      Android
      # cat /proc/cpuinfo
      Processor : ARMv6-compatible processor rev 2 (v6l)
      BogoMIPS  : 527.56
      Features  : swp half thumb fastmult edsp java
      # ./pi_android 10000
      Calculation of PI using FFT and AGM, ver. LG1.1.2-MP1.5.2a.memsave
      ...
      5.95 sec. (real time)

    So it seems that the ARM926EJ-S is slower than my Android phone, but not half the speed, as the BogoMIPS figures would suggest. I am still unsure about the clock speed of the ARM9 CPU.
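    For what it's worth, a sketch of the cpufreq route, which reports the real clock when the board's kernel exposes it (an assumption; if cpufreq support isn't compiled in, these sysfs files simply won't exist):

      cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq   # current clock in kHz
      cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq   # maximum clock in kHz

      # some ARM platforms also print the clock setup during boot
      dmesg | grep -iE 'clock|mhz'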

    Read the article

  • managing a high traffic media sharing website

    - by Jordan Westerman
    I'm in the process of developing a website that I predict will generate a lot of traffic. The site will be similar to many other sites offering free media streaming: MP3s. We are going to start with a pretty minimal amount of media to share, but the basic idea is that artists will set up a profile page with music they have made available, and consumers will visit the page and listen to the music. We are starting with just a handful of artists, but I think this project will generate more and more artist pages. Eventually I'd like to set it up so consumers can create personalized playlists. How can I best prepare server space and bandwidth capabilities? I have a small team of web designers and programmers working on the site, as I am pretty illiterate when it comes to site management. As the ringleader of this organization, I am more or less looking for financial requirements and monthly burn rate estimates. I don't have a ton of capital to start with; I am putting together a business plan and seeking investment. I have a game plan to grow fast enough to be successful, and slowly enough to manage the financial growth requirements. Are there any questions I may have failed to ask myself? Is it realistic to start this project on a shared server and upgrade later? Any financial advice you think I can use? I really appreciate any advice given, as this is my first business venture. Thank you all in advance. Jordan Westerman, D.B.A. Badfish Productions, LLC

    Read the article

  • MySQL 5.6 won't start on OS X - ambiguous option

    - by MaticPetek
    I would like to try MySQL 5.6 on my machine, but I cannot start it. I always get an error:

      [ERROR] /usr/local/mysql-5.6.5-m8-osx10.6-x86/bin/mysqld: ambiguous option '--log=/var/log/mysqld.log' (log-bin, log_slave_updates)

    my.cnf:

      [mysqld]
      pid-file=/usr/local/mysql-5.6.5-m8-osx10.6-x86/mysql.pid
      log-error=/usr/local/mysql-5.6.5-m8-osx10.6-x86/data/mysql-error.log
      log-slow-queries=/usr/local/mysql-5.6.5-m8-osx10.6-x86/data/mysql-slowquery.log
      log-bin=/usr/local/mysql-5.6.5-m8-osx10.6-x86/data/mysql-bin.log
      general_log_file=/usr/local/mysql-5.6.5-m8-osx10.6-x86/data/mysql-general_log_file.log
      log=/usr/local/mysql-5.6.5-m8-osx10.6-x86/data/mysql.log

    I have tried setting the "log" and "log-bin" parameters in the my.cnf file and also as start parameters for mysqld, but with no luck. Any idea what I can do? Thank you. My environment: OS X 10.6.8, mysql-5.6.5-m8-osx10.6-x86 (not the _x64 version). Note: I'm also running MySQL 5.5 on this machine (different port and socket). I have also tried stopping that instance, but I get the same error.
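    For reference, a sketch of what the logging section might look like with 5.6-era option names; if I read the 5.6 changes correctly, the old "log" and "log-slow-queries" options were removed, which is why mysqld now treats --log as an ambiguous prefix of log-bin/log_slave_updates:

      [mysqld]
      general_log = 1
      general_log_file = /usr/local/mysql-5.6.5-m8-osx10.6-x86/data/mysql.log
      slow_query_log = 1
      slow_query_log_file = /usr/local/mysql-5.6.5-m8-osx10.6-x86/data/mysql-slowquery.log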

    Read the article

  • needing storage integrity (write/read) test - for BASH

    - by Mr. Bash
    I'm in need of shell scripts / bash commands to verify the data integrity of local hard drives, USB drives, etc. - something like the famous www.heise.de/download/h2testw, or something that is at least common within repositories. (h2testw writes a specific data string over and over onto the medium, then reads it again to verify that it was written correctly, and displays write/read time/speed.) Please no dd if=/dev/random of=/dev/sdx bs=1k && dd if=/dev/sdx of=/dev/null bs=1k, since it won't verify whether everything was written correctly; it only tests whether reading from and writing to the device succeed. So far I'm not too happy with badblocks -w -v /dev/sdx1 either, since it seems rather slow and I don't know exactly what it writes, or whether it considers wear-leveling on flash media. There is also a program named F3, http://oss.digirati.com.br/f3/, that needs to be compiled. Designed after h2testw, the concept sounds interesting; I'd just rather have it as a ready-to-go bash script.
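    For reference, a rough h2testw-style sketch of the kind of script this asks for: fill the mount point with chunks of random data, remember their checksums, drop the page cache and verify on read-back (MOUNT is a placeholder for the mount point of the device under test, not the raw device; the cache drop needs root):

      #!/bin/bash
      MOUNT=/media/usbstick
      sums=/tmp/chunks.sha256
      : > "$sums"
      i=0
      # write 64 MiB chunks of random data until the device is full
      while dd if=/dev/urandom of="$MOUNT/chunk$i" bs=1M count=64 conv=fdatasync 2>/dev/null; do
          sha256sum "$MOUNT/chunk$i" >> "$sums"
          i=$((i+1))
      done
      sync
      # make sure verification really reads the medium, not the page cache
      echo 3 > /proc/sys/vm/drop_caches
      sha256sum -c "$sums"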

    Read the article

  • IIS High use & Server Performance issues

    - by HaydnWVN
    I have an SBS 2011 server running Exchange, a database application and a few other things, serving 5 users (3 low-use, 1 high). The server was never specced for the database application, so it isn't as powerful as I'd like... only 12GB RAM. We have increasingly found performance problems with this server; last week it was so bad I couldn't even connect remotely. To free up some available RAM I have (over the past month or so): restricted the Exchange message store to 1GB, with (so far) no ill effects; and restricted the SQL databases (including SBSMonitoring and SharePoint/##SSEE, which isn't used). Now I am finding that IIS worker processes are using up the available memory, and I have (so far) been unable to track down much useful information about restricting them. This server is not 'serving' anything web-based apart from OWA, which I am finding people using because Outlook is so slow (again related to the server's performance). I am aware that Exchange on SBS 2011 is designed to use up available resources (and to concede them when other applications ask), but it is not doing so (or anywhere near fast enough) for our needs. Opening the database application (which uses Postgres) takes 5+ minutes from client machines and regularly times out or crashes because of this. After a reboot (before the SQL/Exchange/IIS databases are very large/fully cached) we get the performance we need and expect. Previously a reboot once a month was enough... then once a week... now they have taken to rebooting it almost daily!

    Read the article

  • Windows XP Installation problems

    - by Samurai Waffle
    I'm having trouble installing Windows XP on a computer... My friend gave me her old computer; it was riddled with viruses and ran extremely slowly. I did my best to clean it out, and after a bit I discovered it had a boot-sector virus. So I downloaded the Ultimate Boot CD (installed it on a flash drive) and ran Darik's Boot and Nuke to completely wipe the hard drive. I then tried to reinstall Windows XP from a USB drive... It doesn't work; the computer just stalls and never boots. The computer's DVD drive doesn't work, so I borrowed a spare drive that another friend had and tried to run a Windows XP CD. For a bit I got the Stop 7B error, but now it just stalls like the USB drive does. Since then I've booted back into the Ultimate Boot CD, run Partition Magic, repartitioned the hard drive, and copied the files from the Windows CD to the hard drive. I was wondering if there is any way I can make it run setup.exe off the hard drive. I have the UBCD at my disposal, but have yet to come up with a way to do it. Any help is greatly appreciated.

    Read the article

  • Opera 10.5 RAM usage and Google Reader?

    - by David
    Hi all, today I switched to Opera 10.5 from Google Chrome and I have two really important questions about it. 1) Is it normal for it to use SO MUCH RAM?! Closing tabs doesn't help, but opening new ones adds to the usage. I can have just 4 tabs open and it goes up to the 300MB mark, and I only have 1.5GB in my laptop, 596MB of which is used by the graphics card, so this is really unacceptable. Is there a way to fix it? 2) Why does Google Reader feel so slow and unresponsive in it? It lags badly when I just try scrolling through the page, and I know Opera is known for being really smooth while scrolling through pages. There's also a white bar at the bottom of the page that I can't get rid of; it blocks the "Next" and "Previous" buttons. The text between articles is also sort of intersecting, which just looks completely unattractive and isn't something I'm used to with any web browser. I realize there's a built-in RSS reader, but it doesn't sync across multiple computers and is very slow to update. Here are my specs: Windows 7 Ultimate (x86), Intel Pentium M 1.86 GHz, 1.5GB RAM, ATI Mobility Radeon X600 (64MB dedicated, 596MB shared).

    Read the article

  • What LTO 4 drive to buy

    - by pplrppl
    Evan Anderson mentioned in another answer that you could buy an LTO-4 (autoloader, 1 tape/day) for $4,566.00 (the discussion included the total cost of tapes for a specific rotation), but I don't know the specifics of what he or you would recommend for the actual drive and, if necessary, the controller. Show me a Newegg URL, or CDW, Dell, HP, or whatever your favorite vendor would be for your solution if you don't mind looking it up, or just give me a brand and a model number and I'll be glad to do the legwork myself. I currently have on hand an external LTO-3 drive that uses an LVD SCSI interface (and thus a controller card that has an external LVD SCSI connector). If that card isn't sufficient to interface with an LTO-4 drive, let me know. http://www.fujifilmusa.com/shared/bin/LTO_Overview.pdf shows minimum tape speeds for LTO-4 and other LTO formats. It looks like the IBM LTO-4 actually has a lower minimum speed than the IBM LTO-3. Either way, my average server is too slow to feed LTO-3/4 without shoe-shining, so I'm looking for a drive with a low minimum write speed. If you trust the PDF from 2008, that makes my choices the IBM LTO-4 full height, the IBM LTO-4 half height, or the HP LTO-4 half height, but presumably there are other options out there that weren't mentioned in the Fuji PDF. Again, I'm looking for a specific recommendation on a drive to buy (and the controller if needed).

    Read the article

  • cannot log into mysql locally

    - by Lostsoul
    When I try to log into MySQL locally using the command mysql -u root -p, I get this error:

      ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)

    I can access the server remotely (not as root) and my web pages are using MySQL fine, but locally I cannot log on (which I need to do because I need to create some users). The only change I made was to attach another drive to the server and move the SQL data there. Here's my.cnf:

      [mysqld]
      datadir=/media/ephemeral0/data/mysql
      socket=/media/ephemeral0/data/mysql/mysql.sock
      user=mysql
      # Disabling symbolic-links is recommended to prevent assorted security risks
      symbolic-links=0
      # adding more config
      skip-external-locking
      long_query_time=1
      slow_query_log
      slow_query_log_file=/var/log/log-slow-queries.log
      log-bin=mysql-bin
      server-id=1

      [mysqld_safe]
      log-error=/var/log/mysqld.log
      pid-file=/var/run/mysqld/mysqld.pid
      myisam_recover_options

    I read that I need to edit the socket info in my.cnf to make sure it points to the right socket file. I double-checked, and the file exists (although its mode starts with an "s" when I do ls -l: "srwxrwxrwx 1 mysql mysql 0 Jun 21 03:43 mysql.sock"). I'm not really sure how to resolve this. I have tried rebooting and running yum update to make sure I was running the latest packages. Please help!
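    For reference, a sketch of pointing the client at the relocated socket, since the error shows the client still looking in the old /var/lib/mysql location (the path below is copied from the [mysqld] section above):

      # one-off: tell the client where the socket now lives
      mysql -u root -p --socket=/media/ephemeral0/data/mysql/mysql.sock

      # or persistently, by adding a client section to my.cnf:
      #   [client]
      #   socket=/media/ephemeral0/data/mysql/mysql.sock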

    Read the article

  • How to "swap in" memory from the page file back to physical memory in Windows at once (like Linux swapoff)

    - by Arnout
    Is there a way to "swap back in" memory on a Windows PC, i.e. to put back into physical RAM all the data that was pushed out to the page file (or swap, whatever you prefer)? On Linux, one can easily do this with swapoff /dev/sdaX, where sdaX is the swap partition. On Windows, it seems to ask me to reboot each time. The reason I'd like to do this is that, even though swapping the data out to the page file allows me to play a resource-hungry game fully in physical RAM, when I stop the game all the rest of my programs run slowly. This is of course normal; all the programs were pushed into the page file because my RAM was too small, and every memory access to those programs after gaming runs into hard page faults, with major delays and some frustration as a consequence. However, that frustration could easily be avoided by simply allowing the PC to copy all the data back into physical memory for a minute or so, and then resuming work on a fast PC (rather than having to endure the slowness while working). Thanks in advance for any advice on this! Kind regards

    Read the article

  • Group Policy installation failed error 1274

    - by David Thomas Garcia
    I'm trying to deploy an MSI via Group Policy in Active Directory, but these are the errors I'm getting in the System event log after logging in:

      The assignment of application XStandard from policy install failed. The error was : %%1274
      The removal of the assignment of application XStandard from policy install failed. The error was : %%2
      Failed to apply changes to software installation settings. The installation of software deployed through Group Policy for this user has been delayed until the next logon because the changes must be applied before the user logon. The error was : %%1274
      The Group Policy Client Side Extension Software Installation was unable to apply one or more settings because the changes must be processed before system startup or user logon. The system will wait for Group Policy processing to finish completely before the next startup or logon for this user, and this may result in slow startup and boot performance.

    When I reboot and log in again, I simply get the same messages about needing to perform the update before the next logon. I'm on a Windows Vista 32-bit laptop. I'm rather new to deploying via Group Policy, so what other information would be helpful in determining the issue? I tried a different MSI with the same results. I'm able to install the MSI using the command line and msiexec when logged into the computer, so I know the MSI is working OK at least.

    Read the article

  • Embedding a WMV file on the web via URL in a Powerpoint presentation

    - by Dave
    I've got a situation where I want to distribute a PowerPoint presentation to several people. I want to be able to embed several large videos in this presentation by linking to a URL, for the following specific reasons: the videos are highly confidential, and I would like to be able to delete them at some later date, while still allowing people to see them in the presentation while they are online; I want to send the presentation via email (so it should be small) and put the links on a server with a faster upload speed; and maybe I'd like to change the video at some point without changing the presentation. One option that addresses the first reason is to hook up a webcam and let them watch a video stream from the office, but our upload rate is too slow for this to be a viable option. I've tried embedding a video and giving PowerPoint the URL. It seems to work initially, because the first frame appears in my slideshow. However, when I play the slideshow, nothing happens. I looked at the network traffic on my computer, and nothing was getting downloaded from the remote server. Any suggestions on how to make this work, or how to at least satisfy the criteria listed above, would be great!

    Read the article

  • osx bash grep - finding search terms in a large file with one single line

    - by unsynchronized
    Is there a simple Unix command line I can enter which lets me isolate, say, 512 bytes either side of a search term, even if there is only one "line" in a very large text file? OK, this should be easy. Famous last words. I'm not that familiar with grep, but it seems it is mainly used to filter out lines in the input that contain search terms. I have a very large JSON file that I downloaded and that I want to search for a particular term. Before you click the link - it's over 244MB, so be warned - it is from the Internet Wayback Machine and contains lists of zip files of archived photos. I am trying to find mine. Their web interface is broken, so I found the JSON file that they make public here - it's the last one on the list. When I grep for my username, it finds it, but proceeds to dump that line to the console. The problem is that the line is 244MB long, and it's the only line in the file. I tried using less, but could not get that to do much - it's very slow, and seems to have the same issue. Is there a simple Unix command line I can enter which lets me isolate say 512 bytes either side of a search term?
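    For reference, a sketch of the kind of one-liner this asks for (MYUSERNAME and wayback.json are placeholders):

      # print only the match plus up to 512 bytes of context on either side
      # (some grep builds cap the {n,m} bound at 255; reduce the numbers if yours complains)
      grep -oE ".{0,512}MYUSERNAME.{0,512}" wayback.json

      # or, if the regex route is too slow on a 244MB line: find the byte offset
      # of the first match, then carve the window out with dd
      off=$(grep -bo "MYUSERNAME" wayback.json | head -n 1 | cut -d: -f1)
      dd if=wayback.json bs=1 skip=$(( off > 512 ? off - 512 : 0 )) count=1100 2>/dev/null; echo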

    Read the article
