Search Results

Search found 15376 results on 616 pages for 'once'.

  • Windows takes a very long time to shut down even in safe mode

    - by user1526247
    On Windows 7 the computer freezes for about 5 minutes once it gets to "Shutting down...". I can't remember when it started happening. I just lived with it for a while. The first thing I tried was a full scan using Microsoft Security Essentials. This did not solve the problem. I then went into msconfig and turned off everything I could get away with in the Startup and Services tabs. This did not solve the problem. I then uninstalled every program on this computer save the most basic programs. This did not solve the problem (I did not uninstall drivers or Catalyst). I then went through and turned off every single service and did a reboot. This did not solve the problem. I then booted into safe mode and just tried shutting it down. The problem even happens in safe mode. I have tried examining the event logs, but with no success. They just say things like "blah blah has entered the stopped state" with no real clues about what program is causing me all this grief. It may be worth noting that Ubuntu is installed on the same computer and the Ubuntu boot loader is the one being used.

    Read the article

  • Hard Reset USB in Ubuntu 10.04

    - by Cory
    I have a USB device (a modem) that is really finicky. Sometimes it works fine, but other times it refuses to connect. The only solution I have found to fix it once it gets into a bad state is to physically unplug the device and plug it back in. However, I don't always have physical access to the machine it is plugged in on, so I'm looking for a way to do this through the command line. This post suggests running: $ sudo modprobe -w -r usb_storage; sudo modprobe usb_storage However I get an "unknown option -w" output. This slightly modified command: $ sudo modprobe -r usb_storage Fails with the message FATAL: Module usb_storage is in use. If I try to kill -9 the processes marked [usb-storage] before running they refuse to die (I think because they are deeply tied to the kernel). Anyone know of a way to do this? NOTE: I cross-posted this on serverfault as I didn't know which was more appropriate. I will delete and/or link whichever one is answered first.
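    Short of a remotely switchable powered hub, one thing that sometimes works without unloading usb_storage is forcing a re-enumeration through sysfs, which behaves much like a physical replug. A rough sketch - the device name here is a placeholder and would need to be looked up first with `ls /sys/bus/usb/devices/`, `lsusb -t` or dmesg:
        #!/bin/bash
        # Unbind and rebind the device from the generic "usb" driver.
        # The name is bus-port, e.g. "1-4" = bus 1, port 4 (placeholder below).
        DEV=1-4
        echo "$DEV" | sudo tee /sys/bus/usb/drivers/usb/unbind
        sleep 2
        echo "$DEV" | sudo tee /sys/bus/usb/drivers/usb/bind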

    Read the article

  • Silent install of Japanese Language Pack in Win7

    - by Doltknuckle
    Every year, due to re-imaging, I am forced to find a way to install the Japanese language pack on a collection of 30 computers. Each year I look for a way to automate this process, and each year I am forced to do this manually. Maybe this year will be different. Has anyone had any luck with installing and configuring Far East language support for Windows 7 without user interaction? I have already downloaded KB972813 and have a way to get it out to the computers. What I normally do is this: run the EXE with the default settings. Open up language settings and create the JP keyboard. Configure the language bar settings. Copy settings to the default user. Delete the local user cache. Sign the different user accounts in to make sure that the default settings are correct. This whole process takes about 10 minutes; multiply that out by 30 machines and you are looking at a 5-hour process. If I can log into all of the computers at once, I can normally cut that down to about an hour. Any ideas would be appreciated. Thanks in advance.

    Read the article

  • User unable to delete folder / files "File in use by another user" Server 2003

    - by Az
    I am administering a standalone Windows 2003 Terminal Server with no domain membership. Occasionally (about once a week or so) a user will attempt to delete a sub-folder in a shared folder and gets denied with "File in use by another user". I tried checking the shared folder snap-in, and that folder is not open. She has full control and is the owner of the folder as well. I even checked "Effective Permissions" for some of the folders / files she can't delete, and she truly has full control. I am able to delete the folder as Administrator with no problem. Another odd thing: she can delete the files IN the folder most of the time (this issue happens on both folders and files in the share). Sometimes merely waiting a day or two will allow her to delete the folder or files. I am curious as to why she gets the message that it is in use as creator/owner with full control, yet I don't get it simply as a member of the Admin group. If anyone out there has any ideas I'd love to hear them! THANK YOU.

    Read the article

  • Executing a git command using remote powershell results in a NativeCommmandError

    - by user204777
    I am getting an error while executing a remote PowerShell script. From my local machine I am running a PowerShell script that uses Invoke-Command to cd into a directory on a remote Amazon Windows Server instance, and a subsequent Invoke-Command to execute a script that lives on that server instance. The script on the server is trying to git clone a repository from GitHub. I can successfully do things in the server script like "ls" or even "git --version". However, git clone, git pull, etc. result in the following error:
        Cloning into 'MyRepo'...
        + CategoryInfo : NotSpecified: (Cloning into 'MyRepo'...:String) [], RemoteException
        + FullyQualifiedErrorId : NativeCommandError
    This is my first time using PowerShell or a Windows Server. Can anyone provide some direction on this problem? The client script:
        $s = new-pssession -computername $server -credential $user
        invoke-command -session $s -scriptblock { cd C:\Repos; ls }
        invoke-command -session $s -scriptblock { param ($repo, $branch) & '.\clone.ps1' -repository $repo -branch $branch} -ArgumentList $repository, $branch
        exit-pssession
    The server script:
        param([string]$repository = "repository", [string]$branch = "branch")
        git --version
        start-process -FilePath git -ArgumentList ("clone", "-b $branch https://github.com/MyGithub/$repository.git") -Wait
    I've changed the server script to use Start-Process and it is no longer throwing the exception. It creates the new repository directory and the .git directory, but doesn't write any of the files from the GitHub repository. This smells like a permissions issue. Once again, invoking the script manually (remote desktop into the Amazon box and execute it from PowerShell) works like a charm.

    Read the article

  • How to fix "The connection was restarted"?

    - by Altar
    I have a PHP script with AJAX. A few days ago, without making any changes, the script stopped working. The script works sometimes, but it is very slow. Other times it doesn't load completely. Yet other times it loads an empty page or shows a "The connection was restarted" message. We have tried loading the page from 4 different computers in two different countries (two different ISPs). There is nothing relevant in the server's log. We contacted iPage.com hosting support. We sent screenshots. And all we managed to get from them was an incomplete screenshot and the message 'We tested. The page loads'. So, they won't admit the fault. It looks like they loaded the page once, declared it working, and went back to playing solitaire. I know their support. We have had all kinds of problems with iPage hosting. My questions are: 1. Is there any way to make this error easier to reproduce? I mean, instead of fixing the error, to get it to appear more often. 2. Is there any way to test the server's responses, to see if it drops connections?
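    One way to make an intermittent failure like this easier to observe is to hammer the page from the command line and log every response; connection resets then show up as curl exit codes 52 or 56 rather than a browser error page. A minimal sketch, with the URL as a placeholder and the counts chosen arbitrarily:
        #!/bin/bash
        # Request the page repeatedly and log HTTP status, total time, and curl's exit code.
        URL="http://example.com/index.php"   # placeholder - replace with the real page
        for i in $(seq 1 200); do
            code=$(curl -s -o /dev/null -w '%{http_code} %{time_total}' --max-time 30 "$URL"); rc=$?
            echo "$(date '+%H:%M:%S') attempt=$i status=$code curl_exit=$rc"
            sleep 2
        done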

    Read the article

  • Data loss by randomly unplugging the computer during runtime

    - by Kan
    I'm from Austria, and we and the Germans have some sort of bad science show which runs every day. What I call it would roughly translate to "half-knowledge", if you will. By the way: it is called "Galileo". So they thought they'd make a computer myth-busters video right now, and I couldn't believe what I saw and heard... The strangest thing to me was that they asked: "Does unplugging the computer damage your data?" Then they started up some machine with Vista on it, started copying some files and randomly unplugged the PC cable, the whole thing around 50 times. After their computer continued to start up normally, they just said "nothing can happen, your data or computer can't be damaged". They of course excluded unsaved data in running programs like text editors from this. I asked myself: what the hell are their "computer experts" saying? You can't tell, by unplugging the cable 50 times, whether that can damage your computer. Can unplugging the cable during runtime cause data loss (which the moderator of the show denied)? (I destroyed my Windows registry once during a reset.)

    Read the article

  • Can't get 1440x900 resolution with GRUB2 although vbeinfo says it's available

    - by TomSW
    I'm trying to use GRUB2 in graphical mode with 1440x900 resolution, but the result is always garbled nonsense: the highest resolution I can get is 1280x800. Word from googling is that as long as vbeinfo lists a resolution, GRUB2 can use it. This doesn't seem to be true: vbeinfo says that 1440x900 is available, but it doesn't work. Testing it from the GRUB2 command line:
        set gfxmode=1440x900
        terminal_output gfxterm   # -> garbled nonsense
        # back to trusty 640x480
        terminal_output console
    The graphics card is an Intel GM965. Once Linux boots, the framebuffer switches to 1440x900. Added after epheminent's reply and various experiments: vbeinfo lists two sets of modes. The first set runs from 0x160 to 0x16b, with resolutions 768x480, 960x600, 1280x800 and 1440x900. Then - after a bunch of text-only modes - the second set, containing resolutions 1024x768, 800x600, and 640x480. The first set of modes aren't altered by 915resolution. They all work except 1440x900. The resolution of modes in the second set can be altered using the 915resolution module / command available in GRUB2 >= 1.99.
        # in /boot/grub/grub.cfg
        insmod 915resolution
        # 30, 32, 34 all work for me: all that varies is which modes are altered
        915resolution 30 1440 900
        # setting an impossible resolution changes the mode to "text-only"
        # in my case 1280x1024 is not supported
        915resolution 30 1280 1024
    Clearly, 1440x900 should just work: adding it with 915resolution is just a workaround.
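    For reference, a hedged grub.cfg sketch that asks for 1440x900 but falls back if the mode is rejected; GRUB walks the gfxmode list in order until one mode works. The exact fallback list is an assumption, and the file would normally be regenerated with update-grub rather than edited by hand:
        insmod vbe
        insmod gfxterm
        set gfxmode=1440x900,1280x800,auto   # try each mode in turn until one works
        set gfxpayload=keep                  # keep the chosen mode when handing off to Linux
        terminal_output gfxterm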

    Read the article

  • Wifi antenna extension with F-connector/RG-6(RG-59) cable?

    - by rjz2000
    In an older house, the wire mesh in the walls surrounding the furnace behaves like a Faraday cage and blocks wifi signals. It is also difficult to lay new cable; however, there is television cable to multiple locations, due to there once having been a roof-mounted television antenna. It would be relatively trivial to install the wifi router at the central distribution point, then have the antenna broadcasting/receiving the signal plugged in at each of the old television outlets. I assume that it would not be too difficult to find an adapter for SMA <- F-type connectors. The cable is actually RG-59 rather than RG-6, but I assume that it still has relatively good RF isolation along its length, which is no more than a couple hundred feet in any direction. Does anyone know a problem with the idea? Will a router get confused if there is /too little/ interference between the two antennas? Is that length of cable (~100ft) too long for the signal a router broadcasts? I have seen that it is also possible to use old ~$30/each FiOS cable modems available on eBay to extend a network over television cable. However, that seems like a less elegant solution, and might interfere with UPnP and DLNA services I'd like to have work on a single network. Thanks if anyone has answers or suggestions before I try this project!

    Read the article

  • Asterisk relay between multiple subnets

    - by immoune
    I wonder what's the best way to go when you have phones on multiple networks which are not directly reachable. I have 3 networks: 10.3.x.x, 10.6.x.x and 10.17.x.x. My Asterisk server resides on the 10.3.0.5 IP. The machines from the 10.6 and 10.17 networks are routed here through VPN tunnels. At this point we don't talk about NAT anywhere on the network, just pure routing. Since the 10.3.0.5 PBX has routes back to all the subnets, it has no problem communicating with softphones/hardphones from these ranges. The problem comes from the fact that Asterisk (as far as I understand) is only responsible for the SIP signalling part, not the audio/video transmission, which is done in P2P fashion between the devices. So although a client using Sipdroid from 10.6.x.x is able to connect to the PBX (10.3.0.5) and dial a Bria client on the 10.17.x.x network, once the phone rings and the call establishes, no audio will be transmitted, simply because it has no way to connect there directly. For this there are multiple solutions described in this text: http://msdn.microsoft.com/en-us/library/ee480411%28v=winembedded.60%29.aspx What I would prefer is to keep these networks segregated as they are now. What would be the best solution? Is it possible to actually relay all the audio/video information through the Asterisk server? That would be the best in my case. I am using AstLinux there, which has a lot of other parts. Thanks
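    For what it's worth, Asterisk can be told to stay in the media path, so RTP flows endpoint -> PBX -> endpoint instead of peer-to-peer. A minimal, untested sip.conf sketch (the peer entry is hypothetical; on older releases the option is called canreinvite):
        ; sip.conf on the 10.3.0.5 PBX
        [general]
        directmedia=no        ; do not re-INVITE endpoints to send media directly to each other
        
        [sipdroid-peer]       ; hypothetical peer entry
        type=friend
        host=dynamic
        directmedia=no
    The trade-off is that all call audio then consumes bandwidth and CPU on the Asterisk box itself.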

    Read the article

  • Backing up 80G hard drive 1G per day

    - by barrycarter
    I want to securely backup my 80G HD, but doing a complete backup takes forever and slows down my machine, so I want to backup just 1G per day. Details:
    - First hurdle: on the first day, I want to backup the "first" 1G of the hard drive. Of course, there really is no "first" 1G on a hard drive.
    - After 80 days, I'll have my whole HD backed up... assuming none of my files ever change, which of course they do. So the backup plan/program must also catch file creation/changes as they come along.
    - The backups must be consistent, in that I can restore my system by restoring the backups sequentially. In other words, "dd if=/harddrive" probably won't work.
    - The backups should encrypt file contents AND names, but I don't see this as a major hurdle.
    - Once the backup has backed up everything (even changed files), it can re-backup the first 1G on my hard drive. Even though this backup is redundant, that's OK, because I always want to be backing up something (eg, if I'm backing up to optical media, the older media might start going corrupt).
    Is there a magic backup plan/program that does this? In reality, I want to do this for multiple machines with multiple drives each, but think that solving the above will solve the general case.
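    There may be no single tool that caps itself at exactly 1G per day, but one rough way to approximate it is to let an incremental tool run for a bounded time at a bounded rate each night and resume where it left off. A hedged sketch with rsync (paths and host are placeholders, and it does not by itself encrypt file names or contents):
        #!/bin/bash
        # Nightly incremental pass, capped to roughly 1 GB:
        # 50 KB/s for 6 hours is about 1 GB; --partial lets tomorrow's run resume interrupted files.
        SRC=/home/                           # placeholder source
        DEST=backup@nas:/backups/mymachine/  # placeholder destination
        timeout 6h rsync -a --partial --bwlimit=50 "$SRC" "$DEST"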

    Read the article

  • How To Completely Move Users/Program Files/Program Files (x86)/ProgramData (Folders) To Another Partition(s) On Windows 8?

    - by Enigma83
    I am attempting to move the folders Users, Program Files, Program Files (x86), and ProgramData (at the root of the C drive) to at least 2 other partitions, preferably on a fresh install. I have read that there are methods for doing this post-install, but it seems like it would be a bit more tedious to do things that way. I want to move the 2 Program Files folders to another partition on the same HDD, and Users/ProgramData will go to yet another partition on the same HDD. I have done a bit of research on this, and read up on approaches that involve booting into Audit Mode, using the RoboCopy command to copy folders after booting from my Windows 8 USB drive, creating NTFS junctions/symbolic links, Registry edits, as well as accomplishing this automatically by creating an unattend (answer) file which Windows Setup processes automatically before the user ever logs in for the first time. I tried this morning and now have a basic installation in which programs like Internet Explorer fail to open, and certain files can't be found/opened (even if I click on them directly); an example is Regedit. Also, I can't run the Command/DOS (CMD) prompt as Administrator (or otherwise, as any other user), can't activate the real Administrator account, or open any of the Administrative Tools (despite having added them to my Start Screen). So far I have only tried RoboCopy-ing Program Files and Program Files (x86), creating junction points for them, and editing the Registry in the relevant locations. This is what I'm left with now. I also found a blog article which describes how to do this for Windows 7. So, where should I go from here and where can I find more information? And how can this be done without disabling the Metro apps, which I've read will stop working if you move ProgramData? Once I have everything moved, where do I install programs to? Do I tell them to install to C:\Program Files / C:\Program Files (x86), or to the junctioned/symbolically-linked partition/drive? I plan to test in VMware virtual machines from here on until things are working correctly, while using a baseline default install for daily tasks.

    Read the article

  • Install Peppermint OS three on Asus EeePC

    - by Kithoth
    I just got a new Asus EeePC R051CX. Out of the box, the installed OS is Ubuntu 12.04 LTS, but I am trying to install Peppermint OS Three (as single boot). Problem: once on the live CD (well, live USB stick...), I'm in trouble in both of the following situations. Try Peppermint OS Live: in this case, the first thing I get is a message reading "The system is running in low-graphics mode. Your screen, graphics card, and input device settings could not be detected correctly. You will need to configure these yourself." All I can do is press "return" to accept, then I have a list of 4 options to answer the question "What would you like to do?". But I simply can't do anything at this moment, except switching to console mode or rebooting (keyboard / mouse controls don't allow me to do anything else). Install Peppermint OS: something I really don't understand... it launches the Ubuntu Recovery Media (which was already installed when I received the device)! Also, it says at the bottom "ERROR: This recovery media only functions on Ubuntu systems." All I can do is quit (that is, reboot). One last important thing that comes to my mind: this stick worked just fine on the other computers I've tried it on. I really hope someone can shed some light on this; a friend of mine told me how cool this OS is for EeePCs. Don't want to give up! Thanks. Edit: I finally could install Peppermint, but not by understanding why I couldn't do it the logical way. Instead, I reinstalled Ubuntu myself (erasing the factory one). Then, I could simply boot from my live USB and perform a fresh install of Peppermint. So, I still don't know how and why the mentioned problem occurred.

    Read the article

  • System Issues and Major Malfunctions after Failed Hibernation Exit

    - by Sarah Seguin
    I have an HP G71-340US that went into hibernation mode for a while, and when I tried coming out of it, I got an error message: "Your computer cannot come out of hibernation. Status: 0xc000009a. Info: A fatal error occurred processing the restoration data. File: \hiberfil.sys. Any information that was not saved before the computer went into hibernation will be lost. Enter=continue" So I hit continue and it ran soooo super slow. It was seriously crawling. Finally I gave up and turned it off manually (i.e. press and hold the button). It's been a week or two since then, and EVERY SINGLE TIME I have tried to do ANYTHING, it takes forever. When I say forever, I literally mean it takes 5-7 minutes to load the internet, then the page itself, then to click a link, so on and so forth. Eventually everything just goes not responding and I have to give up (4-6 HOURS later). I also cannot access my thumb/jump drives once I've managed to load Windows. I was going to try running Malwarebytes in case of a virus, but Windows Explorer develops errors and goes not responding on me. Currently I'm running ScanDisk / chkdsk, and just about every file is coming back unreadable. I let it run the last 2 hours straight in chkdsk and I'm only at 6 percent with around 500+ errors and still going. Yes, I've taken logs of the errors via cell phone camera and patience. A week or two prior to this happening I had to change out the hard drive due to blunt force trauma next to the mouse. OH! Running on Windows 7 :) And I've tried loading the computer in safe mode and it makes absolutely no difference. Any and all help would be appreciated. I really don't know what to do from here and I'm kind of freaking out. I've googled different parts of the error and things that I've done/seen, and there are so many different answers/topics that I thought it best to just post the questions.

    Read the article

  • Nvidia 9800GT randomly prevents computer from booting

    - by Blender
    My computer has been running Windows and Linux perfectly fine with my 9800GT for the past year or so, but today it refused to boot. When I press the power button, this is what happens:
    1. Power button flashes once.
    2. Fans whir.
    3. Graphics card makes clicking noise.
    4. Computer reboots.
    5. Go back to 1.
    The cycle just keeps going, and I have to yank the cord to make the computer stop. After about 30 attempts at booting it, the computer powers on and everything works. I'm pretty sure that the graphics card isn't malfunctioning, as I've been GPU computing on it for a while now without any hiccups. But the strange thing is, the computer boots perfectly fine in only 5 boots if I remove the card. The computer is a HP Pavilion a6028x Desktop PC:
    Processor: AMD Athlon 64 X2 (W) 4600+ 2.4 GHz (AM2 socket)
    Motherboard: ECS MCP61PM-HM (Nettle 1)
    RAM: 3GB DDR2 (two different brands)
    More specs here
    Does anybody know what could be the problem? Any help or information would be greatly appreciated!

    Read the article

  • Oracle 11g Data Guard over a WAN

    - by Dave LeJeune
    Hi - We are in the process of looking at using Oracle's Data Guard to replicate our 11g instance from a colo facility in Washington DC to Chicago. To give some basics, we have approximately 25TB of storage and a healthy transaction rate in the 1-2K/sec range. Also, because we are processing data in real time, we have a 24x7x365 requirement for processing data. We don't have any respites as far as volume, except for system upgrades (once every few months) where we take the system offline but then of course experience a spike in transactions when we bring the system back online. Ideally we would want the second instance in the DG configuration semi-online in a read-only fashion for reports/etc. We evaluated DG in 10g and were not overly impressed, and research seemed to show that earlier versions had issues with replication over a WAN, but I have heard good things about the modifications the product has gone through with 11g. Can anyone confirm an instance of this size and transaction rate being replicated over a WAN, and if so, what is the general latency? Any information or experiences with a DG implementation of this size and scope would really be helpful (or larger - I also realize we are still relatively small compared to many others out there). Many thanks in advance.

    Read the article

  • Domain joining debate for Outlook 2010 with Exchange 2007 on Windows SBS 2008 for a user on a laptop that will travel a fair amount of the time

    - by user71195
    I'm basically debating whether or not to join the domain on a laptop, and was wondering if anyone has had a similar experience. If the computer were staying in the office, it's a no-brainer: join the domain. In this case I have a user who will come into the office a few days a week and work remotely the rest of the time. There is a working VPN using an OpenVPN client/server, but it's not site-to-site. My knee-jerk reaction is to not join the domain, so that the user can have one profile that they always use. In this configuration, should Outlook work properly with the user's domain account, and should the shared calendar still work (at least once inside the VPN)? My concern with joining the domain would be the inability to log in to it when elsewhere. Is there maybe a way around this with caching or something? Would creating a second local login make sense for a user like this in any way? If so, why not just skip the domain join to begin with? Any thoughts on or experiences with this would be appreciated.
    Laptop OS: Windows 7 (not purchased yet... Pro if domain needed)
    Server: SBS 2008, Exchange 2007
    Outlook version: 2010
    Thanks for any help, Mike

    Read the article

  • Sharepoint web part fails intermittently

    - by pringly
    I have a MOSS 2007 environment, 2 web servers and a DB server, load balanced between the two web servers. I deployed a web part recently, which worked fine for a while, but failed on web server 2 after a day. When it fails, it gets the error message: 'A Web Part or Web Form Control on this Page cannot be displayed or imported. The type could not be found or it is not registered as safe.' Once it has failed, it will stay that way until an IIS reset is done. The other web server never fails. I tried to force the second web server to fail to recreate the issue and have been unable to do it. I tried placing it under heavy HTTP traffic and it handled it fine. Put it back in the pool and it failed again after about 7 hours. So, if I remove the .dll for the web part from the affected web server, the web part doesn't stop working. Is this normal behavior? I checked the bin directory for the site and the global assembly cache, and there is no other copy of the .dll anywhere else on the server. Also, when checking the web part gallery, if the web part has failed it will still appear in the gallery, but when trying to add a new web part, the .dll won't be listed. I have no idea how to continue troubleshooting from here or even fix it. Any ideas?

    Read the article

  • Random HTTP 413 error on apach2/php/joomla site

    - by jfab
    I have a Joomla site, and every once in a while when I submit something via a form, I get an HTTP 413 error: "Request Entity Too Large - The requested resource /index.php does not allow request data with POST requests, or the amount of data provided in the request exceeds the capacity limit." In the error.log file I get: "Invalid Content-Length, referer: [site]/index.php". It doesn't seem this has anything to do with the actual size of the request, for the following reasons: a) I tinkered with the configuration of both Apache and PHP. In Apache I tried increasing LimitRequestBody, and in PHP post_max_size, max_input_vars, memory_limit, and even upload_max_filesize. Every value is far beyond what is sent in a typical request that generates an error. b) The error pops up quite randomly, and often just hitting refresh allows me to get through. c) I checked the request in Fiddler to make sure everything is right with the Content-Length stated in the header, and the content of the request itself. Everything appears to be in order. A curious thing is that when I resent the exact same request via Fiddler, I never got the error. It seems I can only recreate it through a browser. So I'm at my wits' end here. I don't even know where to look for the problem anymore. I don't know if it's Apache or PHP (though I can't find anything in the PHP error logs, so maybe that means Apache is the more likely culprit?), or PHP in general, or my Joomla site in particular (my bets were on Joomla until I recreated the error on a test script with a very basic POST form, though it does pop up much more often on the Joomla site). If anyone can give any advice on where to even begin with this, I'll be very grateful!
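    If it helps to narrow things down, replaying the same small POST in a loop from the command line and counting the status codes that come back would at least show whether the intermittent 413 can be triggered outside the browser, which would point toward Apache/PHP rather than the client. A rough sketch; the URL and form fields are placeholders:
        #!/bin/bash
        # Replay a fixed POST many times and histogram the HTTP status codes.
        URL="http://example.com/index.php"   # placeholder
        for i in $(seq 1 500); do
            curl -s -o /dev/null -w '%{http_code}\n' \
                 --data "option=com_test&field=value" "$URL"   # placeholder form fields
        done | sort | uniq -c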

    Read the article

  • Understanding MySQL Query Caches and when to implement it?

    - by Jeff
    On our current MySQL server the query cache is enabled:
        Qcache_hits: 31913
        Qcache_inserts: 50959
        Qcache_lowmem_prunes: 9320
        Qcache_not_cached: 209320
        Qcache_queries_in_cache: 986
        Com_update: 0
        Com_delete: 0
    I do not fully understand the query cache - I am reading about it currently and trying to understand it. Our database holds inventory data, customer data, employee data, sales data and so forth. A query is very rarely run more than once. The most likely case of a query being run twice is viewing specific sales information twice. But basically everything in our system changes constantly. It is always being updated, deleted, inserted, and off the top of my head I can't picture users running the same query twice within a week. Do I even need to have the query cache enabled? I am guessing that the inserts mean 51k entries have been added, but only 986 of those are being stored? Would an idea be to refresh the cache, watch it for a week, and check how many of the queries in the cache are accessed, maybe on a weekly basis, to see if it is actually returning any benefits? Any help/guidance on this is appreciated, thanks.
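    One low-effort way to run the "watch it for a week" idea from the command line; the login details are placeholders, and the interesting number is cache hits relative to total SELECT traffic:
        #!/bin/bash
        # Snapshot the query-cache counters now, then again in a week, and compare the deltas.
        mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Qcache%'; SHOW GLOBAL STATUS LIKE 'Com_select';"
        # Rough hit rate over the period: delta(Qcache_hits) / (delta(Qcache_hits) + delta(Com_select)).
        # RESET QUERY CACHE empties the cached result sets (the counters keep accumulating):
        mysql -u root -p -e "RESET QUERY CACHE;"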

    Read the article

  • How to backup Servers to an SSH-Host with low traffic and access to versions and encryption?

    - by leto
    Hello, I've not run backups of my personal stuff for the past I-don't-remember-how-many years, until waking up lately and realising, contrary to my prior belief: Actually. I care! :) Now I have a central data server at home where I want to attach an external medium, to which I want to save backups of my most important stuff, like years of self-written scripts, database dumps, you name it. I've tinkered with rsync+ssh over the last two years, also tried tar over ssh, but don't know the simplest and most easy to maintain way to do it yet. Here's my workload:
    - A typical LAMP server (<5GB data) which I'd like to back up fully, so lots of small files, connected via 10Mbit
    - My personal stuff (<750GB data) from a Mac, connected via GE
    - My passwords in an encrypted container (100Mb) from OpenBSD, connected via serial PPP
    - My e-mail from the last ten years (<25GB) as Maildir, which I need to keep in readable format
    - Some archives (tar.*) which I need to back up only once and keep in readable format
    (Deleted my ideas, as I'm here for suggestions.) What I need:
    1. Use an ssh tunnel for data transfer
    2. Be quick with lots of small files
    3. Keep revisions
    4. Be sure the data I save is not corrupted
    5. Intelligent resume functions and be able to deal with network congestion :)
    6. Compressed and optionally encrypted storage
    7. Be able to extract data from the backup easily (filesystem-like usage would be nice)
    How, and with what software, would you back up this stuff? Hints at tools that can help solve only part of my problem (like encryption) are also greatly appreciated. Greets
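    Not the only possible answer, but one common pattern that covers the ssh transport, small-file speed, revisions, and resumability points is hard-linked rsync snapshots. A hedged sketch (host, paths, and retention are placeholders, the target directory is assumed to exist, and at-rest encryption would still need something layered on top, e.g. an encrypted filesystem on the external media):
        #!/bin/bash
        # Daily hard-linked snapshot over ssh: unchanged files become hard links to
        # yesterday's copy, so each revision is a browsable full tree but only changes cost space.
        SRC=/home/me/                 # placeholder source
        HOST=backup@dataserver        # placeholder ssh target
        BASE=/backups/laptop          # placeholder path on the target
        TODAY=$(date +%F)
        
        rsync -az --delete --partial \
              --link-dest="$BASE/latest" \
              -e ssh "$SRC" "$HOST:$BASE/$TODAY/"
        
        # Point "latest" at the snapshot that just finished.
        ssh "$HOST" "ln -sfn '$BASE/$TODAY' '$BASE/latest'"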

    Read the article

  • Auto-restart mysql when it dies

    - by Los Frijoles
    I have a Rackspace server that I have been renting to run my personal projects on. Since I am cheap, it has 256MB of RAM and honestly can't handle a lot. Every once in a while, when there is a sharp uptick in traffic, the server decides to start killing processes, and it seems that mysqld is a popular one for it to kill. I try to visit my site and am greeted with the message that there was an error establishing the database connection. Inspection of the logs reveals that mysqld was killed due to lack of memory. Since I am still as poor as I was yesterday and don't want to upgrade my Rackspace VM's RAM, is there a way I can tell it to automagically restart mysqld when it dies? I have a thought to use something like crontab, but alas, I don't know exactly what to do there either. I guess I am a product of the "Linux on your desktop" generation, since I can do most things on my desktop and laptop (which run Linux almost exclusively), but still lack a lot of server administration skills for Linux. The server runs CentOS 6.3.
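    A crontab watchdog is indeed one of the simplest approaches. A minimal sketch, assuming the init script on CentOS 6 is named mysqld (worth confirming with `ls /etc/init.d/`) and using full paths because cron's PATH is minimal:
        # Added via `crontab -e` as root: every 5 minutes, restart mysqld if it is not running.
        */5 * * * * /sbin/pidof mysqld >/dev/null || /sbin/service mysqld restart >> /var/log/mysql-watchdog.log 2>&1
    This only treats the symptom; adding a small swap file and trimming MySQL's buffer sizes would go after the out-of-memory kills themselves.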

    Read the article

  • Where can someone store >100GB of pictures online? [closed]

    - by sbi
    A person who is not very computer-savvy needs to store 130GB of photos. The key parameters are:
    - a non-negligible probability that the company selling the storage will still exist, and the data remain accessible, for at least five years
    - data should be considered safe once uploaded
    - reasonable terms of service: Google Drive reserving the right to literally do anything they want with their users' data is not acceptable; the possibility that the CIA might look at those pictures is not considered a threat
    - easy to use from Windows, preferably as a drive
    - no nerve-wracking limitations ("cannot upload 10GB/day" or "files 500MB" etc.) that serve no purpose other than pushing the user to the next-higher price plan
    - some upgrade plan: there's currently 10-30GB of new photos per year, with a tendency to increase, which might bust a 150GB limit next January
    - ability to somehow sort the pictures: currently they are sorted into folders, but something similar (tags) would be just as good, if easy enough to apply
    - of course, the pricing is important (although there's a reason this is the last bullet; reasonable data safety is considered more important)
    Nice to have, but not necessary, features would be:
    - additional features related to photos (thumbnail generation, album sharing etc.)
    - access from the web and from platforms other than Windows (smart phones)
    Let me stress this again: the person in need of this is able to copy pictures from the camera to the computer, can copy files in Explorer, and uses a web email service. That's about it; there's almost no understanding of what happens under the hood.

    Read the article

  • DVD/CD burning .zip: is it more reliable, faster, longer lasting to burn a zip of files rather than the files as a folder?

    - by Rob
    Is it more reliable, faster, or longer-lasting to burn to CD/DVD a zip (or a few large zips) of files rather than the files as a folder? Just wondering whether thousands of small files would be recorded less efficiently compared with one or a few large zips. Also, even after the burning program verifies the disc, I use Beyond Compare to compare the files with those on the disc. They always compare as binary-identical, but I hear the drive stuttering, presumably as the head is shifted only slightly each time to seek the next file, which leads me to think that it's best to make one or more zips and copy those locally to compare. Or is it that burning individual files to the disc is not as readable, which causes the head to stutter? There aren't any problems; my disc burns are reliable. I'm just thinking more of efficiency and longevity. The discs burn and verify fast enough on my 18x DVD burner. I'm using ImgBurn mostly. Also used Nero in the past. I burn whole discs, closed, finalised. Not sure which write mode, but I would think Disc At Once from a temporary cached image made by the burning program would be the most reliable.
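    For what it's worth, a minimal sketch of the single-archive approach, with a checksum manifest burned alongside so later verification reads one large sequential file instead of seeking across thousands of small ones (paths and the disc mount point are placeholders):
        #!/bin/bash
        # Pack the photos into one archive and record a checksum for later verification.
        SRC=~/Pictures/2012        # placeholder folder to archive
        OUT=~/burn-staging
        mkdir -p "$OUT"
        zip -r "$OUT/photos-2012.zip" "$SRC"
        ( cd "$OUT" && sha256sum photos-2012.zip > photos-2012.sha256 )
        # After burning, verify straight off the disc:
        # ( cd /media/dvd && sha256sum -c photos-2012.sha256 )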

    Read the article
