Search Results

Search found 3324 results on 133 pages for 'gb j'.

  • How much data does Windows write on boot?

    - by soandos
    This question was inspired by Bob's comment on my answer here. On boot, Windows writes files to the hard drive (I imagine this must be the case, since it has a way of detecting whether the previous boot was interrupted by a hard power-off, and I am sure it does many other things). But assuming a "smooth" boot, with no errors, no logon scripts, and the like, roughly how much data (a few KB, a few MB, a few GB) gets written to the drive? For simplicity's sake, assume that: hibernation is turned off; the OS is Windows 7; the pagefile is turned off (does this matter right at boot, or only later?). How could one go about measuring this? Are there resources that have this information?
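
    One way to get a ballpark figure, sketched below: Windows accumulates disk performance counters from system start, so reading the raw PhysicalDisk counter shortly after logon approximates what the boot wrote. The class and property names are standard WMI, but treating the raw value as "bytes since boot" is an assumption worth verifying; Process Monitor's boot-logging option (Options > Enable Boot Logging) gives the exact per-file answer if you filter on WriteFile operations.

        # PowerShell sketch: cumulative bytes written since system start
        # (assumes the raw counter resets at boot; raw value is a running
        #  byte count, not a per-second rate)
        Get-WmiObject Win32_PerfRawData_PerfDisk_PhysicalDisk |
          Where-Object { $_.Name -eq '_Total' } |
          Select-Object Name, DiskWriteBytesPersec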

  • Splitting an archive on multiple media

    - by Robert Munteanu
    I'm generating archives which are larger than my current physical media (DVD). I'd like to split those archives: automatically, instead of generating mini-archives by hand, and consistently, so that each archive can be extracted independently of the others. For instance, for a 24 GB tree that would compress into a 10 GB archive, I would get 3 archives, all of them under 4.7 GB and each of them extractable without the other 2. I'm using dirvish, so I'm archiving a filesystem tree. Update: I'm using Linux.
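
    A minimal sketch of the usual pipeline approach, assuming GNU tar and split and hypothetical paths. Note that it satisfies the size requirement but not the independence one: every piece must be present and concatenated to extract, so fully independent slices would require partitioning the tree itself into separate sub-archives.

        # create DVD-sized pieces of a compressed tar stream
        tar -czf - /path/to/tree | split -b 4400m - backup.tar.gz.
        # restore by concatenating every piece back into one stream
        cat backup.tar.gz.* | tar -xzf -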

  • Move /var directories to /mnt on an EC2 instance

    - by Geoff Lanotte
    I am trying to work out a standard configuration for a set of EC2 instances running Ubuntu 12.04. These servers are going to be primarily web servers for a Ruby on Rails application. When you configure a new large instance, you are given a primary drive of 8 GB and ephemeral storage of 400 GB that is mounted at /mnt. It seems logical to me to move some directories that have a potential for growth off to the /mnt directory; I was specifically thinking of /var/www and /var/log. My question is two-fold: Is this a good idea, or are there pitfalls that I cannot see? If it is a good idea, how should I go about configuring it? I do have the ability to configure new instances and take down our old ones. My concern is, over the long term, doing this in such a way that it prevents downtime. I am a developer with some experience in devops, but mounting drives is something I have not faced before, so explicit directions would be greatly appreciated.
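
    A sketch of one common way to do this with bind mounts, assuming the copy happens while the services are stopped; the paths and rsync flags are illustrative, not a tested runbook:

        # copy the existing trees onto the ephemeral volume
        sudo service apache2 stop
        sudo mkdir -p /mnt/var
        sudo rsync -a /var/www/ /mnt/var/www/
        sudo rsync -a /var/log/ /mnt/var/log/
        # then bind-mount them over the originals via /etc/fstab:
        #   /mnt/var/www  /var/www  none  bind  0 0
        #   /mnt/var/log  /var/log  none  bind  0 0
        sudo mount -a
        sudo service apache2 start

    One pitfall worth flagging: ephemeral storage is wiped when the instance is stopped (a plain reboot preserves it), so anything under /mnt, logs included, must be reproducible or shipped elsewhere.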

  • I recently converted my external HDD from FAT32 to NTFS; now my PC doesn't find or pick up my NTFS HDD

    - by Jason Haniball
    I recently converted my external hard drive from FAT32 to NTFS using the Command Prompt. Everything was working fine; I copied a 7 GB file to it and everything worked. But the next day, when I switched on my PC, I couldn't (and still can't) find my external 1.5 TB hard drive in My Computer. I have about 500 to 800 GB of data on it that I really don't want to lose. It's an Iomega Seagate FreeAgent HDD. It has no power switch; it switches on automatically. Don't know if that helps.
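
    If the disk appears in Disk Management but not in Explorer, it often just lost its drive letter during the conversion; a hedged first step from an elevated command prompt (the volume number and letter below are illustrative):

        diskpart
        rem is the 1.5 TB disk detected at all?
        list disk
        rem find the NTFS volume that has no drive letter
        list volume
        select volume 3
        assign letter=E
        exit
        rem then check the filesystem the conversion left behind
        chkdsk E: /f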

  • Can a processor upgrade cause BSOD?

    - by Daniel
    I had the following hardware setup: Phenom II X4 945; Asus M4A97; 4 GB OCZ DDR2; Radeon HD5850; OCZ Agility 2 120 GB; Windows 7 x64 Pro, fully updated (latest drivers and Windows Update patches). Then I bought a used Phenom II X6 1090T and installed it without formatting. Since then my computer has started BSODing almost every time I'm playing any game, and with different error messages, like: "page fault in non-paged area"; "bad pool header"; "the video memory manager found a problem"; an error in dxgmm1.sys (or something like that). And when it doesn't BSOD, the game simply crashes. I have tried: updating the BIOS; resetting the BIOS to defaults; reinstalling the video drivers; installing the latest DirectX. All that's left is to do a full format, and I don't want to do that, since it's going to be a lot of work to fine-tune Windows to my preferences again. So is my "new" processor defective, or do I really need to format the computer? Update: I use a (properly installed) custom cooler from Cooler Master, and both the BIOS and Open Hardware Monitor (an application) attest that the CPU is not overheating, so I guess the CPU is defective, and since I bought it from a guy over the Internet I'm probably screwed.
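
    Before formatting, it may be worth reading the crash dumps themselves; varied stop codes like these often point at the memory subsystem rather than at one driver, so a memtest86+ pass would also be informative. A sketch using WinDbg from the Debugging Tools for Windows (the dump filename is hypothetical):

        rem open the most recent minidump
        windbg -z C:\Windows\Minidump\040112-12345-01.dmp

    Then run !analyze -v in the debugger's command window; the "probable cause" line usually names the faulting driver or points at hardware.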

  • Slow file share on Server 2003

    - by G V
    I am running an '03 box with shares active. When uploading to the share, the speed is average, about 15-20 Mbps, but when you think about it, that is bad, because it is a direct connection to a couple of machines. When uploading to another server, the connection speed is twice that of the direct storage. When uploading a massive folder, 250 GB, the upload starts as normal, but as it progresses it drops in speed; now it is sitting at around 2-7 Mbps. Any ideas on how I can boost the transfer rate? On a side note, the download speed is great, the sort of speed you would expect from this setup; the main problem is uploading, and whatever is causing the extreme slowness there. Any help would be great.
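
    One hedged, era-appropriate thing to try on Server 2003 SP2 is disabling TCP Chimney offload, a frequent culprit for asymmetric SMB throughput on that OS (the Scalable Networking Pack, KB912222). This is an assumption about the cause, not a diagnosis; revert it if nothing changes:

        netsh int ip set chimney DISABLED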

  • How to find all photos taken in April - any April?

    - by Mawg
    I have 100+ GB of photos going back 25 years. They are arranged in a directory tree by category, with nested sub-directories. How can I search for all photos taken in a given month, say April, in any of those directories? I don't think a Windows search will work, as that would probably use the file creation date, which could be a month or two later, when I finally move the files from the SD card to the PC. Perhaps searching the EXIF data? Is there a free program which can do that?
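
    ExifTool (free, cross-platform) can do exactly this from the command line; a sketch, assuming the camera filled in DateTimeOriginal, which is stored as "YYYY:MM:DD HH:MM:SS", so the month is the second colon-separated field. The path is illustrative and the quoting shown is for a Unix-style shell:

        # recursively list every photo whose EXIF capture month is April
        exiftool -r -q -if '$DateTimeOriginal =~ /^\d{4}:04:/' -p '$Directory/$FileName' C:/Photos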

  • WHM Backup recommended?

    - by user77284
    I have a VPS (CentOS) with WHM, about 25 GB, with about 20 accounts on it. I am looking to back it up effectively. My thoughts: back it up locally with WHM Backup, then use rsync to mirror it to another server. My questions: Is WHM Backup a good solution? How can I keep several backups while using a minimal amount of space? Is there a different solution I should consider? I am not an expert, so I want something simple that works with minimal maintenance. Thanks.
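
    On keeping several generations in minimal space: rsync's --link-dest makes each new snapshot hard-link to the unchanged files of the previous one, so only changed files consume disk. A sketch with hypothetical paths and a simple two-generation rotation:

        # rotate: yesterday's snapshot becomes the hard-link reference
        mv /backup/daily.0 /backup/daily.1
        rsync -a --delete --link-dest=/backup/daily.1 /home/whmbackup/ /backup/daily.0/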

  • How to extend a partition in Windows 2000 Server

    - by user999684
    I have a Windows 2000 Server set up with RAID 5. I initially defined two 136 GB logical disks, 0 and 1. I have a small utility partition on disk 0, along with the C drive. I wish to extend the C drive to use disk 1 as well, which is now configured as drive D. I deleted drive D, but the space is still on disk 1. I downloaded diskpart.exe from MS, but I am not sure how to accomplish what I want. I know I need to use extend, but I think I need to remove disk 1 and somehow add the unallocated space to disk 0, and I am not at all confident about how to do it.
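
    One caution worth stating up front: because disks 0 and 1 are separate logical disks as far as Windows is concerned, a basic volume on disk 0 can never absorb unallocated space on disk 1. The built-in options are re-creating a single large logical disk in the RAID controller (destructive) or converting both to dynamic disks and spanning C: onto disk 1. For completeness, a sketch of diskpart's extend, which only grows a volume into unallocated space immediately following it on the same disk, and which on Windows 2000 cannot extend the running system volume:

        diskpart
        list volume
        select volume 1
        extend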

  • TrueCrypt - "Warning! Password locked: Fixed disk0" error message on boot

    - by Tibi
    TrueCrypt - "Warning! Password locked: Fixed disk0" error message on boot. When i start my laptop (Acer TravelMate 2410). after the starting memory check, the screen goes full black, and a message appears for about 3 seconds: Warning! Password locked: Fixed disk0 and after that, disappears, and next message comes out: Operating System Not Found and all stops here. Windows Xp was installed on it, before this came. TrueCrypt cd (witch was made during the process of full encryption) is not working, not in restoring MBR, no even in decrypting my drive - completely useless. Note: I detected some short of boot sector errors (i dont know the amount) on my drive before this happened. Please, i would greatfully thank every comment, or suggestion, because my computer is unusable now. The HDD is a Samsung HDD, 160Gb. Other preferences: Acer TravelMate 2410 Notebook, 2 Gb RAM, 1500 Mhz Intel Celeron M processor. Regards

  • Is it a good idea to take onsite/offsite backups of server images?

    - by ServerAdminGuy45
    Assuming a non-virtualized environment, is it a good idea to take actual images of servers (using something like Acronis True Image) and store them on/off site? Backing up data is great, but I feel it would be good to have copies of OS images, so that in the event hardware dies or an upgrade gets botched I can always revert. What would be your recommended way to do this (preferably using a NAS and an online backup service)? I was talking with the Iron Mountain folks, and the service they described is geared more toward taking incremental snapshots of data. I'm not sure if there's a way to back up images incrementally, such that only the changes between them are saved (that way I'm not wasting X GB each time I take an image).

  • How to move a partition to the end in gparted?

    - by matnagel
    I can't find a way to move the partition /dev/sdb2 to the end, where 12 GB are free: http://dl.dropbox.com/u/3358699/permanent/gparted-sdb.png. I can resize (expand) the partition, but not create (insert) any free space in front of it. How do I do the trick? (There are 2 small black arrows at the top of the popup window in the screenshot, on either side of the blue box that represents the 400 GB sdb2. I can only move the right arrow to the right, which extends the size, but I cannot move the left arrow. When I enter something in the "free space preceding" box, it is immediately reset to zero by the program.) I hope I explained this well enough; please feel free to ask for details. This is serious for me, as I am expanding a live image. Maybe there is another solution with Linux command-line tools?

  • Best (physical) DRM-free MP3 players [closed]

    - by alex
    I'm looking to purchase an MP3 player soon. It should: be compatible with Windows Media Player; hold at least 40 GB; be completely DRM-free; be reliable and well built (I don't want to repeat my iRiver experience); and be small enough to be comfortably carried in my pocket. I don't care about looks; this can be the ugliest beast ever. Knowing this, what should I buy? [I figured this is almost on topic for Super User; if not: vote to close it.]

  • Most secure way to access my home Linux server while I am on the road? Specialized solution wanted

    - by Ace Paus
    I think many people may be in my situation. I travel on business with a laptop, and I need secure access to files from the office (which in my case is my home). The short version of my question: How can I make SSH/SFTP really secure when only one person needs to connect to the server, from one laptop? In this situation, what special steps would make it almost impossible for anyone else to get online access to the server? A lot more details: I use Ubuntu Linux on both my laptop (KDE) and my home/office server. Connectivity is not a problem; I can tether to my phone's connection if needed. I need access to a large number of files (around 300 GB). I don't need all of them at once, but I don't know in advance which files I might need. These files contain confidential client info and personal info such as credit card numbers, so they must be secure. Given this, I don't want to store all these files on Dropbox or Amazon AWS, or similar. I couldn't justify that cost anyway (Dropbox doesn't even publish prices for plans above 100 GB, and security is a concern). However, I am willing to spend some money on a proper solution. A VPN service, for example, might be part of the solution? Or other commercial services? I've heard about PogoPlug, but I don't know whether it, or a similar service, might address my security concerns. I could copy all my files to my laptop, because it has the space. But then I would have to sync between my home computer and my laptop, and I have found in the past that I'm not very good about doing this. And if my laptop were lost or stolen, my data would be on it. The laptop drive is an SSD, and encryption solutions for SSD drives are not good. Therefore, it seems best to keep all my data on my Linux file server (which is safe at home). Is that a reasonable conclusion, or is anything connected to the Internet such a risk that I should just copy the data to the laptop (and maybe replace the SSD with an HDD, which reduces battery life and performance)? I view the risk of losing the laptop as higher. I am not an obvious hacking target online. My home broadband is cable Internet, and it seems very reliable. So I want to know the best (reasonable) way to securely access my data (from my laptop) while on the road. I only need to access it from this one computer, although I may connect via my phone's 3G/4G, via WiFi, or over some client's broadband, etc., so I won't know in advance which IP address I'll have. I am leaning toward a solution based on SSH and SFTP (or similar). SSH/SFTP would provide about all the functionality I anticipate needing. I would like to use SFTP and Dolphin to browse and download files; I'll use SSH and the terminal for anything else. My Linux file server is set up with OpenSSH. I think I have SSH relatively secured, and I'm using DenyHosts too. But I want to go several steps further. I want to get the chances that anyone else can get into my server as close to zero as possible while still allowing me access from the road. I'm not a sysadmin or programmer or real "superuser"; I have to spend most of my time doing other things. I've heard about "port knocking", but I have never used it and I don't know how to implement it (although I'm willing to learn). I have already read a number of articles with titles such as: "Top 20 OpenSSH Server Best Security Practices"; "20 Linux Server Hardening Security Tips"; "Debian Linux: Stop SSH User Hacking / Cracking Attacks with DenyHosts Software"; and more. I have not implemented every single thing I've read about; I probably can't do that.
    But maybe there is something even better I can do in my situation, because I only need access from a single laptop. I'm just one user, and my server does not need to be accessible to the general public. Given all these facts, I'm hoping I can get some suggestions here that are within my capability to implement, and that leverage these facts to create considerably better security than the general-purpose suggestions in the articles above.
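
    Given a single known client, the biggest single win is disabling passwords entirely and allowing exactly one key and one account; a sketch of the relevant /etc/ssh/sshd_config lines (the user name and port are illustrative):

        # /etc/ssh/sshd_config (excerpt)
        Port 2222
        Protocol 2
        PermitRootLogin no
        # key-only logins: password brute-forcing becomes moot
        PasswordAuthentication no
        # only this one account may log in at all
        AllowUsers ace
        MaxAuthTries 3

    Generate the key pair on the laptop with ssh-keygen -t rsa -b 4096, copy the public half over with ssh-copy-id, keep the private key passphrase-protected, then restart sshd and verify that key login works before closing your current session. The non-standard port only cuts log noise; it is not a security boundary.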

  • Enlarge partition on SD card

    - by chenwj
    I have followed "Cloning an SD card onto a larger SD card" to clone a 2 GB SD card to a 32 GB SD card; the filesystem is ext4. However, on the 32 GB SD card I can only see 2 GB of space available. Is there a way to max it out? Here is the output of fdisk:

        Command (m for help): p

        Disk /dev/sdb: 32.0 GB, 32026656768 bytes
        64 heads, 32 sectors/track, 30543 cylinders, total 62552064 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000e015a

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *          32      147455       73712    c  W95 FAT32 (LBA)
        /dev/sdb2          147456     3994623     1923584   83  Linux

    I want to make /dev/sdb2 use up the remaining space. I tried resize2fs /dev/sdb after the dd, but got the message below:

        $ sudo resize2fs /dev/sdb
        resize2fs 1.42 (29-Nov-2011)
        resize2fs: Bad magic number in super-block while trying to open /dev/sdb
        Couldn't find valid filesystem superblock.

    Any idea what I am doing wrong? Thanks.
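
    Two steps are missing here, sketched below: first grow the partition itself (delete and recreate /dev/sdb2 with the same starting sector, 147456, but a larger end), then run resize2fs against the partition, not the whole disk:

        sudo fdisk /dev/sdb      # d, 2  (delete partition 2)
                                 # n, p, 2, first sector 147456, last sector = default (end of disk)
                                 # w  (write the table and exit)
        sudo e2fsck -f /dev/sdb2 # required before an offline resize
        sudo resize2fs /dev/sdb2 # grows ext4 to fill the enlarged partition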

  • How to increase the volume gain when viewing online flash video?

    - by Nick000
    When watching online Flash videos on YouTube, Dailymotion, etc., sometimes the videos are recorded at low volume. The thing is, I have an HP notebook with good enough audio volume, but when I watch these "low volume" videos the sound level is really low, even with the volume at 100%. So I am looking for a way to increase the volume gain (like VLC, where you can increase it to 200%), BUT while watching the video live on YouTube; that is, I don't want to download the video to my PC. Is there software that can do this? Maybe an advanced Flash video player that integrates with the browser? Or some other software to increase the overall volume gain on my laptop? My specs: HP Pavilion notebook; audio: IDT High Definition Audio CODEC (integrated); Vista 64-bit; 4 GB RAM.

  • MySQL process goes over 100% CPU usage

    - by Temnovit
    Hello! I'm experiencing some problems with my LAMP server. Recently everything became very slow, even though the visitor count on my websites didn't change much. When I run the top command, it says the mysql process has taken over 150-200% of CPU. How is that possible? I always thought 100% was the maximum. I'm running Ubuntu 9.04 server edition with 1.5 GB RAM. My my.cnf settings:

        key_buffer             = 64M
        max_allowed_packet     = 16M
        thread_stack           = 192K
        thread_cache_size      = 8
        myisam-recover         = BACKUP
        max_connections        = 200
        table_cache            = 512
        table_definition_cache = 512
        thread_concurrency     = 2
        read_buffer_size       = 1M
        sort_buffer_size       = 4M
        join_buffer_size       = 1M
        query_cache_limit      = 1M   # the maximum size of individual query results
        query_cache_size       = 128M

    Here is the output of MySQLTuner: [screenshot missing] The top command: [screenshot missing] What could be the cause of this problem? Can I make changes to my my.cnf to prevent the server from hanging?
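
    On the over-100% question: top on Linux reports CPU per core by default, so 200% simply means two cores busy; it is not an error. As for what MySQL is chewing on, a hedged first step is enabling the slow query log; the variable names below match the 5.0/5.1-era MySQL shipped with Ubuntu 9.04 and are a sketch, not a tuning verdict:

        # my.cnf excerpt
        log_slow_queries = /var/log/mysql/mysql-slow.log
        long_query_time  = 2
        log-queries-not-using-indexes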

  • How to use an HFS-formatted pen drive in Windows 7?

    - by row-sun
    I recently used Disk Utility on my MacBook Pro to format my 8 GB pen drive so I could install OS X. After that, I formatted the pen drive from Disk Utility as FAT32, so that I would be able to use it in Windows. But in Windows the pen drive does not show up. When I right-click on My Computer, click Manage and then Disk Management, the pen drive is listed there, but it doesn't show up in Explorer and I can't use it. I have tried many things, but I'm still not able to use it in Windows, though I can use it in Mac OS X. Could anyone help? Thanks.
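
    If the data on the stick is expendable, a hedged fix is rebuilding the partition table from Windows itself, since Disk Utility may have left a partition map Windows doesn't mount. This erases everything on the stick, and the disk number below is illustrative; double-check it against list disk before cleaning:

        diskpart
        list disk
        select disk 1
        clean
        create partition primary
        format fs=fat32 quick
        assign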

  • Is it safe to use up all memory on a Linux server, not leaving anything for the cache?

    - by Temnovit
    I have a CentOS server fully dedicated to MySQL 5.5 (with InnoDB tables mostly). The server has 32 GB RAM and SSD disks, and average memory usage looks like this: [graph missing]. So about 25 GB is in use and about 6.5 GB is cached. I am experiencing performance problems with WRITE queries, so I was wondering, is this the optimal cache size? I might increase the InnoDB buffer size, so that the Linux cache becomes smaller, or decrease it, so the cache becomes bigger. What is the optimal used/cached memory balance for a busy MySQL server on Linux?
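
    For a dedicated InnoDB box, the usual starting point is giving the buffer pool the bulk of RAM and taking the kernel page cache out of the data path with O_DIRECT, since InnoDB caches its own pages anyway. The numbers below are a sketch to be tuned against this workload, not a verdict:

        # my.cnf excerpt (MySQL 5.5)
        innodb_buffer_pool_size = 24G
        innodb_log_file_size    = 512M     # larger redo logs smooth out write bursts
        innodb_flush_method     = O_DIRECT # avoid double-caching in the OS page cache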

  • Build Advice for a Home Web/NAS Server with Ubuntu Server 12.04 [closed]

    - by razor7
    I need a personal web server with NAS capabilities: a web server to test some LAMP projects I develop for clients, and NAS functionality to stream media on the local network. I want full control of the box, so I'm planning to build it with some spare parts and Ubuntu Server. The services/software that will run (remember, this is for personal and testing use only): Samba/CIFS; SSH server; Apache 2; MySQL 5; a Mercurial repo; PHP 5.3; Ruby on Rails; ownCloud; Dovecot; Webmin; Postfix; PureFTPd; ClamAV. The hardware: Intel Dual Core E2180, 2.0 GHz; MSI P35 Neo; Kingston 1 GB DDR2-667; MSI Nvidia 7300LE PCIe x16, 256 MB RAM; 2x WD Green 2 TB SATA HDD (RAID-1 via mdadm software RAID); 16 GB USB pendrive (for the server system installation). My idea is to build this system using the pendrive for the Ubuntu Server software and packages, and the RAID-1 for bulk data storage. What do you think? Thanks a lot!
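
    A sketch of the mdadm side of that plan, assuming whole-disk members and that both drives are blank; the device names are illustrative:

        sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
        sudo mkfs.ext4 /dev/md0
        # record the array so it assembles at boot (Ubuntu)
        sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
        sudo update-initramfs -u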

  • MySQL transfer / update (a bit specific)

    - by Jeff
    Before posting I dug through the whole site but didn't find help for my problem, so I hope someone can... Facts: a 30 GB MySQL database on a remote server (about 20,000,000 rows); the data is updated once weekly on the local network (MySQL); I need to transfer/replace the remote database with the locally updated one; the connection is about 2 MB/s (real megabytes, not megabits) up/down. The point is that I can't have downtime on the remote MySQL server. Until now I have tried: Navicat data sync (OK, but takes about 3 days to finish); dbForge (OK, but needs 5 days); a mysqldump transferred to the remote server and executed there (about a day, but a lot of downtime); rsync of the database folder /mysql/lib/MY_DATABASE (4 hours, but after that I always need to run a repair on the remote server, which takes about 2 hours, plus a lot of downtime); a mysqldump piped from the command line straight to the server (still not satisfied, many problems); MySQL replication (slow). I could list more things that I tried... Anyway, what is the best way to refresh the remote MySQL on a weekly basis with zero downtime and no huge server load? If you have any idea, please share.
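
    One pattern that fits the zero-downtime constraint, sketched with hypothetical schema and table names, and assuming InnoDB so that --single-transaction gives a consistent dump: load the weekly dump into a staging schema while the live one keeps serving, then swap with RENAME TABLE, which MySQL performs atomically:

        # local side: consistent compressed dump pushed over the slow link
        mysqldump --single-transaction mydb | gzip | ssh remote 'gunzip > /tmp/weekly.sql'
        # remote side: load into staging while the live schema keeps serving
        mysql -e 'CREATE DATABASE IF NOT EXISTS mydb_staging; CREATE DATABASE IF NOT EXISTS mydb_old'
        mysql mydb_staging < /tmp/weekly.sql
        # atomic swap, repeated (or scripted) for each table
        mysql -e 'RENAME TABLE mydb.users TO mydb_old.users, mydb_staging.users TO mydb.users'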

  • Is there any way to automatically prevent running out of memory?

    - by NoahY
    I am often running out of memory on my VPS Ubuntu server. I wish there were a way to simply restart apache2 when it starts running out of memory, as that seems to solve the problem. Or am I just too lazy to fix the problem? I do have limited memory on the server... Okay, more information: I'm running the apache2 prefork MPM; here are my memory settings (I've been tweaking them...):

        StartServers          3
        MinSpareServers       1
        MaxSpareServers       5
        MaxClients          150
        MaxRequestsPerChild 1000

    The VPS has 1 GB of RAM, running Ubuntu 11.04 32-bit. As for scripts, I have a WordPress network with 5 blogs, an install of AskBot (a Python/Django Stack Exchange clone), and an install of MediaWiki that isn't really used. There is also a homebrewed MP3 script that accesses the getID3 library to display information on lists of podcasts, and it seems to be throwing some PHP errors; not sure if that's the culprit...
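
    If the goal really is "restart Apache before the box starts swapping", a crude cron watchdog is one stopgap; a sketch where the 100 MB threshold and paths are arbitrary, and sizing MaxClients so the worst case fits in 1 GB remains the real fix:

        #!/bin/sh
        # restart apache2 when available memory (free + reclaimable cache)
        # drops below roughly 100 MB (values in /proc/meminfo are in kB)
        avail=$(awk '/^(MemFree|Buffers|Cached):/ {sum += $2} END {print sum}' /proc/meminfo)
        if [ "$avail" -lt 102400 ]; then
            /usr/sbin/service apache2 restart
        fi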

  • Data transfer is extremely slow after partitioning an external USB drive

    - by user125912
    I bought an external USB 3.0 drive with 500 GB capacity. The OS is Windows 7. I use it on a USB 2.0 port, no problem. Initially I used it without making several partitions, and it was fast as hell. Then I had the great idea to make partitions: one for programs, one for data, and one for backup. I used the free EASEUS Partition Master 9.1.1 and ended up with these partitions:

        F: Apps,   primary, NTFS, 100 GB
        H: Data,   logical, NTFS, 250 GB
        B: Backup, logical, NTFS, 150 GB

    THE PROBLEM: When I copy files from C: to F:, I get a transfer rate of about 100 KB/s! When I copy files from C: to H:, I get a transfer rate of about 4 MB/s! That's all much too slow, slower than before. What can I do to speed things up? Thanks in advance!
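
    One plausible culprit, offered as an assumption rather than a diagnosis: the partitioning tool may have left the new partitions misaligned with the drive's physical sectors, which hurts writes badly. A quick read-only check from Windows is whether each StartingOffset is divisible by 4096:

        wmic partition get Index, Name, StartingOffset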

  • How to expand Raid 5 on ICH10 - Gigabyte ex58-ds4?

    - by NeverEatAlone
    I was wondering if there is a relatively simple way to expand my HD space. My setup is 4 x 640 GB drives. The motherboard has 4 ports on one controller and 2 ports on another; however, they can't be joined. I would like to somehow get more storage space in a RAID configuration. One scenario that I can see working is replacing one 640 GB drive with a 2 TB drive, waiting for the RAID to rebuild, and rinsing and repeating. However, I have no idea if I would then be able to see or access the new space. All alternatives/ideas are welcome. Thank you

  • Fusion 3, Windows 7, frequent blue screens

    - by kenny6127
    Is anyone else seeing this problem? A solution would be great (hey, it's Windows, it's gonna blue screen, you just gotta deal with it), but it would also be nice to know whether it's something specific to my configuration, or whether a lot of folks are having the same issue. Here are the details: MacBook Pro 15" unibody, 4 GB, 2.4 GHz, OS X 10.5.8; Fusion 3.0.1; Windows 7 Pro, 32-bit (clean install, not migrated from Boot Camp or anything). Blue screens happen nearly every time the MBP cover is closed for a short period of time (< 3 minutes). If the cover is closed for half an hour or longer, the VM is fine. I'm guessing it might be related to sleep, but that's hard to tell, and the Windows crash dump logs are pretty useless. Thanks!
