Search Results

Search found 77950 results on 3118 pages for 'large file upload'.


  • Is there a way to log commands that a user runs in Windows 7?

    - by camster342
    I manage a large enterprise environment, and while we try to advise users not to, there are inevitably users that need to have local admin access to their machines. The problem is that some of these users like to "fiddle" and sometimes screw up their machines in "wonderful" ways. Is there an easy way to log what a user does on a machine, specifically in the command prompt? Maybe there are 3rd-party tools I could use to log this information? On Linux, which I used in past ages, you could look at a user's bash history file to see what commands they have run. While I realise that specific log could also be altered by the user if they wanted to cover their tracks, that is the sort of log I'm looking for. If there are ways I can also log other system-configuration-type changes they make (not necessarily command-line based), that's also useful. I know about event/system logs and so on, but they don't necessarily catch all the information I need to figure out how the user has buggered their machine this time.
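
    The closest built-in thing I've found so far (untested in our environment, so treat it as a sketch) is process-creation auditing, which records each started executable as event 4688 in the Security log:

        auditpol /set /subcategory:"Process Creation" /success:enable
        wevtutil qe Security /q:"*[System[(EventID=4688)]]" /c:20 /f:text

    As far as I can tell, stock Windows 7 logs only the executable path, not the arguments, so capturing full command lines would still need a later update or a third-party tool.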

    Read the article

  • Server Hosting + AWS

    - by ledy
    Since my dedicated servers are hosted at a "normal" hosting service, I wonder if there is a really cheap way to extend the server farm with AWS instances. E.g. it seems to be an efficient and flexible solution for data storage and resources for occasional data processing, too. However, it might be very inefficient to mix two data centres, transferring data from the current webhoster to Amazon and vice versa. In my case, the traffic for this continuous data exchange seems to be expensive, and the delay in moving the data back to the hoster causes a lag. What are best practices for mixing non-AWS and AWS systems? E.g.: how to move the hoster's data to AWS as log file storage to run Urchin analysis, and/or port the log file data into a BigTable for exhaustive analysis there. After working with the data: how to bring it back to the hoster and use the data with the webservers there? I am not going to move the whole server farm to Amazon, only "separate" parts or tasks, provided the transfer/exchange does not lead to increased cost.
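
    For the log-shipping half, I was imagining something like the following (the tool choice is just my assumption, and the bucket names are made up):

        aws s3 sync /var/log/apache2/ s3://my-log-bucket/webhost1/
        aws s3 sync s3://my-analysis-bucket/reports/ /srv/reports/

    Since sync only transfers changed files, the recurring traffic should stay closer to the daily delta than to the full data set.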

    Read the article

  • ssh Password-less login to multiple machines when you already have one

    - by tandu
    I'm a little bit confused about setting up a password-less login for multiple machines to begin with, but I think I could do it from scratch. The problem is I already have it set up for one machine, and I don't want that to be blown away when I try to set it up for the other machine. Let's clarify:

        Machine A: the machine I'm connecting from
        Machine B: the machine I'm connecting to; password required
        Machine C: the machine I'm connecting to; password-less ssh

    I have read some tutorials on setting up password-less ssh to a certain site, but they usually start with "move id_rsa out of the way so it doesn't get blown away", and then at the end of the tutorial it's never moved back. If I had no help at all, here is what I would do:

        Log into B
        ssh-keygen -t rsa -f ~/id_rsa.other
        scp id_rsa.other.pub A:~/.ssh
        echo "Host A \n IdentityFile ~/.ssh/id_rsa.other" > ~/.ssh/config

    (Note that I realize these commands may not be exactly correct, but this is just the idea.) What I'm not quite clear on is whether I need to update the config for A, B, or both. I'm fairly certain that to do a password-less login from A to B, it is A that needs the public key... but I also suppose I need B to use the correct id_rsa file for that public key. Finally, I don't want the password-less login for C to be affected at all... it's using id_rsa. Am I going wrong anywhere?
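
    For comparison, my understanding of the usual direction (which may be exactly where I'm going wrong; the username is a placeholder) is that the new key pair lives on A, only the public half goes to B, and A's config picks the identity per host:

        # on A
        ssh-keygen -t rsa -f ~/.ssh/id_rsa.other
        ssh-copy-id -i ~/.ssh/id_rsa.other.pub user@B

        # in ~/.ssh/config on A
        Host B
            IdentityFile ~/.ssh/id_rsa.other

    Because the Host B block names its own key file, the existing id_rsa used for C shouldn't be touched.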

    Read the article

  • Packet flooding while configuring a Debian L2TP/IPSec client?

    - by Joseph B.
    I'm currently at my wits' end trying to configure an L2TP-over-IPSec VPN connection on my Debian box, using openswan and xl2tpd, connecting to a server of unknown configuration. I've managed to successfully establish the connection, and everything appears to be working well until I attempt to set the VPN connection as my default route, at which point I see a massive flood of packets being transmitted (to the tune of ~1.5 GB in about 2 minutes) until the server drops my connection. Prior to this, network traffic on all my interfaces is minimal. According to iftop, the majority of this traffic appears to be coming out of port 12, although I can't seem to figure out how to tie it to a specific process. If I instead just route traffic destined for 74.0.0.0/8 through it, I'm able to access Google's servers through the VPN without issue. My xl2tp.conf file is:

        [lac vpn-nl]
        lns = example.vpn.com
        name = myusername
        pppoptfile = /etc/ppp/options.l2tpd.client

    My options.l2tpd.client file is:

        ipcp-accept-local
        ipcp-accept-remote
        refuse-eap
        require-mschap-v2
        noccp
        noauth
        idle 1800
        mtu 1410
        mru 1410
        usepeerdns
        lock
        name myusername
        password mypassword
        connect-delay 5000

    And my routing table looks like:

        Destination   Gateway   Genmask          Flags Metric Ref Use Iface
        10.5.2.1      *         255.255.255.255  UH    0      0   0   ppp0
        10.0.50.0     *         255.255.255.0    U     0      0   0   eth0
        10.50.0.0     *         255.255.0.0      U     0      0   0   eth0
        10.0.0.0      *         255.255.0.0      U     0      0   0   eth0
        192.168.0.0   *         255.255.0.0      U     0      0   0   eth0
        loopback      *         255.0.0.0        U     0      0   0   lo
        default       *         0.0.0.0          U     0      0   0   ppp0

    I'm seeing absolutely nothing in auth.log and syslog during this time, and can't seem to find any other log files it might be writing to. Any suggestions would be appreciated!

    Read the article

  • linux shutdown hang with wifi cifs mounts

    - by Sirex
    Since Fedora 15 (and now with 16) it seems that wireless clients take a long while to shut down when they have network filesystems mounted at shutdown time. I've pushed out a cifs mount via puppet, and all clients have it, including those on wireless. If, say, a laptop is on a wired connection it shuts down just fine, but if it's on the wifi at the time (and no wired connection) it'll hang at the Fedora "f" logo. I'm not sure if it hangs indefinitely or just for a really long while, but I'll give it a test when I shut this machine down in a second. Needless to say it's pretty annoying, so is there a way of making the machine shut down even if network connectivity has been lost at unmount time, or an official way to reorder events so the wireless card is kept up until after the unmount happens during the shutdown process (short of writing a custom script for shutdowns, which is a bit of a kludge)? It does this on multiple machines, and they all started doing it when we went from Fedora 14 to 15. It was such an obvious issue I'd kind of assumed someone must have reported it or there was an easy fix, but I've not discovered anything yet. Additional info: I can confirm that manually unmounting the mounts and then shutting down (sudo shutdown or the XFCE shutdown button) works just fine; it only hangs if the mounts are still mounted. The puppet config that sets up the mount looks like this (now with the _netdev option, which is indeed pushed to clients successfully but makes no difference):

        file { "/mnt/share":
            ensure => directory,
        }
        mount { "/mnt/share":
            atboot   => true,
            ensure   => mounted,
            remounts => false,
            fstype   => cifs,
            device   => "//srv/share",
            options  => "user,gid=shareusers,uid=${user},file_mode=0700,dir_mode=0700,credentials=/root/.smbcreds,_netdev",
            require  => [ File["/mnt/share"], Group["shareusers"] ],
        }

    Read the article

  • BackupExec 12 + RALUS - VERY slow backups

    - by LVDave
    We use Backup Exec 12 and the Remote Agent for Linux/Unix Servers (RALUS) to back up a large RHEL5 system. For various reasons we need to do a daily working-set job. These working-set jobs run abysmally slowly. The link between the target machine and the BE server is gigabit, and any other type of job runs at 1-3 GB/min. These working-set jobs start out at perhaps 40 MB/min, and over the course of the backup job the rate slowly drops so low that the BE job rate display in "current jobs" goes blank. Since we usually only back up changed files for one day, the job is small, finishes overnight, and we don't worry about the slowness; but we had some issues with the backup server and missed about 6 days of fairly heavy work on the Linux box, so this working-set job will be a doozy. We have support with Symantec, and I've pestered them a lot about this; they've had me run RALUS in debug mode, I sent them that log and a VXgather from the BE host, and they had no fix/workaround. To give an idea, the working-set job mentioned has been running for the last 3.5 hours and has backed up just under 10 megabytes. I'm posting this here to see if anybody in the "real world" has seen this and/or has any ideas what might be causing these abysmally slow jobs, since Symantec seems to be clueless.

    Read the article

  • How do I set up Grub properly to quad-boot Windows, Mac OS X, Linux, and FreeBSD?

    - by Joe
    Grub has gone completely insane on me. My quad-boot system was working great up until I upgraded Ubuntu to 12.04. Since Ubuntu overwrote the Grub configuration, I had to repair it and re-add my Mac OS X and FreeBSD entries. After this, trying to boot Mac OS X gave me the error "couldn't open file", and FreeBSD gave the error "no such partition". Windows and Ubuntu worked fine. So I tried repairing again, because I figured something must've gone wrong in the install process. Then only Ubuntu would boot. Trying to boot Windows would give me the error "no argument specified". I tried repairing Grub once again, since I seemed to be getting different results each time. This time, Ubuntu no longer appeared in the Grub menu, and the errors for the other OSes were the same. So I booted into the Ubuntu 12.04 live CD and ran Boot-Repair with recommended settings. Now Grub is completely skipped and Windows boots up. I have absolutely no idea what is going on, or why I get different results every time I reinstall Grub. Here is how my partitions are set up:

        sda1 - storage drive
        sdb1 - Windows
        sdb2 - Mac OS X
        sdb3 - FreeBSD
        sdb4 - extended
        sdb5 - Ubuntu
        sdb6 - shared storage
        sdb7 - shared storage

    Here's my grub.cfg file: grub.cfg
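
    In case it matters, my understanding (untested, and the device numbers below are guesses from my partition list) is that hand-written entries belong in /etc/grub.d/40_custom so that update-grub stops clobbering them, something like:

        menuentry "Windows" {
            insmod ntfs
            set root=(hd1,msdos1)
            chainloader +1
        }
        menuentry "FreeBSD" {
            set root=(hd1,msdos3)
            chainloader +1
        }

    followed by running update-grub, rather than editing grub.cfg directly.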

    Read the article

  • VMWare Server modifying files related to paused VMs, is this expected?

    - by David Spillett
    While refreshing the backup of a VM used for testing, I got the following warning from tar:

        tar: /VMsR0/cli_noddyco_test/VM2K8_32_web.vmem: file changed as we read it

    The VMs in question were paused at the time. My first thought was that I'd mixed up the machines and was trying to back up something that was still actively running. To be sure, I unpaused and properly shut down the VM, and the vmem files that tar reported changing vanished, as I would expect. Is it normal for VMware Server to touch or alter files for paused VMs like this, or is something likely amiss with our setup? If this is expected behaviour, is it just touching the vmem file (and so altering the last modification date without actually changing content)? If it is normal for files relating to paused VMs to be updated, I shall have to revise our backup procedures to make sure the VMs are fully shut down rather than just paused (this isn't a problem, but it seems strange and I'd prefer to understand what VMware is doing and why, instead of just dismissing it as "one of those things" and working around it). For further detail: the host in question is VMware Server version 2.0.2 running on 64-bit Debian/Lenny, and that VM did not have any snapshots at the time. We have backed up paused VMs this way in the past with no such warnings from tar.

    Read the article

  • MS Word TOC that references # pages rather than page number

    - by buttonsrtoys
    We frequently need to write specifications in Word which require a TOC that refers to the total number of pages in a section, rather than the page number. E.g.:

        Section No.                     Pages
        01010 Summary of Work..............5
        01025 Prices.......................2
        01400 Quality Control..............1
        01700 Contract Close Out...........2

    A wrinkle is that each section is a separate file. To date, we've been writing our TOC by hand, which has introduced every error imaginable. Is there an MS Word feature that populates a TOC with page totals? If not, I've done a little VB in Office, so I wouldn't be opposed to that route as need be, as long as the result is usable by our low-tech users. Related question: all the section files are in the same folder. It would be nice if the TOC loaded every file in the folder, rather than having to specify each one. Is this a feature of Word, or would it require VB? We tried a master document with links to subdocuments, but since the number of section files ebbs and flows with each project, that approach required too much maintenance for our Wordophobes.

    Read the article

  • How to write rules for persistent net names?

    - by ndemou
    I know that a process generates persistent network card names based on rules found in /lib/udev/rules.d/75-persistent-net-generator.rules. I also know how to completely disable this process with a simple

        echo '#' > /etc/udev/rules.d/75-persistent-net-generator.rules

    but I've read that I "could also write my own rules file to give the interface a name — the persistent rules generator ignores the interface if a name has already been set" (/etc/udev/rules.d/README confirms that this is possible). Do you have any pointers to documentation about how to write such rules? (I mostly care about Debian/Ubuntu and a bit less about CentOS.) As a specific example of why I want to write custom rules: I have two identical servers, each with one onboard LAN and one PCI LAN. In case of HW failure I want to be able to move disks from HW#1 to HW#2, and it's important for eth0 to keep pointing to the onboard card and eth1 to the PCI card (no one wants to mess with cabling in the middle of a HW-failure panic). My current workaround works but is a lot of work[1], so I wonder if writing custom rules would allow me to express something simple like this:

        cards with MAC A or B should be named eth0
        cards with MAC C or D should be named eth1
        follow the default naming scheme for anything else

    [1] Install the OS on HW#1 and keep a copy of /etc/udev/rules.d/70-persistent-net.rules. Move the disks to HW#2 and keep a second copy of the same file. Concatenate the two copies and manually edit the NAME="ethX" part. Replace /etc/udev/rules.d/70-persistent-net.rules with my version. Finally, disable auto-creation of a new 70-persistent-net.rules using echo '#' > /etc/udev/rules.d/75-persistent-net-generator.rules
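
    To make the question concrete, here is the sort of rules file I have in mind (untested; the MACs are placeholders), saved as /etc/udev/rules.d/70-persistent-net.rules so the generator leaves those interfaces alone:

        # onboard card of HW#1 or HW#2
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:00:00:01", NAME="eth0"
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:00:00:02", NAME="eth0"
        # PCI card of HW#1 or HW#2
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:00:00:03", NAME="eth1"
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:00:00:04", NAME="eth1"

    Cards matching none of the rules would presumably still fall through to the default generator.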

    Read the article

  • REMOTE_USER not getting set?

    - by landed
    I am trying to set up LDAP authentication in Joomla using a plugin called JMapMyLDAP (in fact 4 plugins, each doing a different job). I need to pull a part of a string out of the server variable REMOTE_USER, and this should be visible (see http://timplummer.com.au/4-how-to-integrate-joomla-3-with-active-directory-using-ldap.html) in phpinfo(). The issue is that REMOTE_USER is not set, or at least not appearing. A few things to note (if you don't mind): conceptually, I am not really understanding authentication as a whole subject; it appears to be vast, despite my years working with websites. Yes, I used ASP and built PHP pages to check that a user is who they say they are, with a token (/session?) given to just them, so they are identified when a stateless request is made to the server. That's my level of understanding. This sounds different from basic authentication in Apache, where a username and password sit in a file and the user needs to log in via a basic form to get access to the folder/docs, via an .htaccess file. OK, so for the LDAP to work I need to get REMOTE_USER; this sounds very reasonable, as how else do we know who is making the request? Thank you.
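
    If it helps, this is the sort of Apache config I understand is needed for Apache itself to do the authenticating (the server name and LDAP URL are placeholders, not my real setup), since REMOTE_USER is only populated when Apache authenticates the request:

        <Location /administrator>
            AuthType Basic
            AuthName "AD login"
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://dc.example.com/DC=example,DC=com?sAMAccountName"
            Require valid-user
        </Location>

    I've also read that when PHP runs as CGI/FastCGI rather than mod_php, the variable may appear as REDIRECT_REMOTE_USER instead of REMOTE_USER.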

    Read the article

  • Windows 7 x64 support for Intel GMA 3650 (or GMA 3600)

    - by Loom
    I recently purchased an Intel D2700MUD motherboard, and I cannot find Win7 x64 drivers for the integrated graphics (Intel GMA 3650, aka PowerVR SGX545). The accompanying CD contains the Win7 x86 version only. When I run it I get an error:

        This computer does not meet the minimum requirements for installing the software.

    I tried to use the online Intel Driver Update Utility for graphics. I used Chrome, Firefox, and Internet Explorer without success. First a UAC prompt appears, and then an endlessly spinning progress bar with the text "Analyzing computer...". The text in the UAC prompt is:

        Program file name: System Requirements Lab
        Verified publisher: Husdawg, LLC

    I downloaded this utility (intel_srldetect_4.5.5.0) and started it from my hard disk. I got an error:

        A network error occurred while attempting to read from the file: C:\Users\Loom\Downloads\SystemRequirementsLab_intel_4.5.5.0.msi

    The standard VGA driver works for this video card, but without hardware acceleration:

        Hardware acceleration is either disabled or not supported by your video card driver, which could slow game performance. Make sure you have the latest video card driver installed and that hardware acceleration is turned on.

    Where can I get an appropriate driver?

    Read the article

  • Setup shared internet connection on virtualbox with fixed IP

    - by Tom
    I am a web developer, and until recently I was using Ubuntu as my OS. For many reasons, I have switched back to Windows. I still want to keep my server on a Linux platform, so I set up my local server as a virtual machine. Everything works great, but I've been struggling a little with the networking. Since I work in different places and travel around to clients, I connect to all sorts of networks with different settings. That means the possible IP range is very dynamic, which causes issues when I work on my local server. At the moment I have a dynamic IP on my host and a static IP on my guest. That way I can access the server from my host (by adding a record to the hosts file). I also have an internet connection on the guest. But once I change networks, it stops working (assuming the network has a different configuration). My question is: how do I set up host-guest networking so that, no matter what network I connect to, I can keep my static IP on the guest, which is registered in the hosts file on my host, so I can access the webserver and also have an internet connection on the guest? Hope it makes sense. Thank you.
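
    The closest I've come to a plan (untested; the VM name is a placeholder) is two adapters: NAT for internet access on whatever network I'm on, plus a host-only adapter carrying the fixed IP that my hosts file points at:

        VBoxManage hostonlyif create
        VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1
        VBoxManage modifyvm "devserver" --nic1 nat --nic2 hostonly --hostonlyadapter2 vboxnet0

    The guest would then give the second interface a static address such as 192.168.56.10; since the host-only network never touches the client networks, it should survive location changes. (On a Windows host the interface shows up as "VirtualBox Host-Only Ethernet Adapter" rather than vboxnet0.)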

    Read the article

  • Computer slow after installing 32GB RAM

    - by John Gilmore
    I'm currently running very large network simulations for my PhD research, for which I need lots of RAM. I have a Core i7 2600K processor with a Gigabyte GA-Z68AP-D3 motherboard, running Windows 7 Professional 64-bit. I bought the system with 8GB (2x4GB) of DDR3 1600 MHz Corsair Vengeance RAM, and the system ran like a dream. I'm planning to scale up my simulations, so I removed the 2x4GB RAM and installed 4x8GB of DDR3 1600 MHz Corsair Vengeance RAM. When I rebooted the system, boot time was much longer than usual (10 minutes just to get to the login screen). After logging in, the whole system was unresponsive. I tried playing some games (Bioshock 2), but it was unplayable. I've not had this problem before, and I have an ATI Radeon HD 5850 graphics card, so that's not the problem. The only thing that's changed is the RAM. I've looked through the specifications of Windows, my motherboard, and my CPU, and they all state that 32GB of RAM is supported. Does anyone have an idea of what's going on? Any help would be greatly appreciated.

    Read the article

  • mod_rewrite redirect subdomain to folder

    - by kitensei
    I have a wordpress blog at the url http://www.orpheecole.com. I would like to set up 3 subdomains (cycle1, cycle2, cycle3), each redirected to its own folder (1 subdomain = 1 WP blog, no multisite enabled). The file tree looks like this:

        /var/www/orpheecole.com/
        /var/www/cycle1.orpheecole.com/
        /var/www/cycle2.orpheecole.com/
        /var/www/cycle3.orpheecole.com/

    The following .htaccess tries to redirect to /var/www/orpheecole.com/cycleX instead of its own directory, but if it's possible I'd rather redirect every subdomain to its own www folder. My sites-enabled file for the main site is:

        # blog orpheecole
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName orpheecole.com
            ServerAlias *.orpheecole.com
            DocumentRoot /var/www/orpheecole.com/
            <Directory /var/www/orpheecole.com/>
                Options -Indexes FollowSymLinks MultiViews
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog /var/log/apache2/orpheecole.com-error_log
            TransferLog /var/log/apache2/orpheecole.com-access_log
        </VirtualHost>

    and the .htaccess located in /var/www/orpheecole.com/ looks like this:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteCond %{HTTP_HOST} !^www.* [NC]
            RewriteCond %{HTTP_HOST} ^([^\.]+)\.orpheecole\.com$
            RewriteCond /var/www/orpheecole.com/%1 -d
            RewriteRule ^(.*) www\.orpheecole\.com/%1/$1 [L]
            # BEGIN WordPress
            RewriteBase /
            RewriteRule ^index\.php$ - [L]
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule . /index.php [L]
            # END WordPress
        </IfModule>

    I tried removing the WordPress directives, but nothing changed, and the rewrite module is enabled and working.
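
    Since each subdomain already has its own directory and its own WP install, I suppose the alternative (if separate vhosts are acceptable) is to skip mod_rewrite entirely and give each cycle its own VirtualHost, something like:

        <VirtualHost *:80>
            ServerName cycle1.orpheecole.com
            DocumentRoot /var/www/cycle1.orpheecole.com/
        </VirtualHost>

    repeated for cycle2 and cycle3, with the ServerAlias *.orpheecole.com removed from the main vhost so it no longer swallows the subdomains.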

    Read the article

  • debian lenny email server

    - by Dal
    Hi, I am a newbie. I set up a Debian Lenny box at home and installed the web and email server from the default installation. I followed the instructions for Exim, ran dpkg-reconfigure exim4-config, and set it up for mydomainhere.com. I created a one-line message file and attempted to test Exim by running the command exim [email protected] < msgfile. I also tried using exim4 and Exim, but I get the same error: -bash: Exim: command not found. Obviously I am ignorant of how to run and test exim. I also tried to run a PHP file that sends a test mail, with no success. That script is tested and works fine if I send it from my hosting ISP on a different domain, so I know the PHP script is good. I set up the Debian system behind a Netgear firewall, using 192.168.1.x IPs. The web server works great and users can visit my site, but I lack the knowledge to get the email working. I'd appreciate it if someone could guide me.
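
    From what I've read since, the binary on Debian lives at /usr/sbin/exim4, which isn't on a normal user's PATH; a test along these lines (the address is a placeholder) may be what I should have run:

        echo "test body" | /usr/sbin/exim4 -v user@example.com

    with -v showing the SMTP dialogue, and /var/log/exim4/mainlog recording the delivery attempt.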

    Read the article

  • How to back up initial state of external backup drive?

    - by intuited
    I've picked up an HP SimpleSave external drive. It comes with some fancy software that is of no use to me because I don't use Windows. Like many current consumer-targeted backup drives, the backup software is actually contained on the drive itself. I'd like to save the drive's initial state so that I can restore it if I decide to sell it. The backup box itself is somewhat customized: in addition to the hard drive device, it presents a CDROM-like device on /dev/sr0. I gather that the purpose of this cdrom device is to bootstrap, via Windows autoplay, the backup application which lives on the disk itself. I wouldn't presume any guarantees about how it does this, so it seems important to preserve the exact state of the disk. The drive is formatted with a single 500GB NTFS partition. My initial thought was to use dd to dump the disk (/dev/sdb) itself, but this proved impractical, as the resulting file was not sparse. This seemed to be because the NTFS empty space is not filled with zeroes, but with a repeating series of 16 bytes. I tried gzipping the output of dd. This reduced the file to a manageable size (the first 18GB compressed to 81MB, versus 47MB to tarball the contents of the mounted filesystem), but it was very slow on my admittedly somewhat derelict Pentium M processor: about 30 minutes for that first 18GB. So I've resorted to dumping the disk state and partition data separately. I've dumped the partition table with:

        sfdisk -d /dev/sdb > sfdisk.-d.out

    I've also created a compressed image of the NTFS partition (the only one on the disk) with:

        ntfsclone --save-image --output - /dev/sdb1 | gzip -c > ntfsclone.img.gz

    Is there anything else I should do to ensure that I can restore the precise original state of the drive?
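
    For my own notes, the restore I'm planning to rely on (untested so far), plus the boot sector, which neither dump above covers:

        # boot code + partition table, first 512 bytes
        dd if=/dev/sdb of=mbr.bin bs=512 count=1

        # restore: partition table first, then the NTFS image from stdin
        sfdisk /dev/sdb < sfdisk.-d.out
        gunzip -c ntfsclone.img.gz | ntfsclone --restore-image --overwrite /dev/sdb1 -

    Whether the emulated CDROM on /dev/sr0 lives inside the NTFS partition or in separate flash isn't obvious from outside, so it may be worth imaging it too (dd if=/dev/sr0 of=sr0.iso).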

    Read the article

  • Would a PHP application benefit from being served from a RAM drive?

    - by Tom Marthenal
    I am in charge of hosting a PHP application that is large and slow, but easy to scale. The application is entirely static, with no writable disk storage needed. We've profiled the application, and the main bottleneck appears to be loading the application, not the work the application does. The application is not CPU-intensive, although it does use a fair amount of memory (think Magento). Currently we distribute it by having a series of servers with the same PHP files on their hard drives and a load balancer in front of them. Easy, but expensive. I've been reading about RAM disks and the IO benefits they offer, and was wondering if they would be well suited to PHP applications. Since PHP applications are loaded from disk for every request and often involve lots of different files (as opposed to being kept in memory like a Java application), I would figure that disk performance can be a severe bottleneck. Would placing the PHP files on a RAM disk and using the mount point as Apache's document root offer performance benefits? A startup script could create the RAM drive and then copy the files (which are plain-text and small) from a permanent location to the temporary RAM drive. Does this make sense, or should I just trust the Linux kernel to cache the appropriate files in memory by itself?
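
    In case it clarifies the idea, the startup script I have in mind is roughly this (paths made up), using tmpfs so the "RAM drive" only consumes memory as files are added:

        mount -t tmpfs -o size=512m tmpfs /var/www/app
        cp -a /srv/app-release/. /var/www/app/

    with Apache's DocumentRoot pointed at /var/www/app. Though I suspect the kernel page cache plus an opcode cache such as APC might achieve most of this without the RAM disk, since APC skips re-parsing as well as re-reading.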

    Read the article

  • Cannot make bind9 forward DNS query to subdomain unless recursive enabled

    - by PP.
    I am trying to develop my own dynamic DNS. I'm running my own custom DNS for the subdomain on port 5353. ASCII diagram:

        INET --->:53 Bind 9 --->:5353 node.js
                       |
                       V
                  zone_files

    I have example.com. The node.js DNS is for dyn.example.com. In my /etc/bind/named.conf.local I have:

        zone "example.com" {
            type master;
            file "/etc/bind/db.com.example";
            allow-transfer { zonetxfrsafe; };
        };

        zone "dyn.example.com" IN { # DYNAMIC
            type forward;
            forwarders { 127.0.0.1 port 5353; };
            forward only;
        };

    I've even gone so far as to add an NS record in my example.com zone file:

        $TTL 86400
        @   IN  SOA ns.example.com. hostmaster.example.com. (
                2013070104 ; Serial
                7200       ; Refresh
                1200       ; Retry
                2419200    ; Expire
                86400 )    ; Negative Cache TTL
        ;
            NS  ns
        ; inet of our nameserver
        ns  A   1.2.3.4
        ; NS record for subdomain
        dyn NS  ns

    When I attempt to get a record from the subdomain server, it doesn't get forwarded:

        dig @127.0.0.1 test.dyn.example.com

    However, if I turn recursion on in /etc/bind/named.conf.options:

        options {
            recursion yes;
        };

    then I CAN see the request going to the subdomain server. But I don't want recursion yes; in my Bind configuration, as it is poor security practice (and allows all-and-sundry requests that are not related to my managed zones). How does one forward (proxy) zone queries for just one zone? Or do I give up on Bind altogether and find a DNS server that can actually forward specific queries?
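
    The closest compromise I've found documented (untested; the network below is a placeholder) is that forwarding rides on recursion, so recursion can stay on but be restricted to known clients:

        acl trusted { 127.0.0.1; 192.0.2.0/24; };
        options {
            recursion yes;
            allow-recursion { trusted; };
        };

    Outside hosts would still get authoritative answers for example.com; only the trusted ACL could trigger the forwarding path to the node.js server on port 5353.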

    Read the article

  • Install Debian stable linux ISO from USB to dual boot Windows

    - by tgkprog
    I want Debian as dual boot with my Windows Vista. I freed up 50GB on my D drive and plan to use 40GB for the Debian install and 6GB for swap space. I have a 16GB USB drive. I downloaded http://unetbootin.sourceforge.net/ and the DVD files of stable Debian: debian-7.0.0-amd64-DVD-1.iso, debian-7.0.0-amd64-DVD-2.iso, and 3. After I choose HD install, unetbootin says to place the ISO in the same place, but I have 3. Do I need to merge them? If so, is there any freeware to do that? Can I do it with 7-Zip? When I extract them with 7-Zip, there are clashes between the 3 ISO files. Just overwrite? Options to merge (format etc. for 7-Zip)? Or must I use something else? I tried to keep the 3 files with the other unetbootin files but get an error message. Files I have on my USB:

        06/30/2013 11:44 PM     2,835,648 ubnkern
        06/05/2013 12:14 AM 3,998,007,296 debian-7.0.0-amd64-DVD-1.iso
        06/04/2013 03:30 PM 4,696,872,960 debian-7.0.0-amd64-DVD-2.iso
        06/05/2013 01:25 AM 4,698,955,776 debian-7.0.0-amd64-DVD-3.iso
        06/30/2013 11:45 PM     6,530,278 ubninit
        06/30/2013 11:46 PM           155 syslinux.cfg
        06/30/2013 11:46 PM        60,928 menu.c32

    Also, I can only copy the above files if I format my USB as NTFS; on FAT32 it says the .iso is too large to copy. How do I get around that? My internet needs a login, so I cannot do a net install.

    Read the article

  • mysql - moving to a lower performance server, how small can I go?

    - by pedalpete
    I've been running a site for a few years now which really isn't growing in traffic, and I want to save some money on hosting, but keep it going for the loyal users of the site and API. The database has one nearly 4-million-row table, running on a 4GB dual Xeon 5320 server. When I check server stats on this server with ps -aux, I see mysql running at about 11% capacity, so no serious load. The main query against mysql runs in about 0.45 seconds. I popped over to linode.com to see what kind of performance I could get out of one of their tiny boxes, and their 360MB-RAM Xen VPS returns the same query in 20 seconds. Clearly not good enough. I've looked at the mysql variables, and they are both very similar (I've included the show variables output below, if anybody is interested). Is there a good way to decide what size server is needed based on what I'm coming from? Is it RAM that is likely making the difference with the large table size? Is there a way for me to figure out how much RAM would be ideal? Here's the output of show variables (though I'm not sure it is important):

        +---------------------------------+--------------------------------+
        | Variable_name                   | Value                          |
        +---------------------------------+--------------------------------+
        | auto_increment_increment        | 1                              |
        | auto_increment_offset           | 1                              |
        | automatic_sp_privileges         | ON                             |
        | back_log                        | 50                             |
        | basedir                         | /usr/                          |
        | bdb_cache_size                  | 8384512                        |
        | bdb_home                        | /var/lib/mysql/                |
        | bdb_log_buffer_size             | 262144                         |
        | bdb_logdir                      |                                |
        | bdb_max_lock                    | 10000                          |
        | bdb_shared_data                 | OFF                            |
        | bdb_tmpdir                      | /tmp/                          |
        | binlog_cache_size               | 32768                          |
        | bulk_insert_buffer_size         | 8388608                        |
        | character_set_client            | latin1                         |
        | character_set_connection        | latin1                         |
        | character_set_database          | latin1                         |
        | character_set_filesystem        | binary                         |
        | character_set_results           | latin1                         |
        | character_set_server            | latin1                         |
        | character_set_system            | utf8                           |
        | character_sets_dir              | /usr/share/mysql/charsets/     |
        | collation_connection            | latin1_swedish_ci              |
        | collation_database              | latin1_swedish_ci              |
        | collation_server                | latin1_swedish_ci              |
        | completion_type                 | 0                              |
        | concurrent_insert               | 1                              |
        | connect_timeout                 | 10                             |
        | datadir                         | /var/lib/mysql/                |
        | date_format                     | %Y-%m-%d                       |
        | datetime_format                 | %Y-%m-%d %H:%i:%s              |
        | default_week_format             | 0                              |
        | delay_key_write                 | ON                             |
        | delayed_insert_limit            | 100                            |
        | delayed_insert_timeout          | 300                            |
        | delayed_queue_size              | 1000                           |
        | div_precision_increment         | 4                              |
        | keep_files_on_create            | OFF                            |
        | engine_condition_pushdown       | OFF                            |
        | expire_logs_days                | 0                              |
        | flush                           | OFF                            |
        | flush_time                      | 0                              |
        | ft_boolean_syntax               | + -

    For some reason, that table formats properly in the preview, but apparently not when viewing the question. Hopefully it isn't needed anyway.
    Read the article

  • Windows 7 automatically logs out when logging in

    - by Luke
    A Windows 7 HP x64 computer is set to automatically log in (no password), but once it starts logging in, it begins to load the desktop after the welcome screen; before icons or background images are loaded, it goes back to the welcome screen saying 'Logging off'. I can log in with Safe Mode, and I ran a couple of different virus scans, with no detections. I also tried checking the userinit.exe file in System32 (as suggested by MANY users for Windows XP), but it's the same version as on a working system. I also checked the registry under HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon for the Shell and Userinit values, but they look normal. I tried to disable all startup items (through MSCONFIG) and select Diagnostic boot, but then I get a blue screen about the video driver not loading. Any other ideas?

    EDIT: I created a new user, and it could log in with no problems, so I'm thinking it's the NTUSER.DAT file. I renamed it to NTUSER.DAT.old, then tried logging in as the problem user. I could log in, but with a TEMP profile. His profile folder is now C:\Users\TEMP, and his old folder is still accessible, but in the wrong location.

    EDIT 2: I can't seem to turn off the TEMP profile, so I'm open to other suggestions. Copying the folders (i.e. Documents, Music, etc.) does not work, as it creates an additional TEMP.000, then TEMP.001 folder each time the user logs in.
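
    For reference, this is how I checked the two Winlogon values from an elevated prompt (the expected defaults in the comments are my assumption):

        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v Userinit
        :: expected: C:\Windows\system32\userinit.exe,
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v Shell
        :: expected: explorer.exe

    I've also read that the TEMP-profile loop usually leaves a .bak entry under HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList, which I haven't inspected yet.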

    Read the article

  • nginx won't serve an error_page in a subdirectory of the document root

    - by Brandan
    (Cross-posted from Stack Overflow; could possibly be migrated from there.) Here's a snippet of my nginx configuration:

        server {
            error_page 500 /errors/500.html;
        }

    When I cause a 500 in my application, Chrome just shows its default 500 page (Firefox and Safari show a blank page) rather than my custom error page. I know the file exists because I can visit http://server/errors/500.html and see the page. I can also move the file to the document root and change the configuration to this:

        server {
            error_page 500 /500.html;
        }

    and nginx serves the page correctly, so it doesn't seem like anything else is misconfigured on the server. I've also tried:

        server {
            error_page 500 $document_root/errors/500.html;
        }

    and:

        server {
            error_page 500 http://$http_host/errors/500.html;
        }

    and:

        server {
            error_page 500 /500.html;
            location = /500.html {
                root /path/to/errors/;
            }
        }

    with no luck. Is this expected behavior? Do error pages have to exist at the document root, or am I missing something obvious?

    Update 1: This also fails:

        server {
            error_page 500 /foo.html;
        }

    when foo.html does indeed exist in the document root. It almost seems like something else is overwriting my configuration, but this block is the only place anywhere in /etc/nginx/* that references the error_page directive. Is there any other place that could set nginx configuration?
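
    One thing my snippets don't show (so this is a guess about my own setup as much as anything): if the 500 comes from an upstream application rather than from nginx itself, I understand nginx passes the app's error body through untouched unless told to intercept it:

        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_intercept_errors on;   # fastcgi_intercept_errors for FastCGI/PHP-FPM
        }

    With interception on, upstream responses with status >= 300 are supposed to be re-dispatched through error_page, which would make the /errors/500.html mapping take effect.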

    Read the article

  • Is it safe to force a dismount to format a volume in Windows?

    - by sammyg
    I am using the format command in cmd to format a USB flash drive:

        M:\>format /FS:FAT32 /Q
        Required parameter missing -

        M:\>format M: /FS:FAT32 /Q
        Insert new disk for drive M:
        and press ENTER when ready...
        The type of the file system is FAT32.
        QuickFormatting 14999M
        Format cannot run because the volume is in use by another process.
        Format may run if this volume is dismounted first.
        ALL OPENED HANDLES TO THIS VOLUME WOULD THEN BE INVALID.
        Would you like to force a dismount on this volume? (Y/N) y
        Volume dismounted. All opened handles to this volume are now invalid.
        Initializing the File Allocation Table (FAT)...
        Volume label (11 characters, ENTER for none)?
        Format complete.
        14,6 GB total disk space.
        14,6 GB are available.
        8 192 bytes in each allocation unit.
        1 917 823 allocation units available on disk.
        32 bits in each FAT entry.
        Volume Serial Number is E00B-2739

        M:\>

    Is it safe to force a dismount like this, and make the handles invalid?

    Read the article

  • Copying files to my laptop makes them locked

    - by John
    When I save files to my local machine, e.g. from remote desktop, from email (Outlook) attachments, or from Skype, they show a locked icon on the file. Then, e.g., SQL Server doesn't let me restore backups, as it says the operating system doesn't have access to the file. I've had success fixing this by setting the ownership of the parent folder to my user and then letting it apply to subfolders. Also, sometimes I need to click Properties - Security - Advanced - Change Permissions, then check "change child permissions..." and apply on the parent dir. I'm using Windows 7 64-bit Professional on an HP ProBook 4530, and I have an administrator user. This is a real pain to do every time. I suspect it might be because of HP software that came with the laptop; I think there is drive encryption as part of the ProtectTools. Is there something in Windows I can set to change this behaviour so these files don't get locked?
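
    In case anyone else hits this, the manual ownership fix I described can at least be scripted (the folder path is a placeholder), using the built-in tools from an elevated prompt:

        takeown /F "D:\backups" /R /D Y
        icacls "D:\backups" /grant "%USERNAME%":(OI)(CI)F /T

    takeown /R takes ownership recursively, and the icacls grant with the (OI)(CI) inheritance flags gives the current user full control of the folder and everything below it.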

    Read the article
