Search Results

Search found 10384 results on 416 pages for 'plan cache'.


  • Tuning MySQL to consume less memory

    - by Alex
    I have a VM with 2GB of RAM (full specs), and I am setting up a site with one table in particular that has over a million records. There's little or no usage of this particular database (perhaps once or twice a day), but simply running MySQL grinds the whole server to a halt. I've looked through the top output, but nothing is really denting the CPU; the memory seems to be the issue. The site isn't even live or taking requests yet. The memory situation looks like this:

        # free -m
                     total       used       free     shared    buffers     cached
        Mem:          2006       1880        126          0          3         53
        -/+ buffers/cache:       1823        183
        Swap:         2047        345       1702

    Are there any good pointers on tuning MySQL so it stops hogging the system memory? Thanks very much. EDIT (requested by 8bit): http://tny.cz/b41a0b12
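
    For reference, most of MySQL's footprint comes from a handful of buffer settings in my.cnf. A minimal sketch of the kind of values a rarely-used 2GB box might use (these numbers are assumptions, not a tested recommendation):

        [mysqld]
        key_buffer_size         = 16M    # MyISAM index cache
        innodb_buffer_pool_size = 128M   # main InnoDB cache; size it to the working set
        query_cache_size        = 8M
        max_connections         = 25     # each connection reserves per-thread buffers
        tmp_table_size          = 16M
        max_heap_table_size     = 16M

    Restarting mysqld after the change and watching free -m again shows whether the resident size actually drops.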

    Read the article

  • Does Windows notice when a VM is moved around?

    - by Martin
    I'm thinking of migrating a desktop machine (Windows XP) to a VM solution (VirtualBox or MS Virtual PC). The reason is that I need new hardware anyway and I don't want to (and cannot properly) reinstall all the "business" apps on there. So my plan goes as follows: I'll pull an image of the machine and restore it to a virtual machine using Acronis Universal Restore or some other tool that can restore to dissimilar hardware. (The process is largely irrelevant for this question, I think.) Once I have this virtual machine properly running, I'll move it to a new PC. So the question now is: are there any caveats with regard to Windows (XP?) being installed in a VM and the VM being moved around between different host computers? Can anything break in the OS inside the VM? Will there be trouble with Windows activation?

    Read the article

  • XCache permission issue

    - by Guilhem Soulas
    I successfully installed XCache on a Linux server where WHM/cPanel is installed too. However, when I go to the admin interface it shows me only the cache for the owner of the folder (i.e. one cPanel account). So I copied the admin interface to another cPanel account on the same server, and I'm surprised by the memory allocation, because XCache seems to use the memory I set in php.ini for EACH user. So if I set the memory size to 64M and I have 10 users, doesn't that mean XCache can take up to 640M? That's not the behaviour I want, because I can't control the memory in that case! I would like to set 64M once for the entire server.
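
    For context, XCache's cache size is normally set once in the server-wide php.ini (or an xcache.ini include). A minimal sketch of the relevant directives, assuming the shared-memory cache is meant to be global rather than per account:

        [xcache]
        xcache.size     = 64M   ; total opcode cache shared memory
        xcache.count    = 2     ; number of cache segments (often set to the CPU count)
        xcache.var_size = 16M

    Whether that 64M is allocated once or once per user depends on how PHP is run: mod_php shares the mapping across the Apache workers, while suPHP/CGI-style per-user PHP processes each map their own cache, which is worth checking on a cPanel box.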

    Read the article

  • Backup of images

    - by Sam Kong
    I've just installed Ubuntu for a file server. It will share a folder (Samba) and employees of my company will save photos to it. Currently the photos total about 100GB, and about 20MB will be added every day. My question is about a backup plan. I want to back up the photos to a remote server using a cron job. I can think of two options: rsync and git. The image files won't be changed, so rsync will do. But some people say I should keep all my data in git. What would you do? Thanks. Sam
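
    For what it's worth, a minimal cron + rsync sketch for this kind of write-once photo archive (the paths, remote host and schedule are placeholders):

        # /etc/cron.d/photo-backup : push new photos to the remote box at 02:30 every night
        30 2 * * * root rsync -az --partial /srv/samba/photos/ backup@backup.example.com:/backups/photos/

    The -a flag preserves permissions and timestamps, -z compresses over the wire, and because the images never change, each run only transfers the roughly 20MB of new files. Git would keep every binary blob in history forever, which mostly costs disk space without adding much for files that are never edited.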

    Read the article

• ESX 4 - c7000 - Cisco 3020

    - by gdavid
    I have 4 blades with ESX 4 installed in an HP c7000 enclosure. They have 6 Cisco 3020 for HP switches in the back end. The plan was to use 2 switches for iSCSI traffic and the other 4 for data traffic. I am having a problem trunking the switches to our existing environment. The documentation I keep finding online has commands/features that are not available on the 3020 switch. Does anyone have this setup anywhere? I am looking to do Virtual Switch Tagging (VST) so I can control the machines' VLANs via the port groups. The only time any configuration worked for us was when our network team applied the command switchport native vlan x; that setup only allowed VLAN x to pass traffic, and only when the port group was in VLAN 0. Ideas? Thanks for any help. -GD
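
    A minimal sketch of the switch-side config that VST usually needs on the uplink and blade-facing ports (interface name and VLAN numbers are placeholders, and the exact command set should be confirmed against the 3020's own IOS image):

        interface GigabitEthernet0/17
         description uplink to existing network
         ! the encapsulation command exists only on platforms that also support ISL; skip it if rejected
         switchport trunk encapsulation dot1q
         switchport mode trunk
         switchport trunk allowed vlan 10,20,30

    With the uplinks and the blade-facing ports carrying tagged frames for all the required VLANs, the ESX port groups can apply their own VLAN IDs and switchport native vlan x no longer has to do the tagging for them.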

    Read the article

• Connecting internet TV modem to repeater router, will it work?

    - by Sandro Dzneladze
    I have internet TV at home; it works via a special modem which connects to the router via a LAN interface. I'd like to move the TV to a room which has no router, so I'd like to use Wi-Fi for the internet TV. My plan is this: buy another Wi-Fi router, set it to repeat the signal of the primary router, and attach the TV modem to the repeater router via its LAN interface. Will this work? I have a limited understanding of how internet TV works, so I'm not sure if my strategy will work. Does the router have to have some special feature to allow this service?

    Read the article

  • How Do You Stress-Test Your Hard Drives?

    - by MetaHyperBolic
    When looking for large new drives (>= 1 TB) on Newegg and the like, I note a number of reviews talking about drives being either D.O.A. or hitting the Click of Death (or even releasing the Magic Smoke) within a week or so of use. A portion of the reviews mention this phenomenon whether the drive in question is Western Digital or Hitachi or whatever. For those of you using Windows, what do you do to:
      1) Place a large initial stress on the drive to see if it can take it? For how long?
      2) Test the drive afterwards (presumably with some sort of S.M.A.R.T. tool or others) to see if any negative changes have been noted?
    Note: This is one component of a larger plan for both high availability and backups for my home data.
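
    One possible burn-in routine, sketched with smartmontools (which also ships Windows builds); the device name below is a placeholder:

        # Record the baseline SMART attributes
        smartctl -a /dev/sda > smart-before.txt

        # Kick off the drive's built-in extended self-test (several hours on a 1 TB disk)
        smartctl -t long /dev/sda

        # ...after the self-test finishes, capture the attributes and error log again
        smartctl -a /dev/sda > smart-after.txt

    Comparing Reallocated_Sector_Ct, Current_Pending_Sector and the SMART error log before and after a few full-surface passes is usually enough to catch an early failure.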

    Read the article

  • Moving the Windows Workflow database: safe enough?

    - by Chris
    We have a Windows Workflow service that is running in the IIS context and persisting to a database between hydrations. It has the Tracking Service turned on as well. We're looking to move the database to another server, and I wanted to make sure there are no gotchas in doing so. My current plan is to spin down IIS to stop all activity, back up the database, migrate the database, then flip the connection strings in my application to point to the new one. My main concern is whether existing workflows somehow need to stay on the same database, or whether some activity needs to happen for them to work after the move. I wouldn't think so, but I'm just planning ahead.

    Read the article

  • Cannot resolve a single A Record from client machine

    - by Alex
    I set up a simple BIND server on my VPS and it is working properly. The problem occurs with my local Windows machines, which are connected to the internet through the home router. I created an A record named 'dev' and for some reason it is invisible from my local network, though people from other locations can resolve dev.mydomain.com. Ironically, dev.mydomain.com cannot be resolved only for me. If I add another A record, say 'gamma', it becomes visible from my local Windows machines instantly. So this is just for that particular 'dev' name. The only difference is that dev.mydomain.com used to point to another IP, but that was a month ago; all nameservers have been changed since then. I tried rebooting my router and flushing the DNS cache on the Windows machines: no result. Thank you in advance.
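
    A quick check that narrows this down is to query the authoritative server directly from one of the Windows machines and compare it with the resolver the router hands out (both addresses below are placeholders):

        nslookup dev.mydomain.com 203.0.113.10     # ask the BIND server on the VPS directly
        nslookup dev.mydomain.com 192.168.1.1      # ask the home router's resolver

    If the first query answers correctly and the second returns NXDOMAIN, the router (or the ISP resolver it forwards to) is still serving a cached negative answer for 'dev' from when the record pointed elsewhere, and it should clear once the negative-cache TTL expires.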

    Read the article

  • How to choose between Mac Laptop and Desktop

    - by Sakamoto Kazuma
    I am looking at getting a Mac soon for both iPhone development and video editing. Should I be looking at a desktop or a MacBook? I do not plan for the machine to move from my desk at home, so portability is not an issue; however, it will sit next to both a Windows 7 desktop and a Linux laptop with a dock. The main things I'm concerned about are whether a MacBook has the power needed for the video editing I'm planning to do, and whether I can afford a desktop.

    Read the article

  • DNS Help: Move domain, not mailserver

    - by Preserved
    I'm in the middle of launching a new website for an already-in-use domain. The domain has a complicated email system, so we'd like to move that over to the new server a bit later on. Currently the domain's DNS is managed by the current webhost. I plan on moving DNS management back to Network Solutions, then pointing the A record to the new website's IP. However, currently the DNS has the MX record pointing at the same address as the A record. Once Network Solutions is managing the DNS and I point the A record to the new IP, the MX record can no longer simply follow the A record.

    Right now:
      - A record: mydomain.com points to IP address 198.198.198.198
      - MX record: mydomain.com points to IP address 198.198.198.198

    What I want:
      - A record: mydomain.com points to the IP address of the new server
      - MX record: somehow points to the current, existing mailserver

    Does this even make sense?
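
    A minimal zone-file sketch of the idea (the mail host name and both IPs are placeholders; MX records must point at a hostname, never an IP, so the mail server keeps its own A record):

        www    IN  A     203.0.113.50          ; new web server
        @      IN  A     203.0.113.50          ; bare domain follows the web server
        mail   IN  A     198.198.198.198       ; existing mail server keeps its old address
        @      IN  MX 10 mail.mydomain.com.    ; MX points at the mail host name

    This way the web A record can move to the new server while mail continues to flow to the old box until the email system is migrated later.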

    Read the article

  • Need help upgrading MacBookPro3,1 RAM to 4GB.

    - by Fantomas
    My questions are:
      1) Where to buy it and what to buy? I have heard that this RAM is generic enough and it does not have to come from Apple.
      2) Can I reuse my existing stick(s)? Would I have a single 2GB module, or 2 x 1GB modules?
      3) If I have 2GB already, is it a good idea to have one old stick and one new one? Which one is better placed at the top and which one at the bottom?
    Let me know what questions you have. My computer's info:

        Hardware Overview:
          Model Name:             MacBook Pro
          Model Identifier:       MacBookPro3,1
          Processor Name:         Intel Core 2 Duo
          Processor Speed:        2.4 GHz
          Number Of Processors:   1
          Total Number Of Cores:  2
          L2 Cache:               4 MB
          Memory:                 2 GB
          Bus Speed:              800 MHz
          Boot ROM Version:       MBP31.0070.B07
          SMC Version (system):   1.16f11

    Read the article

  • Running out of LowMem with Ubuntu PAE Kernel and 32GB of RAM

    - by magneticMonster
    I'm running a Java data import process on a 32-bit Ubuntu 10 PAE kernel machine. After running the process for a while, the oom-killer zaps my Java process. After some Googling and digging through docs, it looks like the system is running out of LowMem. I started the process for the third time and am watching free -lm show me "Low: 464 386 77", with the free value (77MB) slowly decreasing. Why am I running out of lowmem and how do I increase it? Some details:

        $ cat /proc/sys/vm/lowmem_reserve_ratio
        256     256     32

        $ free -lm
                     total       used       free     shared    buffers     cached
        Mem:         32086      24611       7475          0          0      24012
        Low:           464        407         57
        High:        31621      24204       7417
        -/+ buffers/cache:         598      31487
        Swap:         2047          0       2047
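
    A small sketch for watching the low zone directly while the import runs (LowTotal and LowFree are the standard /proc/meminfo fields on a 32-bit highmem kernel):

        watch -n 5 'grep -i -e lowtotal -e lowfree /proc/meminfo'

    If LowFree keeps shrinking while overall free memory stays high, the pressure really is confined to the small low zone, which is the usual argument for moving a 32GB box to a 64-bit kernel rather than trying to tune lowmem_reserve_ratio.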

    Read the article

  • Logging violations of rules in limits.conf

    - by PaulDaviesC
    I am trying to log the details of programs that failed due to the limit caps defined in limits.conf. My initial plan was to do it using the audit system; the idea was to track the system calls related to the limits in limits.conf that failed. However, the problem with this approach is that it is not possible to track violations of CPU time, since that violation does not involve the failure of a system call. In the case of CPU time, what happens is that the program which exceeded its CPU time is delivered a SIGXCPU. So my question is: how should I go about logging the programs that violated their CPU time limit? Also, are there any limits.conf-specific logs available?

    Read the article

• nslookup gives wrong IP for my domain

    - by Werulz
    I am having some problems trying to set up DNS for my domain on my server. This tutorial normally works fine for me, but when I tried to look up my domain it gave the following output:

        Server:   4.2.2.1
        Address:  4.2.2.1#53

        Non-authoritative answer:
        119.100.79.64.in-addr.arpa  name = server.leech4ever.com.

        Authoritative answers can be found from:

    The server and the address are wrong according to the tutorial. Here is the tutorial: http://webcache.googleusercontent.com/search?q=cache:rR7Z4YU4GI0J:www.broexperts.com/2012/03/linux-dns-bind-configuration-on-centos-6-2/+broexperts+bind&cd=1&hl=en&ct=clnk&gl=mu

        /etc/hosts
        127.0.0.1       localhost
        64.79.100.119   server.leech4ever.com server

        /etc/resolve.conf
        search leech4ever.com
        nameserver 64.79.100.119

        /etc/resolv.conf
        nameserver 4.2.2.1
        nameserver 4.2.2.2

    How do I solve this problem, guys? The tutorial was flawless until I did a server restore.
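
    A quick diagnostic sketch to see which resolver is actually answering (adjust the record name as needed):

        # Ask the local BIND instance directly
        dig @64.79.100.119 leech4ever.com A +short

        # Ask the public resolver currently listed in /etc/resolv.conf
        dig @4.2.2.1 leech4ever.com A +short

    If the first query returns the expected address and the second does not, the box is simply resolving through the public nameservers; pointing /etc/resolv.conf back at 64.79.100.119 (or fixing the zone's NS and A records after the restore) is the usual next step.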

    Read the article

  • Which file system to choose from when formatting 1.5TB hard drive (hdd)

    - by MaxiWheat
    I plan to buy a 1.5TB hard drive soon. I would like to know which file system to choose when I format it. With FAT32, there is a limitation on the maximum file size (4GB) that bugs me, since I might save large files such as DVD images which are over 4GB. On the other hand, NTFS lets me save larger files, but seems less compatible with operating systems other than Windows and is also proprietary to Microsoft. Are there other alternatives? Can you give me your advice?

    Read the article

  • Route eth0 to internet traffic and eth1 to local traffic

    - by Romain Caire
    How can I route all my internet traffic on eth0 (everything except 192.168.1.0/24) and route my local traffic through eth1 (192.168.1.0/24)? Here is my attempt:

        # Flush ALL THE THINGS.
        ip route flush table main
        # Restore the main table. I flushed it because OpenVPN does weird things to it.
        ip route add 127.0.0.0/8 via 127.0.0.1 dev lo
        ip route add 0.0.0.0/0 via 164.67.195.1
        ip route add 192.168.1.0/24 via 192.168.1.1
        ip route flush cache
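
    A sketch of what the routes could look like with the interfaces pinned explicitly (the gateway address is taken from the attempt above and may need adjusting; on most setups the 192.168.1.0/24 route is a directly connected one, so it takes a dev rather than a via):

        ip route add 192.168.1.0/24 dev eth1              # local LAN, link scope on eth1
        ip route add default via 164.67.195.1 dev eth0    # everything else goes out eth0
        ip route flush cache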

    Read the article

  • Dropbox context menu missing in OS X

    - by slhck
    Problem: My Dropbox context menu is missing in OS X Snow Leopard (10.6.8). While the Dropbox service runs normally, Finder doesn't show the icons and also doesn't give me the ability to browse files on the website or copy the public link.

    What I've tried:
      - Removed ~/.dropbox and ~/Dropbox/.dropbox.cache
      - Reinstalled Dropbox.app (both 1.4.7 stable and 1.5.0 experimental) and went through the setup again
      - Restarted Finder
      - Logged out and back in

    All of these I've done over and over again, in random permutations. I've made sure that Dropbox appears in the Login Items under my account (and I've never touched that). I don't know if ~/Library/Contextual Menu Items is missing the Dropbox plugin or if there shouldn't be one after all. In any case, I can't get the icons or the menu to appear.

    Read the article

  • How to benchmark kernel (-Os vs -O2)

    - by NightwishFan
    It seems logical to me that on a 64-bit kernel, compiling it to optimize for size might help overall. (My distro of choice uses -O2.) It has the benefits of more registers and memory, and perhaps less cache contention than normally optimized code. I have a kernel compiled like this and it seems excellent. However, my question is: how can I prove this? I like using Phoronix for "real world" benchmarks, so I would prefer test cases like that. What should I pick to test? Does anyone else have any alternatives? Thank you very much in advance.
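
    One way to get repeatable numbers is to run the same Phoronix Test Suite profiles under each kernel and compare the saved results. A sketch of the idea; the profile names below are assumptions and can be swapped for whatever workloads matter to you:

        # Boot the -O2 kernel, then:
        phoronix-test-suite benchmark pts/build-linux-kernel pts/apache

        # Reboot into the -Os kernel and run the identical tests:
        phoronix-test-suite benchmark pts/build-linux-kernel pts/apache

    Kernel-sensitive profiles (compilation, context switching, networking) will expose cache effects far better than GPU-bound or purely disk-bound ones.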

    Read the article

  • Highly Available Web Application (LAMP)

    - by Anthony Rizzo
    I work for a small company that provides a web application for thousands of users. Earlier this year they had one server hosted at one company. We recently acquired another server in a different location, with the hope of one day making it a redundant failover machine. I understand what to do with the MySQL replication (I plan on using a master-master replication setup) and rsync to sync the scripts and files; however, I am at a standstill about how to configure the failover. Ideally I would like both machines to accept requests, like round-robin DNS, but if one machine goes down I do not want requests to go to that machine. All of the solutions I have come across assume high availability of servers in the same location; these servers are in two completely different locations with different public IP addresses. Any help would be great. Thanks

    Read the article

  • nginx hashing on GET parameter

    - by Sparsh Gupta
    I have two Varnish servers and I plan to add more. I am using an nginx load balancer to divide traffic between these Varnish servers. To make the most of each Varnish server's RAM, I need the same request to reach the same Varnish server. The same request can be identified by one GET parameter in the request URL, say 'a'. In normal code, I would do something like this (if I need to divide all traffic between 2 Varnish servers):

        if($arg_a % 2 == 0) { proxy_pass varnish1; }
        if($arg_a % 2 == 1) { proxy_pass varnish2; }

    This is basically doing an even/odd check on the GET parameter a and then deciding which upstream pool to send the request to. My questions are:
      - What is the nginx equivalent of such code? I don't know if nginx accepts modulo operations.
      - Is there a better or more efficient hashing function built into nginx (0.8.54) that I could use? In the future I want to add more upstream pools, so I would rather not have to change %2 to %3, %4 and so on.
      - Is there any other alternate way to solve this problem?
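
    For reference, newer nginx releases (1.7.2 and later) have a built-in consistent-hash directive in the upstream block that does exactly this; on 0.8.54 a third-party upstream hash module is the usual substitute. A minimal sketch of the modern form, with placeholder backend addresses:

        upstream varnish_pool {
            hash $arg_a consistent;    # the same value of ?a=... always maps to the same backend
            server 10.0.0.11:6081;     # varnish1
            server 10.0.0.12:6081;     # varnish2
            # add more servers here; consistent hashing limits remapping when the pool grows
        }

        server {
            listen 80;
            location / {
                proxy_pass http://varnish_pool;
            }
        }

    Because the hash is consistent, adding a third or fourth Varnish server only remaps a fraction of the keys instead of reshuffling every cached object.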

    Read the article

  • Unable to run Microsoft Office 2010 install file

    - by Len
    This problem began when I noticed that the icons in the Windows 7 task bar for MS Word and Outlook were generic. I rebuilt the icon cache. Still not the right icons, but not the generic "document" icons either, and both are identical (to each other). The two programs seem to be working OK. So then I tried to repair MS Office. I ran the setup file. It extracts the files, I get the splash screen, and then the message "Setup has stopped working. A problem caused the program to stop working correctly. Windows will close the program and notify you if a solution is available." with a "Close program" button. Microsoft does not notify me about a solution.

    What I have tried:
      1. running two other copies of the setup program;
      2. doing an in-place re-install of Windows 7.

    Read the article

• Cached CSS/JavaScript files on Sun Java System Web Server

    - by Derp
    I'm doing front-end web development in a Solaris 10 / Sun Java System Web Server 7.0U2 environment. I have noticed that changes to static CSS or JavaScript files often do not take effect immediately, whereas changes to static HTML files always do. My best guess is that a default setting in the web server causes it to cache certain file types in order to provide reasonable performance out of the box. I don't have the admin server running, so I'll need to edit the config files by hand. What change(s) can I make so that all of my CSS and JavaScript edits take effect immediately? Thanks!

    Read the article

• Wildcard host name bindings for multiple subdomains in multiple sites on IIS7 with a single IP address

    - by orca
    Situation:
      - I have a single Windows 2008 server with a single public IP address.
      - I have multiple domains with wildcard A records pointing to that single IP address.
      - I need each domain to be hosted by a different web site (i.e. www.domain1.com by the site domain1site).
      - I need domain1.com to act like www.domain1.com.
      - I need each site to be able to have multiple subdomains (i.e. www.domain1.com, abc.domain1.com, xyz.domain1.com).

    Not directly relevant yet, but here it goes: I plan to handle each subdomain with a different application hosted in the same site (i.e. application /xyz in domain1site). However, I found out that IIS7 does not support creating web sites with a wildcard host name binding, and setting a binding without any subdomain (i.e. domain1.com) does not work, even for www.domain1.com. Is there a simple solution? Does any IIS extension, like Application Request Routing, provide such a capability?
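
    In the absence of true wildcard host names, one common workaround is to add an explicit binding per host name to each site. A sketch using appcmd (the site name and host names are placeholders):

        %windir%\system32\inetsrv\appcmd set site /site.name:"domain1site" /+bindings.[protocol='http',bindingInformation='*:80:domain1.com']
        %windir%\system32\inetsrv\appcmd set site /site.name:"domain1site" /+bindings.[protocol='http',bindingInformation='*:80:www.domain1.com']
        %windir%\system32\inetsrv\appcmd set site /site.name:"domain1site" /+bindings.[protocol='http',bindingInformation='*:80:xyz.domain1.com']

    Alternatively, a single binding with a blank host header (bindingInformation='*:80:') catches every host name that reaches that IP, but then only one site can own port 80 on that address, which conflicts with the one-site-per-domain requirement above.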

    Read the article

• Need hard disk recommendation for a Linux home server

    - by neotracker
    Hello, I'm planning to build a little Linux home server. It will mainly be used for storage and maybe as a media PC. I plan to build a software RAID 5 with four 1.5TB or 2TB hard drives. I had already decided to use the Western Digital Caviar Green 1.5 TB drive, but then I read about problems with the WD Green series: many drives failing, and that they are not recommended for RAID anyway. Of course, I couldn't find many facts on the issue, so I thought I'd just ask here ;-) What hard drives would you recommend for a software RAID 5 setup? As I only need it for storage, the whole thing doesn't have to be too fast, so I prefer a low price and silence over great performance.

    Read the article
