Search Results

Search found 1815 results on 73 pages for 'andrew kelly'.

Page 22/73 | < Previous Page | 18 19 20 21 22 23 24 25 26 27 28 29  | Next Page >

  • HAproxy roundrobin balancing does not appear to be distributing evenly

    - by andrew
    Hello, I know that with loaded servers, roundrobin in HAproxy (1.4.4) does not distribute evenly, but my servers are currently getting NO traffic (test setup), and roundrobin balancing gives www1,www1,www1,www1,www1,...www2,www2,www2,...,www1... I'm verifying this by having the script that runs on each server cat /etc/HOSTNAME (Slackware). I need it to switch back and forth on each request to test some session stuff (stored in shared memcached), but am having trouble getting it to alternate between my two web servers. My configuration:

        global
            log 127.0.0.1 local0 warning
            maxconn 4096
            chroot /usr/share/haproxy
            pidfile /var/run/haproxy.pid
            uid 99
            gid 99
            daemon

        defaults
            balance roundrobin
            fullconn 100
            maxconn 4096
            mode http
            option dontlognull
            option http-server-close
            option forwardfor
            option redispatch
            retries 3
            timeout connect 5000
            timeout client 20000
            timeout server 60000
            timeout queue 60000
            stats enable
            stats uri /haproxy
            stats auth ***:***

        frontend www *:80
            log global
            acl is_upload hdr_dom(host) -i uploads.site.com
            acl is_api hdr_dom(host) -i api.site.com
            acl is_dev hdr_dom(host) -i dev.site.com
            acl is_apidev hdr_dom(host) -i apidev.site.com
            use_backend uploads.site.com if is_upload
            use_backend api.site.com if is_api
            use_backend dev.site.com if is_dev !is_apidev
            default_backend site.com

        backend site.com
            option httpchk HEAD /alive.php HTTP/1.1\r\nHost:site.com
            server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
            server www2 1.1.1.2:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

        backend api.site.com
            option httpchk HEAD /alive.php HTTP/1.1\r\nHost:api.site.com
            server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
            server www2 1.1.1.2:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

        backend dev.site.com
            option httpchk HEAD /alive.php HTTP/1.1\r\nHost:dev.site.com
            server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
            server www2 1.1.1.2:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

        backend uploads.site.com
            option httpchk HEAD /alive.php HTTP/1.1\r\nHost:uploads.site.com
            server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
            server www2 1.1.1.2:8080 backup weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

    So basically, I have several different back-ends (I've verified the ACLs are working), with the default "roundrobin" balancing selected. I've tried removing weights, removing the minconn/maxconn/fullconn attributes for all servers (not just the backend I'm testing), removing the ACLs, etc. I've been testing against dev.site.com, by the way. Does anyone see a reason why I can't get something like www1,www2,www1,www2,...? Also, this is one of my first questions on here, so please let me know if I left anything needed out of my post. Thanks!
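    A hedged way to narrow this down is to test against a stripped-down backend with none of the weight/connection tuning, reusing the addresses from the question above; if this alternates cleanly between www1 and www2, the extra parameters (or connection reuse on the client side) are the place to look:

        backend dev.site.com
            balance roundrobin
            option httpchk HEAD /alive.php HTTP/1.1\r\nHost:dev.site.com
            server www1 1.1.1.1:8080 check
            server www2 1.1.1.2:8080 check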

    Read the article

  • Install Chromium OS to SECOND internal drive on EEE 901?

    - by Andrew Swift
    I am trying to install the Chromium OS on an EEE PC 901, and I have succeeded in using Image Writer for Windows 0.2r23 to copy the IMG file to an SDHC card. Since the OS speed is limited by slow card access, I'd like to install Chromium OS on the second, unused, internal SSD drive, D:. However, Image Writer doesn't allow me to restore an internal drive from an IMG file. To be clear: I boot into XP on C: and then run Image Writer to install Chromium OS. Does anyone know how I can either convince Image Writer that D: is a removable drive, or know of an alternative program that will let me restore D: from an IMG file (non-Windows file system)?
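    One hedged workaround, untested here: skip the Windows tooling entirely, boot a Linux live USB (or the SDHC card itself), and write the image with dd. The device name is an assumption — verify which device is the second internal SSD with fdisk -l before writing, since dd overwrites the target without asking:

        sudo fdisk -l                                  # identify the second internal SSD by size
        sudo dd if=chromiumos.img of=/dev/sdb bs=4M    # /dev/sdb is only a guess for the second SSD
        sync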

    Read the article

  • htaccess rewrite different folder url, two index files

    - by Andrew
    I've been searching for a while now and haven't found anything that comes close to what I'm trying to accomplish. Right now my URLs look like this: www.website.com/something, which are handled by the root /index.php. Now I have created plugins within folders: /plugins/PLUGINNAME/index.php. I want to be able to have URLs like www.website.com/plugins/PLUGINNAME/anything/iwant/here, which are all handled by /plugins/PLUGINNAME/index.php and not the root index.php. Currently www.website.com/plugins/PLUGINNAME/ works, but anything after /PLUGINNAME/xxx falls back to the root /index.php.
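    A hedged sketch of the kind of per-plugin rule that could do this, placed in /plugins/PLUGINNAME/.htaccess — it assumes mod_rewrite is enabled and that AllowOverride permits rewrite rules in that directory; existing files and directories are still served directly:

        RewriteEngine On
        RewriteBase /plugins/PLUGINNAME/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^ index.php [L]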

    Read the article

  • Replacing files in a folder structure with files from an unsorted folder

    - by Andrew
    I have over 50,000 PDFs organized into folders inside a folder called PDFACT. I needed to compress these files, so I ran them through Adobe to batch compress them, and this worked—except Adobe could only output the files without their folder structure. So basically I had 50,000 PDFs set up in a folder with hundreds of subfolders, all organized, and I ended up with one output folder with 50,000 compressed PDFs in it, just in alphabetical order. Somehow I need to replace all the original PDFs with their compressed copies. Let me give an example: in the folder PDFACT we have the following file: C:\PDFACT\BIG DINNER\BILL\NEWESTBILL.PDF … and in the output folder that Adobe created we have just: C:\COMPRESSED_PDF_FOLDER\NEWESTBILL.PDF This copy is smaller than the one in PDFACT and has the same name, but it is just lumped in with every other PDF. The folder structure and subfolders are gone. Is there any way to replace all the larger uncompressed PDFs inside the original folder structure with their now-compressed counterparts?
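    A minimal sketch of a batch script that could do the replacement, assuming every PDF file name is unique across the PDFACT tree (if two subfolders contained files with the same name, both would receive the same compressed copy); the folder names are taken from the question:

        @echo off
        rem walk the original tree and overwrite each PDF with its compressed twin, if one exists
        for /r "C:\PDFACT" %%F in (*.pdf) do (
            if exist "C:\COMPRESSED_PDF_FOLDER\%%~nxF" copy /y "C:\COMPRESSED_PDF_FOLDER\%%~nxF" "%%F" >nul
        )

    Running it against a copy of PDFACT first would be a sensible precaution.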

    Read the article

  • Cannot load from raid with grub

    - by Andrew Answer
    I have a RAID1 array on my Ubuntu 12.04 LTS system, and my /dev/sda HDD was replaced several days ago. I used these commands to replace it:

        # go to superuser
        sudo bash
        # see RAID state
        mdadm -Q -D /dev/md0
        # State should be "clean, degraded"

        # remove broken disk from RAID
        mdadm /dev/md0 --fail /dev/sda1
        mdadm /dev/md0 --remove /dev/sda1
        # see partitions
        fdisk -l
        # shutdown computer
        shutdown now

        # physically replace old disk by new
        # start system again

        # see partitions
        fdisk -l
        # copy partitions from sdb to sda
        sfdisk -d /dev/sdb | sfdisk /dev/sda
        # recreate id for sda
        sfdisk --change-id /dev/sda 1 fd
        # add sda1 to RAID
        mdadm /dev/md0 --add /dev/sda1
        # see RAID state
        mdadm -Q -D /dev/md0
        # State should be "clean, degraded, recovering"
        # to see status you can use
        cat /proc/mdstat

    After the rebuild completed, "fdisk -l" says that I have no valid partition table on /dev/md0. So: 1) "update-grub" finds only the /sda and /sdb Linux installs, not /md0; 2) "dpkg-reconfigure grub-pc" says "GRUB failed to install the following devices /dev/md0". I cannot boot my system except from /sdb1 and /sda1, and then only in DEGRADED mode... This is my partial fdisk -l output:

        Disk /dev/sdb: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000667ca

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *          63   940910984   470455461   fd  Linux raid autodetect
        /dev/sdb2       940910985   976768064    17928540    5  Extended
        /dev/sdb5       940911048   976768064    17928508+  82  Linux swap / Solaris

        Disk /dev/md0: 481.7 GB, 481746288640 bytes
        2 heads, 4 sectors/track, 117613840 cylinders, total 940910720 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/md0 doesn't contain a valid partition table

    Can anybody resolve this issue? It is giving me a big headache.
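    Two hedged notes: "doesn't contain a valid partition table" is normal for an md array built from partitions (there is a filesystem on /dev/md0, not a partition table), and GRUB generally needs to be installed to the MBR of each physical member disk rather than to /dev/md0. A sketch of the usual follow-up once the resync has finished, with device names as in the question:

        sudo grub-install /dev/sda
        sudo grub-install /dev/sdb
        sudo update-grub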

    Read the article

  • Win 7 Home Premium 64 bit running Cobian Backup 11 (Gravity)

    - by Andrew
    I'm really enjoying Cobian 11, but am fairly new to it. My question is this. I back up a pretty large folder on a regular basis. I started off by doing a Full backup, and have followed that monthly using differential backups. I was told that, to restore my computer after a crash, I need to copy back the original full backup AND copy back the latest differential over the full. That's fine. However, over the months there are quite a few large differential backups dated between the original Full one and the latest differential one. To free space on my backup HD, can I every now and then delete the differential backups that lie between the original Full and the latest differential, and just leave the original Full and the latest differential backup on the HD?

    Read the article

  • Add 802.11n to existing 802.11g environment

    - by Andrew Robinson
    I have a small home network that is currently running 802.11g: two computers that are capable of 802.11n and two devices (a BlackBerry and a Skype phone) that are limited to 802.11g. I have a few neighbors running 802.11g, but their signals are very weak. How big an impact will the two G devices have on N speeds? Will they pull the whole network back down to G? These two devices are hardly ever used, whereas the other N devices are heavily used. If I add an N router to the network (instead of replacing the G), set my existing G router to use channel 1 with 20MHz bandwidth, and then set the N router to use channels 6 & 11 for 40MHz, will I eliminate the overlap and allow for full speed on both networks?

    Read the article

  • Cannot properly read files on the local server

    - by Andrew Bestic
    I'm running a RedHat 6.2 Amazon EC2 instance using stock Apache and IUS PHP53u+MySQL (+mbstring, +mysqli, +mcrypt), and phpMyAdmin from git. All configuration is near-vanilla, assuming the described installation procedure. I've been trying to import SQL files into the database using phpMyAdmin to read them from a directory on my server. phpMyAdmin lists the files fine in the drop down, but returns a "File could not be read" error when actually trying to import. Furthermore, when trying to execute file_get_contents() on the file, it also returns a "failed to open stream: Permission denied" error. In fact, when my brother was attempting to import the SQL files using MySQL "SOURCE" as an authenticated MySQL user with ALL PRIVILEGES, he was getting an error reading the file. It seems that we are unable to read/import these files with ANY method other than root under SSH (although I can't say I've tried every possible method). I have never had this issue under regular CentOS (5, 6, 6.2) installations with the same LAMP stack configuration. Some things I've tried after searching Google and StackExchange:

    - CHMOD 0777 both directory and files
    - CHOWN root, apache (only two users I can think of that PHP would use)
    - Importing SQL files with total size under both upload_max_filesize and post_max_size
    - PHP open_basedir commented out, or = "/var/www" (my sites are using Apache VirtualHosts within that directory, and all the SQL files are deep within that directory)
    - PHP safe mode is OFF (it was never ON)

    At the moment I have solved this issue with the smaller files by using the FILE UPLOAD method directly to phpMyAdmin, but this will not be suitable for uploading my 200+ MiB SQL files as I don't have a stable Internet connection. Any light you could shed on this situation would be greatly appreciated. I'm fair with Linux, and for the things that do stump me, Google usually has an answer. Not this time, though!
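    A few hedged checks that the list above doesn't mention — walking permissions along the whole path, reading the file as the web-server user, and (on a RedHat-derived AMI) ruling out SELinux; the path below is a placeholder:

        namei -l /var/www/example.com/sql/dump.sql                         # owner/mode of every path component
        sudo -u apache cat /var/www/example.com/sql/dump.sql >/dev/null    # read test as the apache user
        getenforce                                                         # "Enforcing" means SELinux could be denying the read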

    Read the article

  • How to create a password-less service account in AD?

    - by Andrew White
    Is it possible to create domain accounts that can only be accessed via a domain administrator or similar access? The goal is to create domain users that have certain network access based on their task, but these users are only meant for automated jobs. As such, they don't need passwords, and a domain admin can always do a run-as to drop down to the correct user to run the job. No password means no chance of someone guessing it, or of it being written down or lost. This may belong on SuperUser or ServerFault, but I am going to try here first since it's on the fuzzy border to me. I am also open to constructive alternatives.
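    If the domain is at the Windows Server 2008 R2 functional level, managed service accounts are the closest built-in fit: AD generates and rotates the password, so no human ever knows it. A hedged PowerShell sketch with placeholder names:

        Import-Module ActiveDirectory
        New-ADServiceAccount -Name "svc-batchjobs" -Enabled $true
        Add-ADComputerServiceAccount -Identity APPSERVER01 -ServiceAccount "svc-batchjobs"
        # then, on APPSERVER01 itself:
        Install-ADServiceAccount -Identity "svc-batchjobs"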

    Read the article

  • How to install rmagick on Ubuntu 10.04?

    - by Andrew
    Here's what I've done so far:

        sudo apt-get install imagemagick libmagickcore-dev

    This did not throw any errors, so I think that ImageMagick is installed fine. Then I tried installing the gem:

        sudo gem install rmagick

    This resulted in the following error:

        ERROR:  Error installing rmagick:
            ERROR: Failed to build gem native extension.

        /usr/bin/ruby1.8 extconf.rb
        checking for Ruby version >= 1.8.5... yes
        checking for gcc... yes
        checking for Magick-config... yes
        checking for ImageMagick version >= 6.4.9... yes
        checking for HDRI disabled version of ImageMagick... yes
        checking for stdint.h... yes
        checking for sys/types.h... yes
        checking for wand/MagickWand.h... no

        Can't install RMagick 2.13.1. Can't find MagickWand.h.
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of
        necessary libraries and/or headers.  Check the mkmf.log file for more details.
        You may need configuration options.

        Provided configuration options:
            --with-opt-dir
            --without-opt-dir
            --with-opt-include
            --without-opt-include=${opt-dir}/include
            --with-opt-lib
            --without-opt-lib=${opt-dir}/lib
            --with-make-prog
            --without-make-prog
            --srcdir=.
            --curdir
            --ruby=/usr/bin/ruby1.8

        Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/rmagick-2.13.1 for inspection.
        Results logged to /usr/lib/ruby/gems/1.8/gems/rmagick-2.13.1/ext/RMagick/gem_make.out

    What do I need to do to install rmagick on Ubuntu 10.04?
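    The extconf output shows the only missing piece is the wand/MagickWand.h header, which on Ubuntu ships in the MagickWand development package rather than in libmagickcore-dev. A hedged fix sketch:

        sudo apt-get install libmagickwand-dev
        sudo gem install rmagick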

    Read the article

  • Does /NOCANDY avoid any adware-related activities with OpenCandy?

    - by Andrew Grimm
    OpenCandy claims that using the /NOCANDY switch with an OpenCandy-affiliated installer allows you to avoid OpenCandy. Should I take their word for it? If not, can anyone independent of OpenCandy and their affiliates verify that /NOCANDY works? Background: I am about to install WinSCP onto a fresh Windows installation, and found out that new versions have OpenCandy associated with their installer. For the sake of balance, here's a link to WinSCP's FAQ on OpenCandy. The claim about /NOCANDY working appears on WinSCP's web site, but the same boilerplate appears on other OpenCandy web sites. If the OpenCandy people are offended by the tag "spyware": sorry, but it's the main tag here, rather than "adware".

    Read the article

  • Block p2p downloading in my office?

    - by Andrew
    I work in an education office in a third world country. We pay for Internet by the megabyte (no other choice) and have lately been using an incredible amount of bandwidth. This is because the office staff have found out about p2p sharing. As far as I know, Limewire is the only program they're using, but I'm sure it's just a matter of time before they discover the more general world of bittorrent. Using only a Linksys router (which I could flash), is there any way for me to prevent the office from destroying our bandwidth cap by downloading personal items (against policy)? Even semi-fixes would be better than nothing.

    Read the article

  • Converting a multi-sheet per page pdf to single sheet per page

    - by Andrew Aylett
    My father-in-law usually creates his newsletters pre-'booked' -- that is, two columns with the pages in the right place such that you can print and staple the newsletter. Unfortunately, this month we're using a printer that wants an un-booked PDF -- with one page per page, in the right order. I can re-order pages easily enough, but is there any way to take a PDF which is essentially 2-up and split the pages?
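    One hedged command-line option (an alternative suggestion, not something from the question): mupdf's mutool can tile each page into pieces, so cutting a two-column page down the middle into two output pages would look roughly like this, with file names as placeholders:

        mutool poster -x 2 newsletter-2up.pdf newsletter-split.pdf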

    Read the article

  • Generate a limited amount of random network traffic between 2 hosts

    - by Andrew S
    I'm trying to find a utility that will allow me to generate a constant flow of random network traffic at a specified rate between 2 hosts. The utility needs to run on Windows and OSX. I've tried iperf but it seems to be more oriented toward short-term testing/statistics and it really taxes the CPU even at slower rates. I want something that will generate traffic for a few weeks at say 10Mbps while I use other tools to monitor the impact of that level of traffic on the network.
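    For reference, a hedged iperf invocation that comes closest to the stated requirement — a fixed 10 Mbit/s UDP stream for two weeks — is sketched below; whether the CPU load stays acceptable at that rate would still need to be verified on the hosts in question:

        # receiver
        iperf -s -u
        # sender: 10 Mbit/s for 14 days (1209600 seconds); the hostname is a placeholder
        iperf -c receiver.example.com -u -b 10M -t 1209600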

    Read the article

  • How to define template for org-mode HTML export?

    - by Andrew-Dufresne
    I am using org-mode to generate HTML pages from my notes. I followed "Publishing Org-mode files to HTML" to set up a blog system. I have defined an export template, but to use it I have to add the following line at the top of every org file inside my notes project: #+SETUPFILE: ~/.emacs.d/org-templates/level-0.org Is there a way to set this up in .emacs, or to customize an org-mode variable, so that I do not have to place this line in every file? According to the org-mode manual, #+SETUPFILE is an in-buffer setting. Does this mean I cannot define it globally for all org files? These two answers on SU tell how to customize the style for HTML export, but my template file contains other settings besides CSS style, so only customizing the style won't do it for me.

    Read the article

  • Exim: How to turn off DKIM for forwarded mail?

    - by Andrew
    I have DKIM configured in Exim for outgoing mail, as per the documentation, and Exim signs all outgoing mail. But some of that outgoing mail is forwarded, thanks to a user's .forward file. This is a problem for me, because some of those messages are spam (my Exim configuration does not do any verification) and I don't want to take responsibility for them. But I can't figure out how to configure Exim not to sign these messages. My configuration is basically the Debian Squeeze default, with a few DKIM_* macros set. I can post more details, but I think seeing any example of conditional DKIM signing would set me right.
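    A hedged sketch of one common approach: on the smtp transport, make dkim_domain an expansion that comes up empty unless the envelope sender's domain is one you sign for, since Exim skips DKIM signing when dkim_domain expands to an empty string. The domain, selector, and key path are placeholders, and this would need folding into the Debian split-config/macro layout:

        dkim_domain      = ${if eq {$sender_address_domain}{example.org} {example.org}{}}
        dkim_selector    = mail
        dkim_private_key = /etc/exim4/dkim/example.org.key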

    Read the article

  • How important is dual-gigabit LAN for a super user's home NAS?

    - by Andrew
    Long story short: I'm building my own home server based on Ubuntu with 4 drives in RAID 10. Its primary purpose will be NAS and backup. Would I be making a terrible mistake by building a NAS Server with a single Gigabit NIC? Long story long: I know the absolute max I can get out of a single Gigabit port is 125MB/s, and I want this NAS to be able to handle up to 6 computers accessing files simultaneously, with up to two of them streaming video. With Ubuntu NIC-bonding and the performance of RAID 10, I can theoretically double my throughput and achieve 250MB/s (ok, not really, but it would be faster). The drives have an average read throughput of 83.87MB/s according to Tom's Hardware. The unit itself will be based on the Chenbro ES34069-BK-180 case. With my current hardware choices, it'll have this motherboard with a Core i3 CPU and 8GB of RAM. Overkill, I know, but this server will be doing other things as well (like transcoding video). Unfortunately, the only Mini-ITX boards I can find with dual-gigabit and 6 SATA ports are Intel Atom-based, and I need more processing power than an Atom has to offer. I would love to find a board with 6 SATA ports and two Gigabit LAN ports that supports a Core i3 CPU. So far, my search has come up empty. Thus, my dilemma. Should I hold out for such a board, go with an Atom-based solution, or stick with my current single-gigabit configuration? I know there are consumer NAS units with just one gigabit interface (probably most of them), but I think I will demand a lot more from my server than the average home user. Any advice is appreciated. Thanks.

    Read the article

  • How to set the subversion repository root in Debian?

    - by Andrew Whitehouse
    I have just switched from an old Fedora Core server to Debian Linux v5.0.4. Having migrated the old repository and configured access through svn+ssh, I now want to be able to access the repository with the same path on the client as before. On Fedora you could specify the repository root with "svnserve -r " but having checked the config files and svnadmin options I'm stuck as to how I can do this on Debian. Is there a way to set the repository root in Debian?
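    With svn+ssh there is no long-running svnserve daemon to pass -r to; sshd starts svnserve per connection. One hedged way to pin the root is a forced command in the repository user's ~/.ssh/authorized_keys (key material and path below are placeholders):

        command="svnserve -t -r /var/svn",no-port-forwarding,no-pty ssh-rsa AAAA...rest-of-key user@client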

    Read the article

  • Stop Windows Automatic Update in boot menu or start-up sequence

    - by Andrew
    My girlfriend has a corrupted hard drive running Windows Vista. She is getting a new hard drive and has also purchased an external hard drive to back up her data. However, Windows downloaded an automatic update and keeps getting held up and restarting when it tries to apply the update. Is there a way she can disable this from the boot menu or start-up sequence?

    Read the article

  • SUSE Linux and Xen on Mac Pro - How best to prepare and configure?

    - by Andrew J. Brehm
    This is a long-winded question, so bear with me please. I have a 2009 Mac Pro with two CPUs and 8 GB of memory, which is totally overpowered for Mac OS X. I am also in the process of slowly moving away from Mac OS X as my main platform. Since the Mac Pro is really new and nice, I have finally decided to use it for another platform. I am familiar with Linux and SUSE Linux. Ultimately I want to run some version of SUSE Linux (recommend one; it doesn't have to be free as in no money) and Xen. Here are the individual questions:

    - Which version of SUSE Linux should I use, and how do I install it on a Mac Pro? Note that the distribution must come with usable Xen. I am willing to pay.
    - I assume Xen will work on my computer (it has VT support etc.). Is my assumption correct?
    - I want to run Windows 7 and another instance of SUSE Linux under Xen. Is it possible to run Mac OS X Server under Xen (on a Mac Pro)?
    - Which email client under Linux supports IMAP and is best suited for integrating with MobileMe?
    - Does SUSE Linux support the ATI Radeon HD 4870 and the Apple Cinema Display's 1920 x 1200 resolution?
    - What else should I take into account?

    Read the article

  • Ping and crawling not working, site still resolving

    - by Andrew Alexander
    OK, so we're trying to figure out why the site of one of our clients isn't being crawled by Google (we've ruled out robots.txt and meta tags). When we go to the site, either by IP address or domain name, the site resolves and everything works. However, Google is getting a 302 redirect (which it apparently isn't following for crawling), and when we ping the address, it times out (note: the site is still resolving in the browser throughout all of this). The site is built in ASP.NET (I assume C#), so my thought was that it was an errant redirect rule, or some other sort of server-side issue. We also thought that it might be due to incorrect domain pointing (but if we try to ping the IP, it doesn't work, so that sort of rules that out). We're really not sure what is causing all of these errors, or even if they have one single source. Anyone have any ideas what could be going on? Do you need any more information? To boil it down to a TL;DR:

    * Site resolving in browser, both IP and domain name. No problems here.
    * Site not being crawled by Google (gets a 302 it doesn't seem to follow) - it is not due to robots.txt or meta tags.
    * Ping is not working for the IP address. This is very odd, because again, the IP address seems to work fine in the browser.
    * Our thoughts are either a redirect rule issue, a domain pointing issue, or possibly some errant code - or some combination of the three.
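    A hedged pair of checks that can separate the symptoms — see exactly which redirect a crawler-like request receives, and confirm whether ICMP is simply being dropped while TCP port 80 stays open (the hostname is a placeholder):

        curl -sIL -A "Googlebot" http://www.example.com/ | grep -iE "^HTTP/|^Location:"
        nmap -p 80 www.example.com          # or: telnet www.example.com 80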

    Read the article

  • Bare minimal Chef provisioning and deployment?

    - by Andrew McCloud
    I've read the documentation on Chef twice over. I still can't wrap my head around its concepts, because the docs skip the fundamentals and jump to complex deployments with chef-server. Using chef-solo and possibly knife, is there a simple way to provision a server and deploy? I may be wrong, but it seems like with my cookbooks prepped, this should be very simple: knife rackspace server create --flavor 1 --image 112 provisions my server. I can optionally pass --run-list "recipe[mything]", but how do my cookbooks in ~/my_cookbooks actually get on the server? Do I have to manually transfer them? That seems counterproductive.
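    A hedged outline of the bare chef-solo flow, with paths and names as placeholders: chef-solo does not pull cookbooks down for you, so they get copied up (rsync, scp, or a cookbook tarball URL), and then chef-solo is pointed at a solo.rb and a JSON run list:

        # from the workstation: push the cookbooks to the freshly provisioned server
        rsync -a ~/my_cookbooks/ user@server:/var/chef/cookbooks/

        # on the server: /var/chef/solo.rb contains
        #   cookbook_path "/var/chef/cookbooks"
        # and /var/chef/node.json contains
        #   { "run_list": ["recipe[mything]"] }

        sudo chef-solo -c /var/chef/solo.rb -j /var/chef/node.json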

    Read the article

  • Can I list file names (or their parent directories) that were recently deleted using rm in OS X?

    - by Andrew Grimm
    Is it possible to find out which files and directories have recently been deleted by rm in OS X? Or failing that, is it possible to find which parent directories have had files or directories within it deleted? The OS version is Snow Leopard. Background: Last night, rvm (ruby version manager) did rm -rf of the ~/ruby directory from the home directory. (This bug has since been fixed) Ideally, I'd like to know what files within the ~/ruby directory were deleted, but failing that, I'd like to know if rvm deleted anything outside of ~/ruby . In case anyone's wondering about backups...: Just about everything within ~/ruby is a git project that has a remote repo, and I have a fairly recent Time Machine backup (only 20 days old).
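    A hedged way to at least see what ~/ruby contained before the deletion is to list that directory inside the most recent Time Machine snapshot; the backup volume, machine, and user names below are placeholders for whatever Time Machine actually uses on this system:

        ls -R "/Volumes/Time Machine Backup/Backups.backupdb/<machine name>/Latest/Macintosh HD/Users/<user>/ruby" | less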

    Read the article
