Search Results

Search found 33445 results on 1338 pages for 'single instance storage'.

Page 1034 of 1338

  • MAMP Pro .xip.io, fixing URLs with .htaccess

    - by user3540018
    I've got all my websites set up with MAMP Pro. For instance, when I go to example.com, the browser displays the website that's set up on my iMac. Now I want to get MAMP Pro working so I can view my website on my other computers/devices (which are all hooked up to the same network). So far all I had to do was check the checkbox "via Xip.io (LAN only)", and now I can view my website on my other computers/devices within my LAN by simply going to example.com.10.0.1.13.xip.io.

    The problem is that whenever I'm on one of these other computers/devices and click a link, I get a 404 error; i.e. when I go to example.com/news I get the 404, but when I go to example.com.10.0.1.13.xip.io/news I get the right page. So to solve my problem I need to rewrite the URLs: whenever someone follows a link, i.e. goes straight to example.com/news, he should end up at example.com.10.0.1.13.xip.io/news. I don't want to change all the links in my MySQL dump, and I believe I can do it with the .htaccess file. I've opened the .htaccess file and added the last two lines below, but it just doesn't work.

        <IfModule mod_rewrite.c>
        RewriteEngine On

        # Send would-be 404 requests to Craft
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !^/(favicon\.ico|apple-touch-icon.*\.png)$ [NC]
        RewriteRule (.+) index.php?p=$1 [QSA,L]

        RewriteCond %{HTTP_HOST} ^example\.com
        RewriteRule ^(.*)$ http://www.example.com.10.0.1.13.xip.io/$1 [R=permanent,L]
        </IfModule>

    Or perhaps I don't need to change the .htaccess file at all: is there something that I could be missing in the MAMP Pro settings, or perhaps a MAMP extension that I need?
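
    A possible explanation, sketched below: the catch-all Craft rule carries [L], so the request is rewritten to index.php before the host check is ever reached. Moving the redirect above the catch-all, and using R=302 while testing so browsers don't cache it, would look roughly like this (hostnames copied from above; this is a sketch, not a verified fix):

        <IfModule mod_rewrite.c>
        RewriteEngine On

        # Redirect first: requests that arrive for the bare hostname go
        # to the xip.io name before any other rule can consume them.
        RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
        RewriteRule ^(.*)$ http://example.com.10.0.1.13.xip.io/$1 [R=302,L]

        # Then the usual catch-all: send would-be 404s to Craft.
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule (.+) index.php?p=$1 [QSA,L]
        </IfModule>

    Note this only helps on devices where example.com already resolves to the iMac; if the name doesn't resolve at all on the LAN devices, no server-side rewrite can fire.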

  • Running Debian as guest operating system on a Hyper-V VM

    - by kce
    Hello. Layer-9 considerations are prompting a migration from Citrix XenServer to Hyper-V as our shop's virtualization platform of choice. This will require me to migrate our existing virtual machines from XenServer to Hyper-V. A handful of these VMs are running Debian. Unfortunately, Debian does not seem to be on the list of approved/supported guest operating systems; in fact, it seems that running Debian as a guest operating system is rather difficult, although apparently not impossible. I have two interrelated questions:

    Does anyone have any experience running a Debian guest on Hyper-V? Is it one of those things where it just will not work at all, or is it more along the lines of "it will probably work fine, but we won't support it"? Any experience here, positive or negative, would be helpful.

    How much of a bad idea is it to deviate from Hyper-V's list of supported guest operating systems? Again, is it basically asking for Bad Things (TM) to happen, or just another instance of "it will probably work fine, but we won't support it"? Or is it somewhere in the middle? Thank you.

  • Long connection times from PHP to MySQL on EC2

    - by Erik Giberti
    I'm having an intermittent issue connecting to a database slave with InnoDB: connections intermittently take longer than 2 seconds. These servers are hosted on Amazon's EC2. The app server is PHP 5.2/Apache running on Ubuntu. The DB slave is running Percona's XtraDB 5.1 on Ubuntu 9.10, using an EBS RAID array for the data storage. We already use skip-name-resolve and bind to address 0.0.0.0. This is a stub of the PHP code that's failing:

        $tmp = mysqli_init();
        $start_time = microtime(true);
        $tmp->options(MYSQLI_OPT_CONNECT_TIMEOUT, 2);
        $tmp->real_connect($DB_SERVERS[$server]['server'],
                           $DB_SERVERS[$server]['username'],
                           $DB_SERVERS[$server]['password'],
                           $DB_SERVERS[$server]['schema'],
                           $DB_SERVERS[$server]['port']);
        if (mysqli_connect_errno()) {
            $timer = microtime(true) - $start_time;
            mail($errors_to, 'DB connection error', $timer);
        }

    There's more than 300MB available on the DB server for new connections, and the server is nowhere near the max allowed (60 of 1,200). Load on both servers is < 2 on 4-core m1.xlarge instances. Some highlights from the MySQL config:

        max_connections = 1200
        thread_stack = 512K
        thread_cache_size = 1024
        thread_concurrency = 16
        innodb-file-per-table
        innodb_additional_mem_pool_size = 16M
        innodb_buffer_pool_size = 13G

    Any help on tracing the source of the slowdown is appreciated.

    [EDIT] I have been updating the sysctl values for the network, but they don't seem to be fixing the problem. I made the following adjustments on both the database and application servers:

        net.ipv4.tcp_window_scaling = 1
        net.ipv4.tcp_sack = 0
        net.ipv4.tcp_timestamps = 0
        net.ipv4.tcp_fin_timeout = 20
        net.ipv4.tcp_keepalive_time = 180
        net.ipv4.tcp_max_syn_backlog = 1280
        net.ipv4.tcp_synack_retries = 1
        net.core.rmem_max = 16777216
        net.core.wmem_max = 16777216
        net.ipv4.tcp_rmem = 4096 87380 16777216
        net.ipv4.tcp_wmem = 4096 87380 16777216

    [EDIT] Per jaimieb's suggestion, I added some tracing and captured the following data using time. This server handles about 51 queries/second at this time of day. The connection error was raised once (at 13:06:36) during the 3-minute window outlined below. Since there was 1 failure and roughly 9,200 successful connections, I don't think this is going to produce anything meaningful in terms of reporting. Script:

        date >> /root/database_server.txt
        (time mysql -h database_server -D schema_name -u appuser -papppassword -e '') > /dev/null 2>> /root/database_server.txt

    Results:

        === Application Server 1 ===
        Mon Feb 22 13:05:01 EST 2010   real 0m0.008s  user 0m0.001s  sys 0m0.000s
        Mon Feb 22 13:06:01 EST 2010   real 0m0.007s  user 0m0.002s  sys 0m0.000s
        Mon Feb 22 13:07:01 EST 2010   real 0m0.008s  user 0m0.000s  sys 0m0.001s

        === Application Server 2 ===
        Mon Feb 22 13:05:01 EST 2010   real 0m0.009s  user 0m0.000s  sys 0m0.002s
        Mon Feb 22 13:06:01 EST 2010   real 0m0.009s  user 0m0.001s  sys 0m0.003s
        Mon Feb 22 13:07:01 EST 2010   real 0m0.008s  user 0m0.000s  sys 0m0.001s

        === Database Server ===
        Mon Feb 22 13:05:01 EST 2010   real 0m0.016s  user 0m0.000s  sys 0m0.010s
        Mon Feb 22 13:06:01 EST 2010   real 0m0.006s  user 0m0.010s  sys 0m0.000s
        Mon Feb 22 13:07:01 EST 2010   real 0m0.016s  user 0m0.000s  sys 0m0.010s

    [EDIT] Per a suggestion received on a LinkedIn question, I tried setting the back_log value higher. We had been running the default value (50) and increased it to 150. We also raised the kernel value /proc/sys/net/core/somaxconn (maximum socket connections) to 256 on both the application and database server from the default 128. We did see some elevation in processor utilization as a result, but still received connection timeouts.
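
    One more thing worth checking under the backlog theory (a sketch; the exact counter wording varies by kernel version): if the listen queue really is overflowing, the kernel counts it.

        # listen-queue overflows show up in the TCP/IP statistics
        netstat -s | egrep -i 'listen|overflow'

        # current limits the backlog is clamped against
        cat /proc/sys/net/core/somaxconn
        cat /proc/sys/net/ipv4/tcp_max_syn_backlog

    A rising "times the listen queue of a socket overflowed" count during the 2-second stalls would point at back_log/somaxconn; a flat count pushes suspicion back toward EBS or the PHP side.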

  • Ubuntu 64bit Xen DomU Issues after upgrade from Karmic to Lucid

    - by Shoaibi
    I was upgrading my servers today and it all went fine, except the last machine, which had the following issues.

    No login prompt on console [resolved using http://www.ndchost.com/wiki/server-administration/upgrade-ubuntu-pre-10.04#post-1004-upgradefinal-steps]; boot stopped after:

        Done.
        Begin: Mounting root file system... ...
        Begin: Running /scripts/local-top ... Done.
        [ 0.545705] blkfront: xvda: barriers enabled
        [ 0.546949] xvda: xvda1
        [ 0.549961] blkfront: xvde: barriers enabled
        [ 0.550619] xvde: xvde1 xvde2
        Begin: Running /scripts/local-premount ... Done.
        [ 0.870385] kjournald starting. Commit interval 5 seconds
        [ 0.870449] EXT3-fs: mounted filesystem with ordered data mode.
        Begin: Running /scripts/local-bottom ... Done.
        Done.
        Begin: Running /scripts/init-bottom ... Done.

    I also tried pressing ENTER and CTRL+C many times; no use.

    Errors when I tried to re-install udev in single-user mode [resolved: /tmp was mounted noexec; changing that fixed it]:

        Unpacking replacement udev ...
        Processing triggers for ureadahead ...
        ureadahead will be reprofiled on next reboot
        Processing triggers for man-db ...
        Setting up udev (151-12.1) ...
        udev start/running, process 1003
        Removing `local diversion of /sbin/udevadm to /sbin/udevadm.upgrade'
        update-initramfs: deferring update (trigger activated)
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-2.6.32-25-server
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-premount/fixrtc: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-premount/ntfs_3g: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-premount/resume: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/nfs-top/udev: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/panic/console_setup: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-top/all_generic_ide: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-top/blacklist: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-top/udev: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-bottom/udev: Permission denied
        /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-bottom/ntfs_3g: Permission denied
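
    For reference, a sketch of the noexec workaround implied by the resolution note above (the kernel version is taken from the log; the remount options are assumptions):

        # mkinitramfs unpacks helper scripts under /tmp and executes them,
        # which fails while /tmp is mounted noexec
        mount -o remount,exec /tmp
        update-initramfs -u -k 2.6.32-25-server
        mount -o remount,noexec /tmp   # restore the original mount options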

  • Scheduling Automatic Backups for Virtual Private Web Server running CentOS 6.3 and WHM

    - by Oliver Farrell
    I'm pretty new to administering my own VPS, but thus far am finding it quite a compelling experience. There's something quite refreshing about having complete control over everything it does. One thing that I would like to look at is a suitable backup solution (a few times a day). My current setup is as follows: I'm running a CentOS 6.3 VPS with a single 25GB hard drive solely for the purpose of hosting websites, and I'm using WHM & cPanel for administering them. I now plan on adding an additional hard disk and hooking it up to my VPS. What I'm not sure about is how I get the two disks talking and get the backup process going. I'm not a seasoned SSH-er, so I don't really know where to start. I'm hosting with Serverlove (one of the best hosting providers I've used) and am provided with a number of unique identifiers for each hard disk, so I imagine these may play a part in linking them together. I appreciate that this is a little vague (I'm clutching at straws), but any assistance is very much appreciated.
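
    A sketch of the usual first steps, assuming the new disk appears as /dev/sdb (check the actual device name with fdisk -l before running anything; mkfs is destructive):

        # format the new disk and mount it where backups will live
        mkfs.ext4 /dev/sdb
        mkdir -p /backup
        echo '/dev/sdb /backup ext4 defaults 0 2' >> /etc/fstab
        mount /backup

    With the disk mounted, WHM's built-in backup (Backup Configuration in the WHM interface) can be pointed at /backup and scheduled, which avoids hand-rolled SSH scripting for the routine runs.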

  • Can't pin modified shortcuts to the Windows 7 task bar

    - by Coder
    I have a shortcut to a .bat file which I pin to the task bar using a workaround (using another icon), and this seems to work. Now I make a copy of that shortcut, point it to a different .bat file, rename it, and I can't pin this one to the task bar. I have to find some other new unused icon to pin, pin it, then modify it manually. The other problem this causes is that Windows seems to track which icons were pinned even if they are modified after the fact. As such, if I use Windows Media Player as my dummy icon, pin it, then alter its name and shortcut to point to a .bat file, I can't re-pin Windows Media Player, and if I select unpin from Windows Media Player, it unpins my shortcut to my .bat file. I can't believe how ridiculous this is. Is there a way to pin anything I want to the taskbar (i.e. a .bat file in my case) that does not cause problems like this? Is there an easy way I can copy an existing shortcut, modify it, and re-pin it to the taskbar? The reason I want to copy it is because I start a .bat file (in particular git bash) and I set properties on the window like Quick Edit, an increased screen buffer, and its position and size manually. I don't want to have to do this for every single icon I want to pin, since they will be identical aside from the shortcut URL.

  • System freezes during boot process

    - by slugster
    Hi everyone, I have a machine running Win7 Ultimate. It was running fine, then it just froze: all the stuff I was doing was still on the screen, but mouse and keyboard input was ignored, any animation happening on the screen stopped, the machine literally just froze. So I rebooted (power-off button). From then on the machine will reboot, but it ultimately freezes again. The point at which this happens varies: I have made it as far as the Windows login screen, but mostly it will do the POST, then give me the option to press F1 to continue or Del to enter BIOS settings (but of course pressing a key has no effect, it's frozen!). I have disconnected everything not necessary for the boot process; the only peripheral that remains attached is the keyboard (even the network cable is disconnected). Prior to this the machine was operating fine. The install of Win7 is only 2 days old, and it was a fresh reinstall (i.e. not an upgrade or repair). Can anyone give me an indication of what may be wrong here? I'm not sure if this question should be here or on SuperUser; please migrate it if I have chosen the wrong board.

  • Linux apache developing configuration

    - by Jeffrey Vandenborne
    I recently reinstalled my system and came to a point where I need Apache and PHP. I've been searching a long time, but I can't figure out how to configure Apache the best way for a developer machine. The plan is simple: I want to install Apache 2 plus a MySQL server so I can develop some PHP websites. I don't want to install LAMP as a bundle, though, just apache2, php5 and mysql-server. The problem that I've been looking for an answer to is the permissions on the /var/www/ folder. I've tried making it my own folder using the chown command, followed by chmod -R 755 /var/www. Most things work then, but fwrite for example won't work, because that would require write permissions for everyone; short of changing my global umask to 000, I'm not sure what I can do. In short: I want to install apache2, php5 and mysql-server without using LAMP, configured in a way that lets me open up NetBeans, start a project rooted in /var/www/, and run every single function without permission faults. Does anyone have experience with, or workarounds for, this? Extra: OS: Ubuntu 10.04, arch: x86_64.
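
    A common way out of this, sketched below (the username "jeffrey" is a placeholder, and www-data as Apache's group is the assumption for a stock Ubuntu 10.04 install):

        # put your user and Apache in the same group, and make new files
        # inherit that group, so nothing needs to be world-writable
        sudo adduser jeffrey www-data
        sudo chown -R jeffrey:www-data /var/www
        sudo chmod -R 2775 /var/www    # the setgid bit keeps the group on new files

    PHP's fwrite then works anywhere the www-data group can write, and NetBeans can create projects under /var/www as your own user, with the global umask left alone.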

  • How is a subdomain passed to the webserver?

    - by Joshua Frank
    I know that DNS resolves an address like example.com to an IP address like 11.22.33.44, but I'm a little confused about how subdomains are resolved, so that when you type http://subdomain.example.com, what actually gets passed to the server at 11.22.33.44? In other words, example.com = 11.22.33.44, but subdomain.example.com/path = ??? Are "subdomain" and "path" passed as HTTP headers, or mapped in the URL in some way, or what? Thanks in advance.

    Edit: If I'm understanding correctly, BloodPhilia says that subdomain.example.com actually is a different domain that in principle could resolve to a totally different IP. But if that's so, then what about hosts that have huge numbers of (what look like) subdomains, but which actually map to some path on the site? For instance, blogspot hosts millions of blogs, and they all look like this:

        aaa.blogspot.com
        bbb.blogspot.com
        ...millions more...
        yyy.blogspot.com
        zzz.blogspot.com

    Those are clearly not subdomains with their own IPs, but rather some mapping like aaa.blogspot.com maps to www.blogspot.com/aaa. How is this accomplished? What actually gets passed to the web server at blogspot.com?
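
    What actually travels to the server can be sketched in two lines. DNS only ever resolves the full hostname (a host like blogspot publishes a wildcard record, so *.blogspot.com resolves to the same address), and both the hostname and the path are then carried inside the HTTP request itself (the path below is illustrative):

        GET /2010/05/some-post.html HTTP/1.1
        Host: aaa.blogspot.com

    The web server reads the Host header to pick the right site (name-based virtual hosting), or, as in the blogspot case, a single application looks up "aaa" and serves that blog's content. No per-subdomain IP is needed.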

  • 502 Bad Gateway error after failed requests using Passenger

    - by Nicolas Buduroi
    I've got a staging server running nginx 1.0.5 with Rails 3.1 under Passenger 3.0.9. The problem is that a request sent just after one where there's an application error returns 502 Bad Gateway. To test it I've set up a simple controller with an action that just raises a dummy exception. One request will show the Rails error message and the next one will show nginx's 502 Bad Gateway error; then it goes back to the Rails application error, and so on. While investigating this problem I've found out that load testing the application (which increases the number of application processes) makes the issue disappear; that is, until the extra processes are shut down, when it reappears. I've tried setting the passenger_min_instances option, but doing so doesn't change anything: in this case, each time an application error happens one instance is killed, whereas after load testing all instances are kept alive. P.S.: Some people on my team told me that they've seen the 502 error even when there's no application error, but I've not been able to reproduce that. Update: Just found out how to display the response status codes using ab, and most of them are 502!

  • When did my LaTeX files become TeX files!?

    - by andrz_001
    After transferring all my files onto a new machine, all files that were once LaTeX files (having the TeXnicCenter "T" icon) are now TeX files (having the TeXworks icon; I don't remember installing that one!). But the files are still associated with TeXnicCenter! In other words, the files open with TeXnicCenter, yet the "type of file" is "TeX document". I'm using the same MiKTeX distribution and TeXnicCenter (both for Windows XP) as on the old machine. Again, I don't know how/where I got this TeXworks on the new machine; I'm assuming it came with Windows. A question: could these TeX associations cause me any new surprises, anything unexpected, like certain packages not working? Because I can't have that!! Should I not fix it if it ain't broken!? For instance, one particular file yesterday would not produce any PDF output after compiling. After trying many things (other files compiled as they should have), I got to thinking to try it without the setspace package for double spacing, and it worked. I have no idea why. Anyway...

    Real question: how do I revert to the LaTeX file associations?

    Rhetorical question: would this question have better luck on CTAN or Stack Overflow? I guess I'll find out! Thank you wholeheartedly!
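
    For what it's worth, file associations on Windows XP can be inspected and reset from a command prompt (a sketch: the ProgID names and the TeXnicCenter install path are assumptions, so check what assoc/ftype report on the old machine first):

        rem what .tex currently maps to, and what that type runs
        assoc .tex
        ftype TeXworks.Document

        rem point .tex at a TeXnicCenter-backed type instead
        assoc .tex=LaTeX.Document
        ftype LaTeX.Document="C:\Program Files\TeXnicCenter\TEXCNTR.EXE" "%1"

    The association only controls the icon and double-click behaviour; it has no effect on how MiKTeX compiles the file, so the setspace incident is almost certainly a separate issue.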

  • Virtual machines with failover setup

    - by kimmmo
    We have three servers, and our plan is to run a number of virtual machines on them in such a manner that if one of the nodes blows up, we can either quickly or seamlessly get a spare running on another node. In addition to the normal networking, they're interconnected via dual 10Gbit NICs, so networked RAID/mirroring shouldn't be a problem. The guest VMs are mostly going to be running text-mode Linux, but of course it wouldn't hurt to be able to spin up a non-mission-critical Windows guest for running Visual Studio or checking IE compatibility of a web app. We've spent some time trying to get some magical cloud setup running using Stackops and Crowbar, but it started to look like they were offering way too much and were too complicated for our needs. The next candidate, I think, is Ubuntu 11.04 server + KVM + Ganeti + DRBD, unless you can come up with a suggestion for a better solution that we have missed. Requirements:

        - Installation should be simple, or at least understandable without being on the dev team
        - A browser interface for creating and managing VMs is a nice bonus
        - A single node's hardware failure should cause minimal downtime for VMs that were running on that node
        - Adding more nodes should be possible without shutting down the VMs
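
    If the Ganeti + DRBD route wins, the day-to-day flow looks roughly like the following (a sketch based on the Ganeti 2.x tools; node names, the OS definition and sizes are illustrative):

        # once, on the master node:
        gnt-cluster init --enabled-hypervisors=kvm cluster.example.com
        gnt-node add node2.example.com

        # per guest: the drbd disk template keeps a live mirror on a second node
        gnt-instance add -t drbd -o debootstrap+default -s 20G \
            -n node1.example.com:node2.example.com vm1.example.com

        # node maintenance or failure:
        gnt-instance migrate vm1.example.com    # live-migrate while both nodes are up
        gnt-instance failover vm1.example.com   # restart on the secondary after a crash

    On the browser-interface requirement, Ganeti itself is CLI/RAPI only, so a web front-end would have to come from a separate project.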

  • Building a Web proxy to get around same-origin restrictions for collaborative Webapp based on a MEAN stack

    - by Lew Cohen
    Can anyone point to books, articles, blogs, or even applications - open-source or proprietary - that detail building a Web proxy? This specific proxy will exist to get around the same-origin restrictions that prevent, for instance, loading a given Website into an <iframe> in a Webapp. This Webapp is a collaborative application in which a group of users log in to the app's Website and can then load different Websites into this app's <iframe> and do various collaborative things (e.g., several users simultaneously browsing a Website, in synch). The Webapp itself is built on a MEAN stack (MongoDB, Express, AngularJS, and Node.js). The purpose of this proxy is not to do anonymous browsing or to bypass censorship. Information on how to build such a vehicle seems not to be readily available from my research. I've come across Glype but am not sure whether this is a feasible solution. I don't want to reinvent the wheel, so if a product is available for purchase, great. Else, we'd need to build one. The one that seems to be close is http://www.corsproxy.com. In effect, we'd like to re-create this since it evidently does what's needed. I don't care what server-side technology is used. Our app is MEAN-based, if that has any bearing. Also, the proxy has to obviously honor basic security considerations (user cookies, etc.) and eventually be scalable. So, anyone know of any sources that would detail how to build one of these? Is it even worth building if something already exists? If so, what would be a good candidate? Any other issues that should be considered with this proxy/application? Thanks a lot!
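
    To make the scope concrete: the relay part of such a proxy is small, and what Glype and corsproxy.com add on top is rewriting links, cookies and redirects in the returned HTML. A minimal sketch of the relay in the app's own stack (Express plus the request package are assumptions; no rewriting, filtering or auth):

        var express = require('express');
        var request = require('request');   // npm install request
        var app = express();

        // GET /proxy?url=http://example.com/some/page
        app.get('/proxy', function (req, res) {
            // stream the upstream response straight back to the client,
            // so the browser sees it as same-origin content from this app
            req.pipe(request(req.query.url)).pipe(res);
        });

        app.listen(3000);

    Everything hard about the product (URL rewriting inside the HTML, cookie scoping per user, security against open-proxy abuse) lives above this relay, which is why evaluating corsproxy.com or Glype first is probably the right instinct.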

  • XenServer Converting HVM to Paravirtualised

    - by Karl Kloppenborg
    Recently I have been tasked with the daunting process of converting a setup of HVM-enabled VMs (running on Citrix XenServer 5.6.0) into PV (paravirtualised) containers. The constraints of the project were:

        - The operating system must be functionally identical after the migration.
        - Minimal modification to the operating system (with the exception of kernel / drive mapping).
        - I was allowed to change the bootloader (i.e. grub) in whatever way I see fit.

    I have attempted this; first, let me show you the steps I took (CentOS 5.5 specific at the moment):

        yum install kernel-xen

    This installed 2.6.18-194.32.1.el5xen. I edited /boot/grub/menu.lst to match:

        title CentOS (2.6.18-194.32.1.el5xen)
                root (hd0,0)
                kernel /vmlinuz-2.6.18-194.32.1.el5xen ro root=/dev/VolGroup00/LogVol00 console=xvc0
                initrd /initrd-2.6.18-194.32.1.el5xen.img

    Then I changed my XenServer parameters to match:

        xe vm-param-set uuid=[vm uuid] PV-bootloader-args="--kernel /vmlinuz-2.6.18-194.32.1.el5xen --ramdisk /initrd-2.6.18-194.32.1.el5xen.img"
        xe vm-param-set uuid=[vm uuid] HVM-boot-policy=""
        xe vm-param-set uuid=[vm uuid] PV-bootloader=pygrub
        xe vbd-param-set uuid=[Virtual Block Device/VBD uuid] bootable=true

    Some things to note: I am running a VolGroup LVM ;) Anyway, after all these steps (which aren't much!) I boot the VM and it boots the initial kernel just fine, but then I am presented with this error:

        device-mapper: dm-raid45: initialized v0.2594l
        Waiting for driver initialization.
        Scanning and configuring dmraid supported devices
        Scanning logical volumes
        Reading all physical volumes. This may take a while...
        Activating logical volumes
        Volume group "VolGroup00" not found
        Creating root device.
        Mounting root filesystem.
        mount: could not find filesystem '/dev/root'
        Setting up other filesystems.
        Setting up new root fs
        setuproot: moving /dev failed: No such file or directory
        no fstab.sys, mounting internal defaults
        setuproot: error mounting /proc: No such file or directory
        setuproot: error mounting /sys: No such file or directory
        Switching to new root and running init.
        unmounting old /dev
        unmounting old /proc
        unmounting old /sys
        switchroot: mount failed: No such file or directory

    My hint is that it cannot detect / because switching from HVM mode to PV does something not so obvious: when you attach storage (an SR) to an HVM guest, it shows up inside the guest OS as /dev/hda, but in PV mode it presents itself as /dev/xvda. Could this be the answer? And if so, how do I implement it?

    Update: I have gotten a bit further in my quest, as it now detects the LVMs. To do this I had to recompile the xen-kernel initrd image:

        mkinitrd -v --builtin=xen_vbd --preload=xenblk initrd-2.6.18-194.32.1.el5xen.img 2.6.18-194.32.1.el5xen

    Now when I boot I get this:

        Loading dm-raid45.ko module
        device-mapper: dm-raid45: initialized v0.2594l
        Scanning and configuring dmraid supported devices
        Scanning logical volumes
        Reading all physical volumes. This may take a while...
        Found volume group "VolGroup00" using metadata type lvm2
        Activating logical volumes
        3 logical volume(s) in volume group "VolGroup00" now active
        Creating root device.
        Mounting root filesystem.
        mount: error mounting /dev/root on /sysroot as ext3: Device or resource busy
        Setting up other filesystems.
        Setting up new root fs
        setuproot: moving /dev failed: No such file or directory
        no fstab.sys, mounting internal defaults
        setuproot: error mounting /proc: No such file or directory
        setuproot: error mounting /sys: No such file or directory
        Switching to new root and running init.
        unmounting old /dev
        unmounting old /proc
        unmounting old /sys
        switchroot: mount failed: No such file or directory
        Kernel panic - not syncing: Attempted to kill init!
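
    Following the hda/xvda hint above, a sketch of what to audit from the still-bootable HVM side before flipping the boot policy (the device names are exactly the assumption being tested):

        # find everything that still names the emulated IDE device
        grep -r hda /etc/fstab /boot/grub/menu.lst

        # rewrite hda -> xvda where it appears, e.g. for fstab:
        sed -i.bak 's|/dev/hda|/dev/xvda|g' /etc/fstab

    The LVM root (/dev/VolGroup00/LogVol00) is located by scanning, so it survives the rename; only raw /dev/hdaN references need touching.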

  • What are the possible disadvantages of enabling the "data access" server option in sys.servers for the local server?

    - by Corp. Hicks
    We plan to change the default server options of an SQL2k5 server instance by enabling data access. The reason is that we want to run "SELECT * FROM OPENQUERY(LOCALSERVER, '...')"-like statements on the server. What are the possible disadvantages of enabling the server option "data access" (alias sys.servers.is_data_access_enabled) for the local server (sys.servers.server_id = 0)? (There must be a reason for MS setting this option to disabled by default...)

    EDIT: it turns out that I'm not the first person to ask this question: http://sqlblogcasts.com/blogs/piotr_rodak/archive/2009/11/22/data-access-setting-on-local-server.aspx

        "The DATA ACCESS server option is not very well documented in my opinion - the Books On Line say it is a property of linked servers. It doesn't mention at all that you actually can have it enabled on your local server to enable OPENQUERY calls. I noticed that when you disable DATA ACCESS on a linked server, you can't query any table located on it (I tested it on my loopback server) neither using OPENQUERY nor four-part naming convention. You can still call procedures (with four-part naming) that return rowsets. Well, the interesting question is why it is disabled by default on local server - I suppose to discourage users from using OPENQUERY against it."

    It also seems that the author of the post (Piotr Rodak) is a Stack Overflow user :-)
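
    For reference, the toggle and the kind of statement it unlocks look like this (a sketch; the server entry name and the queried table are placeholders):

        -- enable "data access" on the local server entry (server_id = 0)
        EXEC sp_serveroption @server = 'LOCALSERVER',
                             @optname = 'data access',
                             @optvalue = 'true';

        -- loopback query that requires the option to be enabled
        SELECT * FROM OPENQUERY(LOCALSERVER, 'SELECT TOP 10 * FROM mydb.dbo.mytable');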

  • OpenVPN with MacOS X Client and same subnets in local and remote net.

    - by Daniel
    I have a home network 192.168.1.0/24 with gateway 192.168.1.1, and a remote network with the same parameters. Now I want to create an OpenVPN tunnel between those networks. I have no problems with Windows, because Windows routes everything in 192.168.1.0/24 except 192.168.1.1 through the tunnel. On Mac OS X, however, I see the following line in the Details window:

        2010-05-10 09:13:01 WARNING: potential route subnet conflict between local LAN [192.168.1.0/255.255.255.0] and remote VPN [192.168.1.0/255.255.255.0]

    When I list the routes I get the following:

        Internet:
        Destination        Gateway            Flags   Refs    Use  Netif  Expire
        default            192.168.1.1        UGSc      13      3  en1
        127                localhost          UCS        0      0  lo0
        localhost          localhost          UH        12   3589  lo0
        169.254            link#5             UCS        0      0  en1
        192.168.1          link#5             UCS        1      0  en1
        192.168.1.1        0:1e:e5:f4:ec:7f   UHLW      13     17  en1    1103
        192.168.1.101      localhost          UHS        0      0  lo0
        192.168.6          192.168.6.5        UGSc       0      0  tun0
        192.168.6.5        192.168.6.6        UH         1      0  tun0

    My interfaces are en1 (my local WiFi network) and tun0 (the tunnel interface). As can be seen from the routes above, there is no entry for 192.168.1.0/24 that routes the traffic through the tunnel interface. When I manually route a single IP like 192.168.1.16 over the tunnel gateway 192.168.6.6, this works.

    Q: How do I set up my routes in Mac OS X for the same behaviour as on Windows: route everything except 192.168.1.1 through the tunnel, but leave the default gateway as my local 192.168.1.1?
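
    One way to get the Windows-like behaviour by hand, sketched below; it relies on longest-prefix matching, and the gateway addresses are taken from the routes above:

        # two /25 routes together cover 192.168.1.0/24 and are more specific
        # than the connected /24, so they win and pull traffic into the tunnel
        sudo route -n add -net 192.168.1.0/25   192.168.6.6
        sudo route -n add -net 192.168.1.128/25 192.168.6.6

        # ...but keep the local gateway reachable directly: a host route
        # is more specific than either /25
        sudo route -n add -host 192.168.1.1 -interface en1

    The default route stays on 192.168.1.1 via en1 untouched; only the 192.168.1.x hosts (minus the gateway itself) are diverted into tun0.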

  • Does btrfs balance also defragment files?

    - by pauldoo
    When I run btrfs filesystem balance, does this implicitly defragment files? I could imagine that balance simply reallocates each file extent separately, preserving the existing fragmentation. There is an FAQ entry, 'What does "balance" do?', which is unclear on this point:

        btrfs filesystem balance is an operation which simply takes all of the data and metadata on the filesystem, and re-writes it in a different place on the disks, passing it through the allocator algorithm on the way. It was originally designed for multi-device filesystems, to spread data more evenly across the devices (i.e. to "balance" their usage). This is particularly useful when adding new devices to a nearly-full filesystem. Due to the way that balance works, it also has some useful side-effects:

        - If there is a lot of allocated but unused data or metadata chunks, a balance may reclaim some of that allocated space. This is the main reason for running a balance on a single-device filesystem.

        - On a filesystem with damaged replication (e.g. a RAID-1 FS with a dead and removed disk), it will force the FS to rebuild the missing copy of the data on one of the currently active devices, restoring the RAID-1 capability of the filesystem.
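
    For comparison, defragmentation has its own subcommand, separate from balance (a sketch; the -r flag for recursing into a directory tree exists in newer btrfs-progs, and the mount point is a placeholder):

        # rewrite fragmented extents contiguously; balance makes no such promise
        btrfs filesystem defragment -r /mnt/data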

  • How to setup Python with Lighttpd and FastCGI (like PHP)

    - by johndir
    Running lighttpd on Linux, I would like to be able to execute Python scripts the way I execute PHP scripts. The goal is to be able to execute arbitrary script files stored in the WWW directory, e.g. http://www.example.com/*.py. I would not like to spawn a new Python instance (interpreter) for every request (as done in regular CGI, if I'm not mistaken), which is why I'm using FastCGI. Following lighttpd's documentation, the following is the FastCGI part of my config file. The problem is that it always runs the /usr/local/bin/python-fcgi script for every *.py file, regardless of the content of that file: requesting http://www.example.com/script.py outputs "python-fcgi: test" whatever script.py contains. I'm not interested in using any framework, just in executing individual [web] scripts. How can I make it act like PHP, executing any script in the WWW directory by requesting its path?

    /etc/lighttpd/conf.d/fastcgi.conf:

        server.modules += ( "mod_fastcgi" )
        index-file.names += ( "index.php" )
        fastcgi.server = (
            ".php" => (
                "localhost" => (
                    "bin-path" => "/usr/bin/php-cgi",
                    "socket" => "/var/run/lighttpd/php-fastcgi.sock",
                    "max-procs" => 4, # default value
                    "bin-environment" => (
                        "PHP_FCGI_CHILDREN" => "1", # default value
                    ),
                    "broken-scriptfilename" => "enable"
                )
            ),
            ".py" => (
                "python-fcgi" => (
                    "socket" => "/var/run/lighttpd/fastcgi.python.socket",
                    "bin-path" => "/usr/local/bin/python-fcgi",
                    "check-local" => "disable",
                    "max-procs" => 1,
                )
            )
        )

    /usr/local/bin/python-fcgi:

        #!/usr/bin/python2
        def myapp(environ, start_response):
            start_response('200 OK', [('Content-Type', 'text/plain')])
            return ['python-fcgi: test\n']

        if __name__ == '__main__':
            from flup.server.fcgi import WSGIServer
            WSGIServer(myapp).run()
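
    The handler above always returns its own hard-coded response; one persistent process per extension is simply how mod_fastcgi works. A dispatching handler along these lines should get the PHP-like behaviour (a sketch, untested: it trusts lighttpd's SCRIPT_FILENAME, does no caching or sandboxing, and assumes each .py file defines an application(environ, start_response) WSGI callable, which is an invented convention):

        #!/usr/bin/python2
        # /usr/local/bin/python-fcgi: run whichever script was requested
        import imp

        def dispatch(environ, start_response):
            path = environ['SCRIPT_FILENAME']          # filled in by mod_fastcgi
            module = imp.load_source('handler', path)  # (re)load the target script
            return module.application(environ, start_response)

        if __name__ == '__main__':
            from flup.server.fcgi import WSGIServer
            WSGIServer(dispatch).run()

    The interpreter stays resident (the FastCGI goal), while the per-request work is only loading the small target script.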

  • How to configure mod_proxy_balancer to gracefully fail under high load

    - by bramp
    We have a system which has one Apache instance in front of multiple Tomcats. These Tomcats then connect to various databases. We balance the load to the Tomcats with mod_proxy_balancer. Currently we are receiving 100 requests a second; the load on the Apache server is quite low, but due to database-heavy operations on the Tomcats, the load there is roughly 25% (of what I estimate they can handle). In a few weeks there is an event happening, and we estimate that our requests will jump significantly, maybe by a factor of 10. I'm doing everything I can to reduce the load on our Tomcats, but I know we are going to run out of capacity, so I would like to fail gracefully. By this I mean: instead of trying to deal with too many connections which all time out, I would like Apache to somehow monitor average response time, and as soon as the response time to Tomcat gets above some threshold, display an error page. This means that users who are lucky still get a page rendered quickly, and those who are unlucky get an error page quickly, instead of everyone waiting far too long for their page, eventually everyone timing out, and the database being swamped with queries whose results are never used. Hopefully this makes sense, so I was looking for suggestions on how I could achieve this. Thanks.
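
    As far as I know mod_proxy_balancer has no built-in response-time trigger, but aggressive per-worker timeouts plus a static error page approximate the behaviour (a sketch; the member addresses and the numbers are illustrative):

        # give up on a worker after 10s instead of queueing indefinitely
        <Proxy balancer://tomcats>
            BalancerMember http://10.0.0.11:8080 timeout=10 retry=30
            BalancerMember http://10.0.0.12:8080 timeout=10 retry=30
        </Proxy>
        ProxyPass /app balancer://tomcats/ timeout=10

        # when the proxy gives up it returns 502/503/504; serve those locally
        ErrorDocument 502 /overloaded.html
        ErrorDocument 503 /overloaded.html
        ErrorDocument 504 /overloaded.html

    The unlucky requests then fail in roughly 10 seconds with a friendly page rather than holding a Tomcat thread and a database connection to the bitter end.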

  • Apache multiple vhost logs, stored locally and sent to remote logstash

    - by benbradley
    I'm investigating centralised logging, and it seems there are so many different ways this can be done. I don't want to run logstash as a log "sender", preferring to keep the web servers as lean and simple as possible. So that means either using syslog, syslog-ng or, the one I'm testing now, rsyslog. But I would like to have separate vhost log files on the web server, in addition to these logs being sent to a remote log collector. I've tested rsyslog using the imfile module to watch the Apache log files, but this means I have to hard-code each vhost log file into my rsyslog.conf. Not ideal, as people will invariably forget when they add/remove sites on the server. The reason I'm using rsyslog's imfile is that Apache doesn't appear to let you log to both a file and syslog, and I want to keep vhost-specific log files on the web server. So how can I do this? Is there a way of having rsyslog produce local log files and forward the logs to a remote collector? I am prepared to change my Apache config to log to a single access/error log for all vhosts, so long as there are vhost-specific log files produced somewhere on the web server machine. I just don't want to lose any logging info if the remote log collector can't be contacted for any reason. Any comments/suggestions? Cheers, B
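
    One trick that would avoid imfile entirely (a sketch; the log path, tag and local6 facility are assumptions): Apache can log to a pipe, so each vhost can keep its own file and feed syslog in the same directive.

        # inside each <VirtualHost>: tee keeps the per-vhost file on disk,
        # logger hands the same lines to rsyslog for forwarding
        CustomLog "|/usr/bin/tee -a /var/log/apache2/site1-access.log | /usr/bin/logger -t site1 -p local6.info" combined

    On the rsyslog side, a single local6.* rule pointing at the collector then covers every vhost, and rsyslog's action queues can buffer while the collector is unreachable, which addresses the don't-lose-logs concern.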

  • While loop read multiple lines from a grep

    - by Basil
    I'm writing a script on AIX 5.3 that will loop through the output of a df and check each volume against another config file. If the volume appears in the config file, it will set a flag which is needed later in the script. If my config file only has a single column and I use a for loop, this works perfectly. My problem, however, is that if I use a while read loop to populate more than one variable per line, any variables I set between the while and the done are discarded. For example, assuming the contents of /netapp/conf/ExcludeFile.conf are a bunch of lines containing two fields each:

        volName="myVolume"
        utilization=70
        thresholdFlag=0
        grep "$volName" /netapp/conf/ExcludeFile.conf | while read vol threshold; do
            if [ $utilization -ge $threshold ]; then
                thresholdFlag=1
            fi
        done
        echo "$thresholdFlag"

    In this example, thresholdFlag will always be 0, even if the volume appears in the file and its utilization is greater than the threshold. I could add an echo "setting thresholdFlag to 1" in there, see the echo, and it'll still echo a 0 at the end. Is there a clean way to do this? I think my while loop is being run in a subshell, and changes I make to variables in there are actually being made to local variables that are discarded after the done.
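
    The diagnosis at the end is right for bash and most POSIX shells: each side of a pipeline runs in a subshell, so assignments inside the loop die with it. (ksh88, AIX's /bin/sh, actually runs the last pipeline component in the current shell, so which shell runs the script matters here.) A portable rearrangement that keeps the loop in the current shell, sketched under the same file layout:

        # feed the loop from a temp file instead of a pipe, so the
        # assignments inside survive past the done
        tmpf=/tmp/exclude.$$
        grep "$volName" /netapp/conf/ExcludeFile.conf > "$tmpf"
        while read vol threshold; do
            if [ "$utilization" -ge "$threshold" ]; then
                thresholdFlag=1
            fi
        done < "$tmpf"
        rm -f "$tmpf"
        echo "$thresholdFlag"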

  • RAID6 mdraid -> LVM -> EXT4 root with GRUB2?

    - by Rotonen
    2012-03-31 Debian Wheezy daily build in VirtualBox 4.1.2, 6 disk devices. My steps to reproduce so far:

        - Set up one partition per disk, using the entire disk, as a physical volume for RAID
        - Set up a single RAID6 mdraid array out of all of those
        - Use the resulting md0 as the only physical volume for the volume group
        - Set up logical volumes, filesystems and mount points as you wish
        - Install the system

    Both / and /boot will be in this stack. I've chosen EXT4 as my filesystem for this setup. I can get as far as the GRUB2 rescue console, which can see the mdraid, the volume group and the LVM logical volumes (all named appropriately on all levels) on it, but I cannot ls the filesystem contents of any of those and I cannot boot from them. As far as I can see from the documentation, the version of GRUB2 shipped there should handle all of this gracefully: http://packages.debian.org/wheezy/grub-pc (1.99-17 at the time of writing). It is loading the ext2, raid, raid6rec, dosmbr (this one is in the list of modules once per disk) and lvm modules according to the generated grub.cfg file. It also defines the list of modules to be loaded twice in the generated grub.cfg file, and according to quick Googling around this seems to be the norm and OK for GRUB2. How do I get further, by getting GRUB2 to actually be able to read the contents of the filesystems and boot the system? What am I wrong about in my assumptions of functionality here?

    EDIT (2012-04-01): My generated grub.cfg: http://pastie.org/3708436 It seems it first makes my /usr logical volume the root, and that might be the source of the failure? A grub-mkconfig bug? Or is it supposed to get access to stuff from /usr before / and /boot? /boot is on / for me; there is no separate boot logical volume.
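
    One way to narrow down where it stops, from the rescue console itself (a sketch; the (lvm/...) names follow GRUB2's lvm/<vg>-<lv> convention and are placeholders for whatever ls actually reports):

        grub rescue> insmod lvm
        grub rescue> ls
        (hd0) (hd0,msdos1) ... (md/0) (lvm/vg0-root)
        grub rescue> insmod ext2              # GRUB2's ext2 driver also reads ext4
        grub rescue> ls (lvm/vg0-root)/       # failure here implicates the fs layer;
        grub rescue>                          # success implicates grub.cfg's root choice

    If ls of the volume works once the modules are loaded by hand, the generated grub.cfg (with its /usr-first set root, as noted in the edit) becomes the prime suspect rather than the RAID6/LVM stack itself.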

  • Remote search system for samba shares

    - by fostandy
    I have several shares residing on a Samba server in a small business environment that I would like to provide search facilities for. Ideally this would be something like Google Desktop with some extra features (see below), but lacking that, the idea is to take what I can get, or at least get an idea of what is out there. Using Google Desktop Search as a reference model, the principal additional requirement is that it is usable from clients over the network. Some other notes (none of these are hard requirements):

        - The content is always files, residing on a single server, accessible from Samba shares
        - Standard MS Office document fare, plus a lot of rars and zips which it is necessary to search inside
        - Permissions support, allowing user-based control to reflect current permission access in the Samba shares
        - The user base will remain fairly static, so manual management of users is fine
        - The majority of users will be Windows-based

    I know there are plenty of search indexers out there: Beagle and Tracker seem to be the most popular. Most do not seem to offer access control, and web-based/remote search does not seem to be a high priority. I've also seen a recent post on the Samba mailing list asking for pretty much the exact same thing. (They mention a product called IBM OmniFind Yahoo! Edition, and while their initial reception seems positive, I am pretty skeptical. RHEL 4? Firefox 2? Updated much?)

    Edit: similar question here. What else is out there? Are you in a similar situation? What do you use?

  • replacing the default console emulator under Windows XP

    - by Gilles
    How can I replace the default program providing console windows under Windows XP? I know of alternative programs, and I have a shortcut to start cmd.exe in Console2. But now I want console applications to start in Console2 rather than the default console program, even when I have no control over the program that starts the console application. (I.e. a non-console program starts consoleapp.exe, and I can't change it to start Console2 instead, but I still want the application to be started inside a new instance of Console2.) Note that I want to replace the console itself, that is, the window in which console (i.e. text mode) applications run, and I must be able to run arbitrary, unmodified console applications: a substitute for a specific console program such as cmd won't do me any good.

    EDIT: So what I'm after is a CSRSS replacement, which leads to off-topic replies like "I want to know when Microsoft is going to make a decent CSRSS replacement. Not being able to adjust the width of a 'terminal' by resizing the window is a complete joke. Go download the ISE already. (It's included in Win7/2008R2.)" But as far as I understand, that ISE is an environment for PowerShell, not a general console emulator.
