Search Results

Search found 5318 results on 213 pages for 'managed c'.

  • How can I make Cygwin open a new window each time I use a Windows 7 keyboard shortcut?

    - by Michael Gundlach
    [Update: The short answer is: if an application is the 3rd thing on your taskbar, press WindowsKey+Shift+3 to open a new instance. Hooray!] I have Chrome and Cygwin on my taskbar. Chrome's shortcut is Ctrl-Alt-C (as set by right-clicking the icon and putting Ctrl-Alt-C into Chrome - Properties - Shortcut Key). Cygwin's shortcut is Ctrl-Alt-T. When I press Ctrl-Alt-C, I get a new Chrome window. Great! It's as if I had shift-clicked on the Chrome icon. When I press Ctrl-Alt-T, I get a Cygwin window the first time, but after that I just get focus on the existing Cygwin window -- as if I had simply clicked on the Cygwin icon rather than shift-clicked, and as if Cygwin weren't able to have more than one instance open. I can still shift-click the icon to get more instances. I've tried keys other than Ctrl-Alt-T and got the same behavior. Strangely, I've twice managed to get it into a state (just by clearing and setting the shortcut key over and over) where a shortcut WOULD open multiple instances -- but it was Ctrl-Alt-G both times, which doesn't make sense to my brain, which has been trained to use Ctrl-Alt-T for years. Ctrl-Alt-G usually behaves just as poorly as Ctrl-Alt-T, except for the two times when it magically started behaving properly. So I'm thinking this is a Windows 7 bug (one which has existed since Windows XP at least), but I'm hoping someone knows something I don't :)

  • Correct way to set up office network - 8 workstations, a file server and a staging server

    - by naunu
    Our office had an old-school Windows 2003 domain setup; our server caught fire, and now we are looking to do it right from scratch. Here is what we need: 5 PC and 3 Mac workstations for web development, each with a WAMP/MAMP setup managed by its developer; a file server for assets; and a LAMP server with an external IP for staging. Here is what we have to work with: 5 IP addresses, a brand-new PC file server with Windows 2008 SE, a D-Link DSS-16+ 16-port switch, a Belkin 5-port wireless router, and a cable modem with 4 ports. How I have it set up now (this is a temporary makeshift setup):

        Cable modem -> LAMP server, wireless router
        Wireless router -> switch -> all of the workstations and the file server (set up as a workgroup)

    We have noticed our internet is very slow with all of us plugged into the switch and the switch plugged into the router. I am not positive, but I think it is because our router does not have NAT. We are also having problems with the Macs' connection to the network drive - it keeps disconnecting. I want this done right, and we have a ~$600 budget to buy anything else we need. Does anybody have any advice for me? Should I set up a domain or a workgroup?

  • Use Mac OS X Server As Development Environment

    - by macinjosh
    I've installed Mac OS X Server 10.6.3 on my laptop to use as my normal OS. I do a lot of web development and thought it would be handy to run OS X Server so I could more easily manage my local development environment (Apache virtual hosts, hostnames for each local site, etc.). I'm really enjoying the new setup except for one problem: DNS. My ideal situation would be to add a site (some-site.local) in the Web service and then go to the DNS service and add a primary record for the new site. I actually got this working at one point, but after a reboot it stopped working! The records look the same as they did before the reboot, but the site doesn't come up in Safari. Here is a list of my needs:

        - Need to be able to add new domains at a whim
        - Domains always map to a site on the same box's Web service
        - Local & external IPs often change
        - It would be nice if it worked on any network (i.e. WiFi at the airport or coffee shop)
        - Sites only need to be accessible locally
        - Configuration should stay put even after rebooting

    I've done some googling and used this as a bit of a guide. In the past I've used MAMP and then just a local Apache/PHP/MySQL install with a manually managed hosts file. I'd rather not go back.
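
    (For reference, the manually managed hosts-file approach mentioned above amounts to an entry like the following sketch, reusing the some-site.local example from the question.)

        # /etc/hosts -- manual mapping, the setup the asker wants to move away from
        127.0.0.1    some-site.local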

  • Apache with PHP FastCGI keeps going down

    - by Josh Nankin
    I have an apache2 server configured with MPM worker and PHP FastCGI. Lately the Apache logs have been telling me that MaxClients is being reached frequently, even though it's already pretty high. My server is now constantly going down, and I see a bunch of lines like this in the log:

        [Sun Mar 06 04:25:40 2011] [error] [client 50.16.83.115] FastCGI: comm with (dynamic) server "/var/local/fcgi/php-cgi-wrapper.fcgi" aborted: (first read) idle timeout (20 sec)
        [Sun Mar 06 04:25:40 2011] [error] [client 50.16.83.115] FastCGI: incomplete headers (0 bytes) received from server "/var/local/fcgi/php-cgi-wrapper.fcgi"

    I can see that my php-cgi processes are pretty large (about 70 MB on average). Here's my Apache configuration for MPM worker:

        KeepAlive On
        KeepAliveTimeout 2
        <IfModule mpm_worker_module>
            StartServers 5
            MinSpareThreads 10
            MaxSpareThreads 10
            ThreadLimit 64
            ThreadsPerChild 10
            MaxClients 20
            MaxRequestsPerChild 2000
        </IfModule>

    Here's my FastCGI Apache configuration:

        <IfModule mod_fastcgi.c>
            # One shared PHP-managed fastcgi for all sites
            Alias /fcgi /var/local/fcgi
            # IMPORTANT: without this we get more than one instance
            # of our wrapper, which itself spawns 20 PHP processes, so
            # that would be Bad (tm)
            FastCgiConfig -idle-timeout 20 -maxClassProcesses 1
            <Directory /var/local/fcgi>
                # Use the + so we don't clobber other options that
                # may be needed. You might want FollowSymLinks here
                Options +ExecCGI
            </Directory>
            AddType application/x-httpd-php5 .php
            AddHandler fastcgi-script .fcgi
            Action application/x-httpd-php5 /fcgi/php-cgi-wrapper.fcgi
        </IfModule>

    Here's my FastCGI wrapper:

        #!/bin/sh
        PHPRC="/etc/php5/apache2"
        export PHPRC
        PHP_FCGI_CHILDREN=8
        export PHP_FCGI_CHILDREN
        exec /usr/bin/php-cgi

    Any help would be very much appreciated!
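
    (An editorial aside, hedged: a quick way to sanity-check the numbers above -- how many php-cgi children are actually alive, and how big each one is -- is a one-liner like the following; the process name is the one spawned by the wrapper.)

        # Show PID and memory (RSS, in KB) for each running php-cgi, sorted by size
        ps -C php-cgi -o pid=,rss=,cmd= | sort -k2 -n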

  • Do registry issues with Win7 persist through a recovery from a system image?

    - by user59089
    So I need a bit of advice, please; here's my situation: I have 1) a system image on a brand new external 1 TB SATA drive, which I managed to capture successfully before 2) my primary system drive went down. I realize this is a fairly simple matter of buying a new primary drive and performing the recovery to the fresh disk... however, the issue is that I believe Win7 was also having some significant issues of its own -- basically, Windows Update unable to install updates, and Backup continually ditching the automatic backup schedule. I'd been trying to address those issues when my system was still working, but it was so fruitless that I'm convinced a Win7 re-install would be best. Now I'm concerned that if I was in fact having what I believe are likely registry-related issues before, these will persist through a recovery -- would that likely be correct? I'm mainly worried about recovering my files, so if I did a full recovery from the image, should I be able to then access my individual files and copy them manually to an external drive, so I can then do a full re-install of Win7? Sorry if this seems obvious, but I've never done a recovery before and am just trying to make sure there are no red flags with what I have in mind...

  • Slow File Copy observed copying 40GB files across network to iSCSI device

    - by Rick
    Here's a curious one for the gurus. Setup:

        Source machine: Windows Server 2003 R2 with a local hard drive holding a 40 GB VHD file; 1 x 1 Gbps network card, Cat6 cable, switch.
        Target machine: Windows Server 2008 R2 with an iSCSI connection to an iSCSI target on a separate machine (1 TB, RAID 5); 1 x 1 Gbps network card, Cat6 cable, connected to the same switch as the source machine; a second 1 Gbps network card, Cat6 cable, connected via an isolated switch to the iSCSI target.
        Switches are Netgear JGS524 models (web managed).

    If I copy from the Win2003R2 machine to the Win2008R2 machine's local drive, I get 40 GB in 45 minutes, 36 seconds. If I copy from the Win2008R2 machine to the iSCSI target (local drive to iSCSI target), I get 40 GB in 37 minutes, 56 seconds. If I copy from the Win2003R2 machine to the iSCSI target via the Win2008R2 machine, I get 40 GB in 3 hours, 50 minutes, 24 seconds. All copies were done via the following command issued on the Win2008R2 box:

        XCOPY <source> <target> /J

    (XCOPY /J copies using unbuffered I/O; recommended for very large files.) So, what's the bit I'm missing here? Why does a back-to-back copy take in total 1 hour, 23 minutes, 32 seconds, while the "straight through" copy takes almost 3 times as long? The switches show no errors, and the network hovers around the 3% utilisation mark for the duration of the copy (whereas the "back-to-back" copies are around the 25% utilisation mark). What have I missed?

  • remove tasksel lamp-server

    - by RickyA
    I was tricked into running "sudo tasksel install lamp-server" on the wrong server (Ubuntu 10.04). Now I am stuck with a system where Apache won't start because of an "Address already in use: make_sock: could not bind to address 0.0.0.0:80" error. I now want to remove this task, but the documentation on that crappy tasksel says you can't use it to uninstall stuff (!!!???). My question is: where can I see what packages it installed, and how can I get rid of a selection of them (apt-get?)? I want to keep Apache, but MySQL, PHP and the other stuff can go... [edit] I managed to get rid of most of the LAMP stack (/var/log/dpkg.log is useful for recently installed packages). However, it did something in a configuration somewhere, and now two Apache instances start at boot time. Killing the first one and starting a new one gets rid of the "could not bind to address..." error. Does anyone know where the startup of the first one is configured?
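
    (A hedged aside, not from the original post: tasksel can at least enumerate a task's package set, which makes a targeted purge possible. Review the list before purging anything -- it will include apache2, which the asker wants to keep.)

        # List the packages that make up the lamp-server task
        tasksel --task-packages lamp-server
        # Then purge only the unwanted ones, e.g. (example subset only):
        sudo apt-get purge mysql-server php5 libapache2-mod-php5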

  • What command should be used to connect via SSH through squid proxy?

    - by Raul Cardoso
    I have set up a Squid HTTP proxy (on CentOS) and intended to use it also for SSH connections. I managed to configure PuTTY (on a Windows client) to use this proxy when connecting by SSH, and confirmed on the target host that the SSH connection was coming from the proxy server's IP instead of the Windows client's IP. Used:

        targethost:22 for SSH
        proxyserv:3128 for the proxy (along with proxy credentials)

    I'm now having problems connecting to the target host using Ubuntu and the same proxy server. I have tried the following:

        me@mycomp:~$ connect-proxy -H test@proxyserv:3128 targethost 22
        Enter proxy authentication password for test@proxyserv:
        SSH-2.0-OpenSSH_6.2p2 Ubuntu-6

    It hangs on the last line, expecting some input. All attempts resulted in a "Protocol mismatch." error. PuTTY successfully connects to the HTTP proxy and sends credentials, showing me the SSH login right away.

        - How do I do (with commands in Ubuntu) the same thing PuTTY does?
        - Is there any other way than the connect-proxy command to do this?

    Edit: Also tried the following, with the same result ("Protocol mismatch"):

        me@mycomp:~$ connect-proxy -H test@proxyserv:3128 targethost 22 ssh -l myshel_login

    Thanks in advance.

    Edit: Solution details (thanks to NickW for pointing the right way): installed corkscrew and added to ssh_config:

        Host targethost
        ProxyCommand corkscrew proxyserv 3128 %h %p /etc/ssh/proxypass

    created the proxypass file:

        login:password

    Restarted ssh and used a simple ssh command, ssh mylogin@targethost (the ssh password was asked for as usual).
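
    (An editorial note, hedged: "Protocol mismatch." is typically what you see when the raw SSH banner ends up on a terminal instead of being fed to an ssh client -- connect-proxy is meant to run as ssh's ProxyCommand, not by hand. A one-off equivalent of the corkscrew solution above might look like:)

        # connect-proxy opens the HTTP CONNECT tunnel; ssh speaks through it
        ssh -o ProxyCommand='connect-proxy -H test@proxyserv:3128 %h %p' mylogin@targethost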

  • IIS 7.5 - WFF and ARR farm management

    - by smackaysmith
    We have two test web farms (IIS 7.5). The Florida web farm has two ARR servers and two content servers. The ARR servers have WFF and NLB installed, and the ARR setup uses a shared config located on a file share. The content servers do not have WFF installed. There is one web farm, and it's managed on an ARR server. The Illinois web farm also has two ARR servers and two content servers. The ARR servers have WFF and NLB installed, and they use a shared config located on a share. One of the content servers has WFF installed, which makes it the controller; it's also the primary content server. Apparently, Illinois isn't properly configured. From what we've pieced together from various IIS.net articles and this post (http://ruslany.net/2010/07/web-farm-framework-2-0-overview/), the controller should be one of the ARR servers (like our Florida setup). The thing is, Florida's controller doesn't have a primary server, nor can you set one of the content servers as primary. It also doesn't have the management piece showing the trace messages when you click the Servers node (in the IIS console, Server Farms/FLFarm/Servers; see http://ruslany.net/wp-content/uploads/2010/07/WebFarm8.png). That management piece does exist in the Illinois farm, but that's a bad configuration. What are we missing, such that our Florida configuration doesn't have the primary and secondary content servers and the management piece? I have looked for IIS role differences, but there are none.

  • undelete big files - mission impossible?

    - by johnrembo
    Hi, I've accidentally deleted an outlook.pst (6.7 GB) file while there was only 400 MB of free space left on the primary NTFS partition (WinXP). I've tried several recovery tools to get this file back. "Ontrack Easy Recovery Pro" found 0 PST files (complete scan mode), while "Recover My Files" in sector scan mode found 5 PSTs, but 4 of them were from 3 to 28 KB in size, while the 5th was 1 GB. I managed to successfully recover the 1 GB PST file, which was a 1-year-old copy (the one used after the latest Windows reinstall). Now I'm frustrated and confused. Why was a 1-year-old file successfully recovered if there was only 400 MB left on the primary partition? Where has the 6.7 GB file gone? I did some reading (i.e. here), and it seems that there's almost no probability of retrieving the file I'm looking for. But wait - none of the recovery tools I've used found a zero-sized PST file. Moreover, if the file is merely corrupted due to fragmentation, we could use scanpst.exe to fix some errors and survive with 10 or 100 emails missing - whatever. Could you please recommend some more sophisticated recovery tools for this particular task? Appreciate your help - thanks in advance.

  • How to get multipath working for Ubuntu Server 12.04

    - by mlampi
    I'm working on a project which aims to make use of Ubuntu servers running on enterprise-class hardware. In our case that means IBM HS23E blade servers, QLogic 4 Gb Fibre Channel expansion cards, and a quite old IBM DS4500 disk array with two controllers. At the moment we have Fibre Channel as the only boot option; Ubuntu Server 12.04 installed just fine and is able to boot without multipath. I'm not a Linux professional myself, but in our team we have people who will understand the technical stuff. Don't let my post confuse you :) The current situation is that we have only one Fibre Channel connection to a single disk array controller. A real-life setup would of course be quite different: at minimum we should have two Fibre Channel ports connected to two different switches and two different controllers. However, we have no idea how to set up the multipath tool. Is DM-MPIO the right software? At minimum we should be able to boot when multiple connections are available and achieve fault tolerance when any of them goes down. Since the disk array is not the latest hardware, I managed to find RDAC driver sources only for the 2.6.x kernel, and we have 3.2.x. Another issue is building a multipath.conf. The said driver sources are from IBM support, and the QLogic drivers provided to the Ubuntu installer are from the Ubuntu site. It seems that RHEL and SLES would have near out-of-the-box support, but that is not an option for our project. Actual questions:

        - What is the recommended multipath software for Ubuntu Server 12.04?
        - Are there pre-made configurations or templates available? Does it require disk-array/controller-specific settings, or does a more generic config work?
        - Do you have experiences with a similar setup and would you like to share the knowledge?

    I'll provide you with any additional information you might require. Thanks in advance.
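
    (A hedged sketch, not from the original post: with the multipath-tools package -- the userland side of DM-MPIO -- a minimal /etc/multipath.conf has roughly this shape. The device-section values below are placeholders; the DS4500's real vendor/product strings would have to come from the array's documentation or from `multipath -ll` output.)

        defaults {
            user_friendly_names yes
        }
        devices {
            device {
                vendor  "IBM"           # placeholder -- verify against the array
                product "PLACEHOLDER"   # hypothetical product string
                path_grouping_policy group_by_prio
            }
        }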

  • Hard drive benchmark values show very slow write speed

    - by John
    I recently started to have issues with my laptop being very slow. I ran a hard drive benchmarking tool (by ATTO) that showed the write speed was very, very slow on my boot drive. I ran the same benchmark on my USB drive, and it was 650 times faster than my boot drive when it came to writing; reading is very fast/normal on both. I swapped in an identical drive and ran the same benchmark. This time the drive showed a proper write speed. Thinking that I had a hard drive going bad, I cloned the old one onto the new one - and I managed to clone the problem too. Anyone have any ideas on what in WinXP SP3 might be causing the write issues? I am on a corporate network and we have commercial anti-virus software installed (AVG, I think). I regularly run Defraggler and have about 40 GB free on a 100 GB drive. The machine has 4 GB of memory. Any ideas? TIA, J

  • Limiting access in Silverlight/PivotViewer

    - by sparaflAsh
    I'm going to deploy a PivotViewer application. As some of you might know, this Silverlight application loads a .cxml index file for a group of images. My need is to make the .cxml file and the image files inaccessible to the user. Without that requirement, I usually code like this in C#, with the file hosted in the document root:

        _cxml = new CxmlCollectionSource(new Uri("http://www.myurl.it/Collection.cxml", UriKind.Absolute));

    This means that my .cxml, and then the images, are available over HTTP to everyone who knows the URI. I'm a newbie to server configuration, so any help/hint would be deeply appreciated. Someone suggested taking the files out of the root, but it seems I can't go and pick them up from Silverlight if they aren't a URL; at least I didn't manage to understand how. Someone else suggested playing with the web.config file to hide the URLs, but I don't really know where to start. My question is: what's the best practice to hide my stuff? Obviously I can edit the question if you need more details.

  • Transpose matrix-style table to 3 columns in Excel

    - by polarbear2k
    I have a matrix-style table in Excel where B1:Z1 are column headings and A2:A99 are row headings. I would like to convert this table to a 3-column table (column heading, row heading, cell value). It does not matter in what order the new table is:

           A   B   C   D           A   B   C             A   B   C
        1      H1  H2  H3       1  H1  R1  V1         1  H1  R1  V1
        2  R1  V1  V2  V3   =>  2  H1  R2  V4   or    2  H2  R1  V2
        3  R2  V4  V5  V6       3  H1  R3  V7         3  H3  R1  V3
        4  R3  V7  V8  V9       4  H2  R1  V2         4  H1  R2  V4
                                5  H2  R2  V5         5  H2  R2  V5
                                6  H2  R3  V8         6  H3  R2  V6
                                7  H3  R1  V3         7  H1  R3  V7
                                8  H3  R2  V6         8  H2  R3  V8
                                9  H3  R3  V9         9  H3  R3  V9

    I've been playing around with the OFFSET function to create the whole table, but I feel like a combination of TRANSPOSE and V/HLOOKUP is required. Thanks.

    EDIT: I have managed to come up with the correct formulas. If the data is in Sheet1 as in my example above, the formulas go in Sheet2:

        [A1] =IF(ROW() <= COUNTA(Sheet1!$B$1:$Z$1)*COUNTA(Sheet1!$A$2:$A$99), OFFSET(Sheet1!$A$1,0,IF(MOD(ROW(),COUNTA(Sheet1!$B$1:$Z$1))=0,COUNTA(Sheet1!$B$1:$Z$1),MOD(ROW(),COUNTA(Sheet1!$B$1:$Z$1)))),"")

        [B1] =IF(ROW() <= COUNTA(Sheet1!$B$1:$Z$1)*COUNTA(Sheet1!$A$2:$A$99),OFFSET(Sheet1!$A$1,IF(MOD(ROW(),COUNTA(Sheet1!$A$2:$A$99))=0,COUNTA(Sheet1!$A$2:$A$99),MOD(ROW(),COUNTA(Sheet1!$A$2:$A$99))),0),"")

        [C1] =IF(ROW() <= COUNTA(Sheet1!$B$1:$Z$1)*COUNTA(Sheet1!$A$2:$A$99),OFFSET(Sheet1!$A$1,IF(MOD(ROW(),COUNTA(Sheet1!$A$2:$A$99))=0,COUNTA(Sheet1!$A$2:$A$99),MOD(ROW(),COUNTA(Sheet1!$A$2:$A$99))),IF(MOD(ROW(),COUNTA(Sheet1!$B$1:$Z$1))=0,COUNTA(Sheet1!$B$1:$Z$1),MOD(ROW(),COUNTA(Sheet1!$B$1:$Z$1)))),"")

    The formulas are limited to B1:Z1 for the headings and A2:A99 for the rows (these can be increased to their maximums if required). The COUNTA() formula returns the number of cells that actually have values, which limits the number of rows returned to headings × rows; otherwise the formulas would go on forever because of the MOD function.

  • Linux: How do I remove bootchart from the boot process?

    - by java.is.for.desktop
    Hello, everyone! I have OpenSUSE 11.2. I removed bootchart and forgot to run mkinitrd. Now, right at the start of the boot process, I get:

        boot/93-bootchart.sh: line 17: 462 Terminated stopinitrd 5

    I can't find any 93-bootchart.sh anywhere. Failsafe boot mode doesn't help. Earlier I got an error message about a non-existent /sbin/bootchartd, but I just copied /bin/cat to /sbin/bootchartd using a GParted boot disk. I tried to use chroot with an OpenSUSE boot disk, but mkinitrd can't find the root device, which is actually there (/dev/sda5). How can I make my system boot again?

    EDIT: OK, now I've managed to re-install the bootchart rpm using an OpenSUSE boot disk and chroot. The system starts again. But that annoying bootchart is still there. I will not try to remove it again; first I will try to figure out how to disable it during the boot process. Hopefully with your help ;)
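
    (A hedged note on the likely mechanics, plus a sketch: the 93-bootchart.sh script lives inside the initrd image rather than on the root filesystem, which would explain why it can't be found anywhere -- and why rebuilding the initrd is the usual fix. mkinitrd failing to find the root device from a rescue system is commonly a missing-bind-mount problem. The paths below assume the /dev/sda5 root from the question.)

        # From the rescue/boot disk: mount the root fs and the pseudo-filesystems,
        # then rebuild the initrd inside a chroot
        mount /dev/sda5 /mnt
        mount --bind /dev  /mnt/dev
        mount --bind /proc /mnt/proc
        mount --bind /sys  /mnt/sys
        chroot /mnt mkinitrd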

  • Windows Vista - Can Boot into Safe Mode Command Prompt, Cannot Access Flash Drive

    - by Adam
    Hey there, got a laptop dropped into my lap. It has Vista installed. A normal boot never gets past the login screen; safe mode will get past it, but just barely. I can try to start Task Manager at that point, but it never opens. The only reliable way I've got to do anything is the safe mode command prompt... it boots up and logs in fine. I can't see anything noticeably wrong via regedit, but it's been a long time since I've had to fix up a Windows box... not sure that I would. The problem I'm having is that I want to run ComboFix/etc., but have no way to get them on there... When I pop a flash drive in, it seems to mount it (the flash drive flickers as normal) but it never seems to be mounted - I cannot access it through any drive letter on the command line. I managed to start Device Manager (devmgmt.msc) and the flash drive was recognized and listed... Any ideas on how to get this thing going again? (Short of a reinstall. It has no CD drive either, so burning files to CD would not be easy...) Thanks! Adam
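
    (A hedged guess at a next step, not from the original post: a drive that mounts but gets no letter can often be assigned one from the command line with diskpart; the volume number below is hypothetical.)

        diskpart
        DISKPART> list volume
        DISKPART> select volume 3      (pick the flash drive from the list)
        DISKPART> assign letter=E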

  • GRE Tunnel over IPsec with Loopback

    - by Alek
    Hello, I'm having a really hard time trying to establish a VPN connection using a GRE-over-IPsec tunnel. The problem is that it involves some sort of "loopback" connection which I don't understand -- let alone am able to configure -- and the only help I could find relates to configuring Cisco routers. My network is composed of a router and a single host running Debian Linux. My task is to create a GRE tunnel over an IPsec infrastructure, which is particularly intended to route multicast traffic between my network, which I am allowed to configure, and a remote network, for which I only have a form containing some setup information (IP addresses and phase information for IPsec). For now it suffices to establish communication between this single host and the remote network, but in the future it will be desirable for the traffic to be routed to other machines on my network. As I said, this GRE tunnel involves a "loopback" connection which I have no idea how to configure. From my previous understanding, a loopback connection is simply a local pseudo-device used mostly for testing purposes, but in this context it might be something more specific that I have no knowledge of. I have managed to properly establish the IPsec communication using racoon and ipsec-tools, and I believe I'm familiar with the creation of tunnels and the addition of addresses to interfaces using ip, so the focus is on the GRE step. The worst part is that the remote peers do not respond to ping requests, and debugging the general setup is very difficult due to the encrypted nature of the traffic. There are two pairs of IP addresses involved: one pair for the GRE tunnel peer-to-peer connection and one pair for the "loopback" part. There is also an IP range involved, which is supposed to provide the final IP addresses for the hosts inside the VPN. My question is: how (or if) can this setup be done? Do I need some special software or another daemon, or does the Linux kernel handle every aspect of the GRE/IPsec tunneling? Please inform me if any extra information could be useful. Any help is greatly appreciated.
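
    (To the last question, a grounded note: GRE itself is handled entirely by the kernel (the ip_gre module), so no extra daemon is needed beyond racoon for the IPsec side. A sketch of the GRE piece with iproute2 follows -- every address is a placeholder standing in for the pairs on the form.)

        # Outer pair: the GRE endpoints (this traffic rides inside the IPsec SAs)
        ip tunnel add gre1 mode gre local 198.51.100.2 remote 203.0.113.2 ttl 255
        # Inner pair: the tunnel's own point-to-point addresses
        ip addr add 10.0.0.1/30 dev gre1
        ip link set gre1 up
        # Route the VPN address range through the tunnel
        ip route add 10.8.0.0/24 dev gre1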

  • How can I wipe my iPod classic and fix any bad sectors on the hard drive without killing it?

    - by Sam Meldrum
    My iPod never finishes syncing and only syncs audio, not pictures or video - any ideas as to how I can fix it? My iPod classic 160 GB worked well for a couple of years. I used to sync a lot of photos at full resolution to it, but this recently stopped working after I moved to Windows 7.

        - iTunes is on the latest version: 9.1.1.12
        - iPod software is up to date: 1.1.2
        - Windows 7 is fully up to date and patched

    The symptoms are that the iPod will start to sync, all audio (music and podcasts) will sync successfully, but the sync then just appears to continue with the iTunes message "Syncing iPod. Do not disconnect." This sync never completes - I have left it trying for days. I have tried resetting the iPod using the Restore button, whereupon it restarts the sync from default options and again syncs audio, but nothing else. I suspect that something has gone wrong on the hard drive - either a bad sector or some corrupt data. Is there a process I can go through to fix this, e.g. SpinRite or a format? If so, how do I go about formatting an iPod, and will it be recognised as an iPod after the format and work as normal? Any advice on what to try next is much appreciated.

    Update: I have eliminated problems with the files, the PC or iTunes, as they sync fine to other iPods. I have also eliminated the cable by trying different cables which work with other iPods. What I'd really like to know is if there is any way to more fundamentally wipe the iPod safely, attempt to repair any bad sectors on the hard drive, and then start from scratch. Anyone ever managed this?
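
    (A hedged pointer, not from the original post: an iPod classic in disk mode shows up as an ordinary USB disk, so Windows' own surface scan can be pointed at it; the drive letter below is hypothetical.)

        chkdsk F: /R    (locates bad sectors and recovers readable information)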

  • Running WordPress and Ghost on Apache with mod_proxy

    - by Jack Perry
    I currently have three WordPress sites hosted on Apache with virtual host files to direct the right domain to the right DocumentRoot. Ghost (node.js) just came out and I've wanted to tinker with it and just play around on one of my spare domains. I'm not really interested in moving over to nginx, so I'm trying to get Ghost working on Apache via mod_proxy. I've managed to get Ghost working on my spare domain, but I think there's a problem with my virtual host files, as all of my other domains start pointing to Ghost as well. Here are two virtual host files: one for my main WordPress site, which works fine, and the second for Ghost. Domains removed and replaced with DOMAIN and DOMAIN2.

    DOMAIN:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName DOMAIN.com
            ServerAlias www.DOMAIN.com
            DocumentRoot /var/www/DOMAIN.com/public_html
            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /var/www/DOMAIN.com/public_html>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    DOMAIN2:

        <VirtualHost IP:80>
            ServerAdmin EMAIL
            ServerName DOMAIN2.com
            ServerAlias www.DOMAIN2.com
            ProxyPreserveHost on
            ProxyPass / http://IP:2368/
        </VirtualHost>

    I get the feeling I'm not working with virtual hosts or mod_proxy right, and Google-fu has let me down after many suggested attempts. Any ideas? Thanks!
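
    (An editorial aside, hedged: one plausible culprit given the symptom is the mix of <VirtualHost *:80> and <VirtualHost IP:80> -- when a connection's address matches an IP-based vhost, Apache resolves names only among vhosts with that same address, so the wildcard WordPress vhosts never get considered. A sketch of a consistent Ghost vhost, assuming Ghost listens on 127.0.0.1:2368:)

        <VirtualHost *:80>
            ServerName DOMAIN2.com
            ServerAlias www.DOMAIN2.com
            ProxyPreserveHost On
            ProxyPass / http://127.0.0.1:2368/
            ProxyPassReverse / http://127.0.0.1:2368/
        </VirtualHost>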

  • Managing ip6tables (or an alternative firewall) remotely

    - by Matthew Iselin
    I'm working with IPv6 and have run into an issue configuring ip6tables on our main router in order to control what can come into the network. A default DROP policy in the FORWARD chain has worked well (obviously leaving ESTABLISHED,RELATED as ACCEPT) to keep internal clients' open ports from being accessed. However, running an ip6tables command for every little change is unwieldy. While we are able to continue creating rules manually, I'm wondering if there's some sort of management interface we could use to create the rules quickly and easily. We're looking to save time working on our firewall, as well as to provide a simple method for modifying rules for those who will eventually replace us. I know Webmin (heavily locked down on our network, naturally) has support for modifying iptables rules, but seemingly no support for ip6tables. Something similar would be fantastic. Alternatively, suggestions for a firewall solution apart from iptables/ip6tables which can be managed remotely wouldn't be out of order. A web interface for management is certainly preferable, even if it is just a wrapper with shiny buttons over the raw config files.
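
    (For reference, the baseline described above amounts to something like the following sketch; -m conntrack --ctstate is the modern spelling of -m state --state.)

        ip6tables -P FORWARD DROP
        ip6tables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT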

  • Surprising corruption and never-ending fsck after resizing a filesystem.

    - by Steve Kemp
    The system in question has Debian Lenny installed, running a 2.6.27.38 kernel. The system has 16 GB of memory and 8 x 1 TB drives running behind a 3Ware RAID card. The storage is managed via LVM. Short version: we were running a KVM guest which had 1.7 TB of storage allocated to it. The guest was reaching a full disk, so we decided to resize the disk that it was running upon. We're pretty familiar with LVM and KVM, so we figured this would be a painless operation:

        1. Stop the KVM guest.
        2. Extend the size of the LVM volume: "lvextend -L+500Gb ..."
        3. Check the filesystem: "e2fsck -f /dev/mapper/..."
        4. Resize the filesystem: "resize2fs /dev/mapper/..."
        5. Start the guest.

    The guest booted successfully, and running "df" showed the extra space; however, a short time later the system decided to remount the filesystem read-only, without any explicit indication of error. Being paranoid, we shut the guest down and ran the filesystem check again. Given the new size of the filesystem we expected this to take a while; however, it has now been running for 24 hours and there is no indication of how long it will take. Using strace I can see the fsck is "doing stuff"; similarly, running "vmstat 1" I can see that a lot of block input/output operations are occurring. So now my question is threefold:

        1. Has anybody come across a similar situation? Generally we've done this kind of resize in the past with zero issues.
        2. What is the most likely cause? (The 3Ware card shows the RAID arrays of the backing stores as being A-OK, the host system hasn't rebooted, and nothing in dmesg looks important/unusual.)
        3. Ignoring btrfs + ext3 (not mature enough to trust), should we make our larger partitions in a different filesystem in the future, to avoid either this corruption (whatever the cause) or the long fsck time? xfs seems like the obvious candidate?
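
    (For reference, the five steps above spelled out as commands -- a sketch only; the guest and volume names are placeholders, not from the original post.)

        virsh shutdown guest1                   # stop the KVM guest
        lvextend -L +500G /dev/vg0/guest1-disk  # grow the logical volume
        e2fsck -f /dev/vg0/guest1-disk          # check before resizing
        resize2fs /dev/vg0/guest1-disk          # grow the filesystem to fill the LV
        virsh start guest1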

  • How do I compile DarWINE in PowerPC Mac OS X 10.4.11...?

    - by Craig W. Davis
    So far I've tried using MacPorts, which gives me this error:

        Error: Cannot install wine for the arch(s) 'powerpc' because
        Error: its dependency pkgconfig is only installed for the archs 'i386 ppc'.
        Error: Unable to execute port: architecture mismatch

    (To report a bug, etc... I'm not allowed to post two links due to being a new poster...) I've also tried using the build script I found in the DarWINE 0.9.12 SDK download from the DarWINE SourceForge.net project page, as well as the build script at http://code.google.com/p/osxwinebuilder/#Building_Wine_via_the_script. None of these attempts to build DarWINE have actually worked. Whenever I build using the DarWINE build script, I run it as follows:

        1. I decompress the WINE tarball into ~/Downloads/WINE
        2. I cd into ~/Downloads/DarWINE.
        3. I run ./winemaker ~/Downloads/WINE/wine-1.2.2 or ./winemaker ~/Downloads/WINE/wine-1.2-rc2 (the reason for trying WINE 1.2-rc2 is that some people managed to get it to build on PowerPC Macs running 10.5.8...)

    I made sure to install Xcode Tools 2.5 and all the SDKs too... The net result is either a syntax-type error from trying to run the checked-out Google Code DarWINE build script, or a bunch of make errors when trying to run the official DarWINE build script that I forcefully extracted from the DarWINE 0.9.12 SDK .dmg file using Pacifist. I'm trying to build DarWINE on a mid-April 2006 1.42 GHz eMac with a DL SuperDrive, Bluetooth 2.0+EDR and 2 GB of RAM, running 10.4.11 as I mentioned earlier... (It came with 10.4.4 on the Mac's Restore DVD-ROM that I ordered from 1-800-SOS-APPL; coconutIdentityCard told me it was made on April 12th, 2006, and I know that's right because when I reinstalled Mac OS X 10.4.4 it displayed that it was registered/previously owned by a Hawaiian school...) The make errors are:

        make[1]: winegcc: Command not found
        make[1]: *** [main.o] Error 127
        make: *** [dlls/acledit] Error 2

  • Silent install FirePro v4900 Driver on Windows Embedded 7 Standard

    - by Birgit_B
    I'm trying to install the drivers for a FirePro V4900 on a Windows Embedded 7 Standard 64-bit OS. I want the system to be as small as possible, so I would rather not install the whole Catalyst Control Center, but only the necessary drivers. Because the installation should be accomplished absolutely unattended, the installation process of the FirePro driver should also be done without any user interaction. I see two possible solutions for the problem:

        1. Install only the drivers: Is it possible to solely install the necessary drivers? How would I achieve that? This solution would be the preferred one, because of the smaller footprint.
        2. Silently install the provided "FirePro_8.911.3.3_VistaWin7_X32X64_135673.exe" (found at ATI FirePro™ Driver). Is there a way to do that?

    Thank you in advance for your support!

    Update: I managed to accomplish a silent installation. I extracted the contents of the above-mentioned installer file and ran $_OUTDIR\Bin64\Setup.exe -Install (there are some other parameters; just run Setup.exe /?). But I couldn't manage to install just the drivers without the Catalyst Control Center, and it seems the Control Center has some unfulfilled dependencies, so it crashes...

  • Connecting a network printer via a Thecus N2100 - works in Vista, not in Windows 7

    - by Jon Skeet
    I have a Lexmark E250d printer attached to a Thecus N2100 NAS. On Windows Vista I've managed to configure this using an "Internet" printer port with the URL http://thecus:631/printers/usb-printer. I can add a printer in a similar way in Windows 7, but it never manages to print the test page. If I go to "Configure Port" in Vista, it just has "Security Options"; in Windows 7 it asks about raw mode vs. LPR mode, etc. On Vista I'm using an E250d-specific driver from Lexmark; on Windows 7 there's a Microsoft E250d driver, or a universal PCL XL driver from Lexmark... I wouldn't expect this difference to be related to the problem, but I thought I'd mention it anyway. (Lexmark doesn't have a Windows 7 E250d-specific driver as far as I can see.) Any suggestions? I was thinking of upgrading my main laptop from Vista to Windows 7, but I'd really like to get this sorted first...

    EDIT: If I connect to http://thecus:631/printers/usb-printer via Chrome while capturing with Wireshark, I get this response:

        HTTP/1.1 200 OK
        Date: Wed, 06 Jan 2010 16:47:23 GMT
        Connection: Keep-Alive
        Keep-Alive: timeout=60
        Content-Language: C
        Transfer-Encoding: chunked
        Content-Type: text/html;charset=iso-8859-1

        0

    No idea what that's meant to be doing...

    EDIT: On further consultation, this would appear to be the Internet Printing Protocol, which is layered on HTTP. Printing a test page successfully from Vista POSTs to that URL. Will attempt the same on Windows 7...

  • Pre-startup segmentation fault with ptrace

    - by sfink
    I have somehow managed to mangle my computer so that any time I attempt to use something that uses ptrace to trace another process (e.g. strace, gdb), I get an immediate segmentation fault. For example:

        # strace /bin/true
        execve("/bin/true", ["/bin/true"], [/* 27 vars */]) = 0
        --- SIGSEGV (Segmentation fault) @ 0 (0) ---
        +++ killed by SIGSEGV +++

    or with gdb:

        # gdb /bin/true
        GNU gdb Fedora (6.8-27.el5)
        Copyright (C) 2008 Free Software Foundation, Inc.
        License GPLv3+: GNU GPL version 3 or later
        This is free software: you are free to change and redistribute it.
        There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details.
        This GDB was configured as "x86_64-redhat-linux-gnu"...
        (no debugging symbols found)
        (gdb) run
        Starting program: /bin/true
        Program terminated with signal SIGSEGV, Segmentation fault.
        The program no longer exists.
        You can't do that without a process to debug.

    rpm -V comes up clean on strace, gdb, and glibc. I do not have any LD_* variables set, and PATH has nothing special in it.
