Search Results

Search found 21140 results on 846 pages for 'control theory'.

Page 694/846 | < Previous Page | 690 691 692 693 694 695 696 697 698 699 700 701  | Next Page >

  • Apt pin and self hosted apt repo

    - by Hamish Downer
    We have our own apt/deb repository with a handful of packages where we want to control the version. Crucially this includes puppet, which can be sensitive to versions being different. I want our desktops to only get puppet from our repository, but also for people to be able to add their own PPAs, enable backports etc. The current problem we have is backports on Ubuntu Lucid. Some important lines from /etc/apt/sources.list:

        deb http://gb.archive.ubuntu.com/ubuntu/ lucid main restricted universe multiverse
        deb http://gb.archive.ubuntu.com/ubuntu/ lucid-updates main restricted universe multiverse
        deb http://gb.archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu/ lucid-security main restricted universe multiverse
        deb http://deb.example.org/apt/ubuntu/lucid/ binary/

    And in /etc/apt/preferences.d/puppet:

        Package: puppet puppet-common
        Pin: release a=binary
        Pin-Priority: 800

        Package: puppet puppet-common
        Pin: release a=lucid-backports
        Pin-Priority: -10

    Currently policy says:

        $ sudo apt-cache policy puppet
        puppet:
          Installed: (none)
          Candidate: (none)
          Package pin: 2.7.1-1ubuntu3.6~lucid1
          Version table:
             2.7.1-1ubuntu3.6~lucid1 -10
                500 http://gb.archive.ubuntu.com/ubuntu/ lucid-backports/main Packages
                100 /var/lib/dpkg/status
             2.6.14-1puppetlabs1 -10
                500 http://deb.example.org/apt/ubuntu/lucid/ binary/ Packages
             0.25.4-2ubuntu6.8 -10
                500 http://gb.archive.ubuntu.com/ubuntu/ lucid-updates/main Packages
                500 http://security.ubuntu.com/ubuntu/ lucid-security/main Packages
             0.25.4-2ubuntu6 -10
                500 http://gb.archive.ubuntu.com/ubuntu/ lucid/main Packages

    If I use n= instead of a= then I get "Package pin: (not found)". I'm just plain confused at this point as to what I should use. Any help appreciated.
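
    One direction worth trying, sketched below as an assumption rather than a confirmed fix: pin the local repository by origin (hostname) instead of by archive name. Local repositories often don't set an Archive: field in their Release file, so "a=" (and "n=") have nothing to match; "Pin: origin" matches on the host in the sources.list line. The hostname and priorities here are placeholders taken from the question:

        # /etc/apt/preferences.d/puppet (sketch)
        Package: puppet puppet-common
        Pin: origin "deb.example.org"
        Pin-Priority: 900

        Package: puppet puppet-common
        Pin: release a=lucid-backports
        Pin-Priority: -10

    After editing, run apt-get update and re-check apt-cache policy puppet to confirm the pin line now matches the 2.6.14-1puppetlabs1 candidate.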

    Read the article

  • Ubuntu 13.04 to 13.10: Filesystem check or mount failed [migrated]

    - by SamHuckaby
    I attempted to upgrade from Ubuntu 13.04 to 13.10 today, and mid-upgrade the system started flaking out and eventually locked up entirely. I was forced to restart the computer, and am now unable to get it to boot at all. When I boot currently, it takes me to the GRUB menu, and I can choose to boot normally or boot an older version. I have tried several things, which I list below, but no matter what, when I try to finish booting into Ubuntu, I receive the following error:

        Filesystem check or mount failed.
        A maintenance shell will now be started.
        CONTROL-D will terminate this shell and continue booting after re-trying
        filesystems. Any further errors will be ignored
        root@ubuntu-computername:~#

    I have run fsck -f and everything appears correct; no errors are reported and it passes all 5 checks. If I run fdisk -l then I get the following information:

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 4096 bytes / 4096 bytes
        Disk identifier: 0x00010824

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048   608456703   304227328   83  Linux
        /dev/sda2       608458750   625141759     8341505    5  Extended
        Partition 2 does not start on physical sector boundary.
        /dev/sda5       608458752   625141759     8341504   82  Linux swap / Solaris

        Disk /dev/sdb: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x0fb4b7e8

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1            8192   625139711   312565760    7  HPFS/NTFS/exFAT

    I am considering just installing a new OS on the other disk, which currently has nothing on it, and then attempting to scrape my data off the old disk (thankfully I didn't encrypt the files). Really my question is this: can I salvage this Ubuntu install, or should I give up and just reinstall?
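
    Before reinstalling, one thing that is cheap to check from the maintenance shell: a post-upgrade mismatch between /etc/fstab and the actual filesystem UUIDs (the swap entry is a frequent offender) can produce exactly this message even when fsck is clean. A sketch only; the device names come from the fdisk output above and may differ on your system:

        # remount the root filesystem read-write so /etc/fstab can be edited if needed
        mount -o remount,rw /

        # compare the UUIDs the kernel sees with what /etc/fstab expects
        blkid
        cat /etc/fstab

        # see exactly which mount or check failed during boot
        cat /var/log/boot.log

        # a stale swap UUID alone is enough to trigger the failure
        swapon -a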

    Read the article

  • Migrating email forwarding entries from DirectAdmin to Google App (Free edition)

    - by bobo
    I have a website hosted on a shared hosting account, and it comes with a DirectAdmin (DA) control panel. From there, I can see some email forwarding entries. I would like to migrate the email server to Google Apps, so I am going to change the MX records in DA to point to the Google email servers. For the existing email accounts that I see in DA, I will re-create them in Google Apps. But I am confused about the email forwarding entries. If I keep them in DA, will they still work after I have changed the MX records to point to the Google email servers? If not, this means I will need to re-create them in Google Apps, right? Unfortunately, Google Apps (Free edition) does not seem to allow email forwarding like DA does, unless I choose to use another edition (http://www.google.com/support/a/bin/answer.py?answer=175745). In DA, when I create an email forwarding entry such as [email protected] - [email protected], I do not need to create a dummy [email protected] email account; DA will still do the forwarding properly. The best I can do now, without upgrading the Google Apps edition, is to simply create dummy email accounts in Google Apps and set up forwarding inside each account - is this correct?

    Read the article

  • The RTL8111/8168B NIC under Linux and the r8168 driver

    - by nik
    So I've got one of the infamous R8168 Realtek ethernet NICs, which have some problems under Linux. After some research, I found out I had to use the r8168 driver for this card (and not the r8169, which still loads when nothing else is available), which I did. So now everything works fine... sort of. My download and upload rates are more than halved compared to what I should get. When I test (with e.g. speedtest) I get something like 20M (often 15M) down and 30M up, but if I test under Windows (everything otherwise identical: same ethernet cable, same connection, at the same time of day (well, 5 min apart)...) I get 50M up/down (which is what I expect). Where could this be coming from? Here's some info:

        ~ # lspci
        [...]
        06:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06)

        ~ # modinfo r8168
        filename:       /lib/modules/3.2.1-gentoo-r2/net/r8168.ko
        version:        8.027.00-NAPI
        license:        GPL
        description:    RealTek RTL-8168 Gigabit Ethernet driver
        author:         Realtek and the Linux r8168 crew <[email protected]>
        srcversion:     0A6E9F1D4E8E51DE4B6BEE3
        alias:          pci:v00001186d00004300sv00001186sd00004B10bc*sc*i*
        alias:          pci:v000010ECd00008168sv*sd*bc*sc*i*
        depends:
        vermagic:       3.2.1-gentoo-r2 SMP mod_unload
        [...]

        ~ # mii-tool -v
        eth0: negotiated 100baseTx-HD, link ok
          product info: vendor 00:07:32, model 17 rev 4
          basic mode:   autonegotiation enabled
          basic status: autonegotiation complete, link ok
          capabilities: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
          advertising:  100baseTx-HD 10baseT-FD 10baseT-HD flow-control
          link partner: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
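
    One detail that stands out above is that mii-tool reports a negotiated 100baseTx-HD link; mii-tool predates gigabit and does not report 1000baseT modes, so ethtool gives a clearer picture. A small sketch, assuming the interface is eth0:

        # show negotiated speed/duplex plus what each end advertises
        ethtool eth0

        # restart autonegotiation on the link and re-check
        ethtool -r eth0
        ethtool eth0

    If the link genuinely comes up at 100 Mb/s half duplex, that alone could explain the lost throughput, and a damaged cable or a 100 Mb switch port is at least as likely a culprit as the driver.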

    Read the article

  • cloning a kvm guest os to a vmdk file

    - by Bond
    I have a production environment with 4 guest OSes running on an Ubuntu server that uses KVM. These guests are in an LVM-based setup. I also want these virtual machines in vmdk format, so that people can run experiments with them in a VMware environment (or it could be Xen too) that is separate from the KVM server. I will not have any control over that other environment, so I want to give people vmdk images of these virtual machines. The production virtual machines will keep running on the KVM server, but the VMs used for experiments would be vmdk files (vmdk is a constraint). Here is the output of lvscan:

        ACTIVE   '/dev/abcd/lvm1'  [100.00 GiB] inherit
        ACTIVE   '/dev/abcd/lvm2'  [150.00 GiB] inherit
        ACTIVE   '/dev/abcd/lvm3'  [50.00 GiB]  inherit
        ACTIVE   '/dev/abcd/lvm4'  [100.00 GiB] inherit

    I was reading the man page of qemu-img, and what I understand is that I first need to create a qcow image file, populate it, and then convert that to a vmdk file. Is that understanding correct? Now suppose /dev/abcd/lvm4 is the virtual machine I am going to start this experiment with. I can shut down the production VMs for some time to do this. So is the following the correct way to go on server 1 (where KVM is running):

        qemu-img convert -c -f raw -O vmdk /dev/abcd/lvm4 /backup/lvm4.img

    or will it affect lvm4 on KVM server 1? I do not want the VM running on the original server to lose any of its content, but I also want a vmdk file for each of the guest OSes on KVM. Before I proceed with any of the above on the production machine, I just want to make sure I am doing the correct thing, so I am asking here.
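
    For reference, a minimal sketch of the direct conversion using the paths from the question. Reading from the logical volume does not modify it, so the source guest is unaffected as long as it is shut down while the copy is taken (to get a consistent image). No intermediate qcow image is needed, and the -c compression flag is dropped because it only applies to qcow/qcow2 output:

        # on the KVM host, with the lvm4 guest shut down
        qemu-img convert -f raw -O vmdk /dev/abcd/lvm4 /backup/lvm4.vmdk

        # sanity-check the result
        qemu-img info /backup/lvm4.vmdk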

    Read the article

  • apache 2.4, mod_proxy_fcgi not honouring .htaccess, workaround needed

    - by user229874
    I am using Apache 2.4.7 with mod_proxy_fcgi for the purpose of passing PHP through to php-fpm (this will be used in a shared hosting environment). The .htaccess works fine for non-PHP files, but once a request hits the rewrite rule that proxies PHP requests through, the .htaccess is ignored. I know why this is happening. The question is: how do I work around it? In other words, how do I force Apache to treat a request for a PHP file as a request for a local file, and then proxy it through?

    I have spent substantial time researching this problem, and the following "answers" were given as solutions:

    1) "Use Apache configuration instead of .htaccess." This is a valid solution, but not for a shared hosting environment (I am not going to give shared hosting customers access to the Apache configuration ;)).

    2) "Don't use .htaccess, as it has performance/security/other issues." Well, how else would shared hosting customers control access/URL rewriting on their site? Besides, if .htaccess were not a requirement I would simply use nginx.

    3) "Put the rewrite rule for the proxy inside of ..." - this is incorrect, and it does not work.

    This behaviour appears to be not a bug but a "feature", as per https://issues.apache.org/bugzilla/show_bug.cgi?id=54887
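
    One direction worth a look, sketched under the assumption that php-fpm listens on 127.0.0.1:9000: instead of rewriting to a proxy URL, hand PHP files to the proxy via a handler, so the request is still mapped to a file on disk and per-directory .htaccess files are processed as usual. Handler-based FCGI proxying was only added in later 2.4.x releases (around 2.4.9/2.4.10), so on 2.4.7 this would likely mean upgrading or backporting:

        # vhost / server config (sketch)
        <FilesMatch "\.php$">
            SetHandler "proxy:fcgi://127.0.0.1:9000"
        </FilesMatch>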

    Read the article

  • Resources for Smartphone Security

    - by Shial
    My organization is currently working on improving our data and network security due to increasing HIPAA requirements and a general need to get a better grasp on controlling our health-related information. We are a non-profit working with people with developmental disabilities, so we handle a lot of medical-related information. One area that has been identified as a risk is our use of smartphones, specifically at this time Windows Mobile 6.1 devices from T-Mobile. We do not utilize the VPNs on the phones, so there isn't any way they can access our databases or file servers (the username/password for the VPNs is not the domain logon). What would be exposed, however, is the particular user's email account, since you could extract the username/password and access the email either on the device or on our web email (Exchange 2003), which could contain HIPAA-protected confidential information about clients and services, and that would be an incident that would have to be reported.

    What resources or ideas would help us secure these devices? I'm not worried about data interception (we use SSL) but more about physical theft or loss of the device. Are there websites I just have not found with guidelines and suggestions, or particular products that would help protect us? I also don't want to limit the discussion to Windows Mobile: I myself am looking at an Android 2.0 device, and there is always the eventual possibility we could get pushed to enable the VPNs. I know this is a subject that likely won't have one correct answer, and it is something we should all be aware of, since these devices sit outside of our immediate control most of the time.

    Read the article

  • DD-WRT (WRT54G) and (THOMSON TG782) how to put them together?

    - by FeRtoll
    OK, so let me explain. I bought a WRT54G and successfully installed DD-WRT v24-sp1 (07/26/08) mini-special - build 9994. That's all OK, no problems with it, everything functioning normally. Just to add: I don't need wireless; wireless is always turned off.

    What I want: the cable from the ISP's router (TG782) INTERNET port (out), which previously went to my PC, is connected to the WRT54G's INTERNET port, and then WRT54G LAN port 1 goes to my PC.

    The problem: how do I connect and set this all up? I have tried many times in many different ways, but I can't get it to work if the cable from the TG782 is connected to the WRT54G's INTERNET port. If I connect the TG782 to LAN port 1 on the WRT54G and my PC to LAN port 2, then everything works fine after I set up the gateway and so on. But I want to connect the TG782 to the Internet port of the WRT54G because I need "Access Restrictions", and that only applies to traffic going through the WAN, right? Please correct me if I am wrong.

    What I have tried: the TG782 router IP is 192.168.1.1 and the WRT54G IP is 192.168.1.30, so in the WRT54G control panel I have set it up like this:

        ---- WAN Connection Type ----
        Connection Type: Automatic Configuration - DHCP
        STP: Disabled

        ---- Router IP ----
        Local IP Address: 192.168.1.30
        Subnet Mask: 255.255.255.0
        Gateway: 192.168.1.1 (the TG782)

        ---- Network Address Server Settings (DHCP) ----
        DHCP Type: DHCP Server
        Start IP Address: 192.168.1.100
        Maximum DHCP Users: 6

    This won't work; I am probably missing something more. If anyone can help I would be thankful. I should also note that I have tried setting my PC's network adapter to use the gateway of the WRT54G and the IP 192.168.1.102. In short: I can't get it to work normally, only as a switch! Thanks for any help!

    EDIT: Here is an image which may help: http://img27.imageshack.us/img27/4227/allin1w.jpg
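
    For comparison, a sketch of a layout that usually works when the WRT54G's Internet port sits behind another router: keep the TG782's subnet on the WAN side and move the WRT54G's LAN to a different subnet, so the two sides don't overlap (in the configuration quoted above, WAN and LAN are both in 192.168.1.0/24). The addresses below are examples only:

        WAN (Internet port, cable from the TG782):
            Connection Type: Static IP (or Automatic/DHCP from the TG782)
            IP Address:      192.168.1.30
            Subnet Mask:     255.255.255.0
            Gateway:         192.168.1.1
            Static DNS:      192.168.1.1

        Router IP (LAN side, what the PC connects to):
            Local IP Address: 192.168.2.1
            Subnet Mask:      255.255.255.0

        DHCP Server (LAN side):
            Enabled, Start IP Address 192.168.2.100, Maximum DHCP Users 6

    With the PC picking up a 192.168.2.x address from the WRT54G, Access Restrictions should then apply to traffic leaving through the WAN port as intended.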

    Read the article

  • FastCGI and Apache 500 error intermittently

    - by benkorn1
    I have a FastCGI (mod_fastcgi) problem. It happens every once in a while, and does not cause a complete server meltdown, just 500 errors. Here are a couple of things. First, I am using APC, so PHP is in control of its own processes, not FastCGI. Also, I have the webroot set as /var/www/html and the fcgi-bin inside it: /var/www/html/fcgi-bin. First off, here is the Apache error_log:

        [Fri Jan 07 10:22:39 2011] [error] [client 50.16.222.82] (4)Interrupted system call: FastCGI: comm with server "/var/www/html/fcgi-bin/php.fcgi" aborted: select() failed, referer: http://www.domain.com/

    I also ran strace on the 'fcgi-pm' process. Here is a snip from the trace around the time it bombs out:

        21725 gettimeofday({1294420603, 14360}, NULL) = 0
        21725 read(14, "C /var/www/html/fcgi-bin/php.fcgi - - 6503 38*", 16384) = 46
        21725 alarm(131) = 0
        21725 select(15, [14], NULL, NULL, NULL) = 1 (in [14])
        21725 alarm(0) = 131
        21725 gettimeofday({1294420603, 96595}, NULL) = 0
        21725 read(14, "C /var/www/html/fcgi-bin/php.fcgi - - 6154 23*C /var/www/html/fcgi-bin/php.fcgi - - 6483 28*", 16384) = 92
        21725 alarm(131) = 0
        21725 select(15, [14], NULL, NULL, NULL) = 1 (in [14])
        21725 alarm(0) = 131
        21725 gettimeofday({1294420603, 270744}, NULL) = 0
        21725 read(14, "C /var/www/html/fcgi-bin/php.fcgi - - 5741 38*", 16384) = 46
        21725 alarm(131) = 0
        21725 select(15, [14], NULL, NULL, NULL) = 1 (in [14])
        21725 alarm(0) = 131
        21725 gettimeofday({1294420603, 311502}, NULL) = 0
        21725 read(14, "C /var/www/html/fcgi-bin/php.fcgi - - 6064 32*", 16384) = 46
        21725 alarm(131) = 0
        21725 select(15, [14], NULL, NULL, NULL) = 1 (in [14])
        21725 alarm(0) = 131
        21725 gettimeofday({1294420603, 365598}, NULL) = 0
        21725 read(14, "C /var/www/html/fcgi-bin/php.fcgi - - 6179 33*C /var/www/html/fcgi-bin/php.fcgi - - 5906 59*", 16384) = 92
        21725 alarm(131) = 0
        21725 select(15, [14], NULL, NULL, NULL) = 1 (in [14])
        21725 alarm(0) = 131
        21725 gettimeofday({1294420603, 454405}, NULL) = 0

    I noticed that the select() seems to stay the same regardless, however the read() changes its return from 46 to some other number while it is bombing out. Has anyone seen anything like this? Could this be some sort of file locking? Thanks, Ben
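
    Not a diagnosis, but since the aborted select() points at the communication between Apache and php.fcgi rather than at PHP itself, one thing that is sometimes worth trying is giving mod_fastcgi more generous timeout/process settings for the wrapper. A sketch only; these are standard mod_fastcgi directives, but the values are placeholders to tune:

        # httpd.conf (sketch) - more headroom for dynamically managed FastCGI apps
        FastCgiConfig -idle-timeout 120 -killInterval 300 -maxClassProcesses 8

        # or, if php.fcgi is declared explicitly as a static server:
        # FastCgiServer /var/www/html/fcgi-bin/php.fcgi -idle-timeout 120 -processes 4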

    Read the article

  • IIS7 - how to place application in a folder inside application web site

    - by Nir
    I have a static web site with a blog (an asp.net application); the blog is in a subdirectory of the web site, so:

        example.com/, example.com/Something.htm, example.com/folder/somefile.htm, etc. - are all static files
        example.com/blog, example.com/blog/categories.aspx, example.com/blog/2011/11/09/post-name.aspx, etc. - all go to the blog app

    I'm upgrading the static part of the web site to a dynamic site (also an asp.net application), and the blog is incompatible with the new app (the app needs handlers and modules loaded in web.config that don't work with the blog). Also, I have to keep all the old URLs the same, so I can't move the blog to a subdomain or the new app to a folder, and the blog generates links based on its folder, so clever redirection tricks wouldn't work.

    Is there a way to place an asp.net application in a folder inside another application (either as a real or virtual folder) so that the root web.config settings don't apply to the application folder? Or some other trick I didn't think of? The system is running IIS7 on Windows Server 2008 64bit, and I have full control over the server's configuration. I can't modify the blog's source code, but I can edit its web.config and other configuration. I can modify the source of the new application, but I can't make it compatible with the blog (most of its usefulness comes from a 3rd party library that is not compatible with the blog). The blog is an asp.net 3.5 webforms application; the new root application is an asp.net 4.0 mvc application.
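
    One pattern commonly used for this situation, sketched here as an assumption rather than a tested configuration for this particular blog: mark /blog as its own application (with its own application pool, since it is 3.5/2.0 runtime vs 4.0) and wrap the root site's managed configuration in a location element with inheritInChildApplications="false", so it is not inherited by the child application:

        <!-- root web.config (sketch) - keep the new app's settings out of /blog -->
        <configuration>
          <location path="." inheritInChildApplications="false">
            <system.web>
              <!-- the new application's asp.net 4.0 settings go here -->
            </system.web>
          </location>
          <system.webServer>
            <!-- native IIS handlers/modules; if any of these still leak into /blog,
                 add <remove>/<clear> entries in the blog's own web.config -->
          </system.webServer>
        </configuration>

    The blog keeps its own web.config inside /blog; the inheritInChildApplications trick reliably stops the managed (system.web) sections from inheriting, while native system.webServer sections sometimes still need explicit removal in the child config.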

    Read the article

  • Cannot resolve Hostname to IP, but IP to hostname works

    - by dotnetdev
    I have deployed a bunch of Windows Server VMs on a cloud hosting service. These machines are all joined to a domain controller on the same service, which also hosts DNS. All of the domain-joined machines have dynamic IPs (along with the DC). If I try to resolve any of the hostnames remotely, it fails. For example, I am in SQL Server Reporting Services and I need to connect to a remote server: I provide the hostname of the desired target server and this fails, but if I provide the IP, it works. How can I pass the hostname and have it resolve to an IP? Is there anything I need to look for in the DNS server? It has records for the hostnames (in the forward lookup zone, I think), but the reverse zone is empty. Isn't it the case that forward lookup resolves IP to hostname and reverse resolves hostname to IP? Also, I don't know what the subnet mask is because this is not in my control, so the machines may not be in the same subnet - can this be a cause of the problem? Where is the problem? Thanks
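
    A few quick checks from one of the member servers can usually narrow this down. A sketch, with the DC address and domain name as placeholders:

        rem confirm the client points at the DC for DNS and has the domain's DNS suffix
        ipconfig /all

        rem query the short name and the fully qualified name, explicitly against the DC
        nslookup targetserver
        nslookup targetserver.yourdomain.local 10.0.0.10

        rem clear any cached negative answers after fixing records
        ipconfig /flushdns

    If the FQDN resolves but the short name doesn't, the machine is missing the domain's DNS suffix; if neither resolves against the DC, the A record (forward lookup) is missing or stale, which dynamic addresses make more likely unless the clients register their own records.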

    Read the article

  • Cisco Catalyst 4500 Policy Based Routing

    - by Logan
    In order to test a new firewall I just set up, I'm trying to implement policy based routing on our core switch. I want traffic from certain VLANs to be routed to the new firewall while everything else continues being routed through the old firewall. I was trying to use this guide. Everything from that guide works fine except trying to run the "ip policy route-map" command in interface configuration mode: IOS is telling me that such a command doesn't exist. A "show ip interface vlan" command says that policy routing is disabled. Any ideas? Output of "show ver":

        Cisco IOS Software, Catalyst 4500 L3 Switch Software (cat4500-IPBASEK9-M), Version 12.2(53)SG, RELEASE SOFTWARE (fc3)
        Technical Support: http://www.cisco.com/techsupport
        Copyright (c) 1986-2009 by Cisco Systems, Inc.
        Compiled Thu 16-Jul-09 19:49 by prod_rel_team
        Image text-base: 0x10000000, data-base: 0x11D1E3CC

        ROM: 12.2(31r)SG2
        Dagobah Revision 226, Swamp Revision 34

        RTTMCB2223-1 uptime is 3 years, 22 weeks, 2 days, 19 hours, 28 minutes
        Uptime for this control processor is 51 weeks, 2 days, 18 hours, 2 minutes
        System returned to ROM by power-on
        System restarted at 19:22:02 UTC Tue Jul 12 2011
        System image file is "bootflash:cat4500-ipbasek9-mz.122-53.sg.bin"
        ...
        cisco WS-C4510R (MPC8245) processor (revision 4) with 524288K bytes of memory.
        Processor board ID FOX103703W3
        MPC8245 CPU at 400Mhz, Supervisor V
        Last reset from PowerUp
        42 Virtual Ethernet interfaces
        244 Gigabit Ethernet interfaces
        511K bytes of non-volatile configuration memory.
        Configuration register is 0x2
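
    One thing worth noting: the image in the show ver output is cat4500-ipbasek9, and on the Catalyst 4500 policy-based routing has historically required the Enterprise Services feature set rather than IP Base, which would explain "ip policy route-map" simply not being accepted - worth confirming against the Cisco Feature Navigator for 12.2(53)SG. For reference, a sketch of what the PBR configuration looks like once on an image that supports it; names, VLAN and addresses are placeholders:

        ip access-list extended TEST-FIREWALL-VLANS
         permit ip 10.10.10.0 0.0.0.255 any
        !
        route-map TO-NEW-FIREWALL permit 10
         match ip address TEST-FIREWALL-VLANS
         set ip next-hop 10.10.10.254
        !
        interface Vlan10
         ip policy route-map TO-NEW-FIREWALL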

    Read the article

  • OpenVPN, install a TAP adapter

    - by GolezTrol
    When I try to connect to my work VPN using OpenVPN, the connection fails with the message:

        All TAP-Win32 adapters on this system are currently in use.

    Many sources suggest looking in Control Panel\Network and Internet\Network Connections and enabling the TAP adapter, but when I look there, there is none. I've run addtap.bat, which is provided with OpenVPN, but I still don't see any TAP adapter, and logging in to the VPN still fails. The output of addtap.bat is:

        C:\Windows\system32>"C:\Program Files (x86)\OpenVPN\bin\tapinstall.exe" install "C:\Program Files (x86)\OpenVPN\driver\OemWin2k.inf" tap0801
        Device node created. Install is complete when drivers are updated...
        Updating drivers for tap0801 from C:\Program Files (x86)\OpenVPN\driver\OemWin2k.inf.
        Drivers updated successfully.

    I used Run As Administrator for both the OpenVPN setup and addtap.bat. I've run deltapall.bat to remove any (maybe hidden) adapters; it said it removed three of them, after which I ran addtap.bat again to try to create another one. I also run OpenVPN itself as administrator. What's wrong? I'm running Windows 7 Home Premium on an HP Pavilion dv7 4050ed. It has worked before, but I recently had to reinstall my laptop, for which I used the restore disks I created when I first got it. Everything else seems to work fine.

    UPDATE: The TAP adapter is found in Device Manager, but apparently it is disabled because it is incompatible with Windows 7 64-bit. I've uninstalled OpenVPN GUI, downloaded a version that should be 64-bit compatible, and installed that. Still no cigar. Then I found a tip to install OpenVPN (version 9) after installing OpenVPN GUI, because the latter installs OpenVPN version 8. Now I have a v9 TAP driver in Device Manager, but it still doesn't work: it shows up in Device Manager with an exclamation mark, and not at all in my network devices.

    Read the article

  • Postfix does not work after setting server hostname from plesk

    - by Michael
    I have recently changed my server hostname from localhost.localdomain to xx.mydomain.com from the Plesk control panel. However, after this change Postfix has stopped working; I tried restarting and regenerating the config files, but to no avail. I am not familiar with Postfix, but I believe there is a setting to be changed in main.cf. Here are the relevant errors I receive:

        postfix/postfix-script: starting the Postfix mail system
        postfix/master: daemon started -- version 2.8.4, configuration /etc/postfix
        postfix/cleanup: fatal: host/service localhost/12768 not found: Name or service not known
        postfix/pickup: warning: maildrop/E5559996219: error writing 4FA2C996217: queue file write error
        postfix/master: warning: process /usr/libexec/postfix/cleanup pid 15334 exit status 1
        postfix/master: warning: /usr/libexec/postfix/cleanup: bad command startup -- throttling

    Any ideas?

    EDIT: Setting it back to localhost.localdomain makes it work again. The only references to localhost/12768 I can find in main.cf are:

        smtpd_milters = inet:localhost:12768
        non_smtpd_milters = inet:localhost:12768

    Should something be changed here? These two lines stay the same when I change the hostname.

    EDIT: If I comment out the two mail filter lines (the ones shown above) and restart, Postfix works with the new hostname. However, this is obviously not the ideal solution...
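
    The "host/service localhost/12768 not found: Name or service not known" error means Postfix could no longer resolve the name localhost itself, which can happen if the hostname change in Plesk rewrote /etc/hosts; the milter lines then fail even though they are unchanged. A quick sketch of what to check - the IP and hostname below are placeholders:

        # does "localhost" still resolve on the box?
        getent hosts localhost

        # /etc/hosts should still contain something along these lines:
        #   127.0.0.1    localhost localhost.localdomain
        #   203.0.113.10 xx.mydomain.com xx

        # after correcting it:
        postfix check
        /etc/init.d/postfix restart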

    Read the article

  • Windows 7 Folder Redirection (GPO)

    - by Kev
    I have been fighting this issue for a day or two now, so I am looking for some insight. I am taking over admin duties in a domain of 800 users, and the previous admins really did not employ many GPO settings for the clients of the domain. In each site, there is a location on the file server where "Home" folders were manually created, e.g. \\server\home\enduser. Whenever a user got a machine, the admin would manually right-click on the "My Documents" folder and enter the path to the home folder. We are planning to start putting Windows 7 machines on the network, and I want to automate as much as I can of what was not done in the past. Since everyone has existing "Home" folders, I have been trying to get Folder Redirection to work with a new Windows 7 machine (in a Test OU). I am getting all kinds of errors, and I can't get the Windows 7 "Documents" folder to redirect to the users' EXISTING home folders. As I stated earlier, all of the home folders were (and still are) manually created on the file server and are set with the following security permissions:

        Domain Admins      - Full Control
        euser (end user)   - Modify (everything but Full)

    Can someone point me in the right direction on the proper settings to put in the Folder Redirection GPO to get this to work with the existing home folders?
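
    A sketch of the combination that tends to work when redirecting onto pre-existing, manually created home folders - the paths are placeholders, and it is worth testing in the Test OU first:

        GPO: User Configuration > Policies > Windows Settings > Folder Redirection > Documents
          Setting:         Basic - Redirect everyone's folder to the same location
          Target folder:   Redirect to the following location
          Root path:       \\server\home\%USERNAME%
          Settings tab:
            [ ] Grant the user exclusive rights to Documents       (untick - the existing folders
                also carry Domain Admins permissions, so the exclusive-rights check fails)
            [ ] Move the contents of Documents to the new location (optional; test first)
            [x] Also apply redirection policy to Windows 2000, XP and Server 2003

    The "Grant the user exclusive rights" option is the usual culprit for errors against folders created by hand, because redirection then insists the user be the owner with sole access.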

    Read the article

  • memcache fast-cgi php apache 2.2 windows 7 creating problems

    - by Ahmad
    Hi, I am trying to run memcache and FastCGI with Apache 2.2 + PHP on a Windows 7 machine. If I don't use memcache, everything works fine; the moment I disable extension=php_memcache.dll in php.ini, everything returns to normal. Once I start Apache, the Apache logs say:

        [Wed Jan 12 18:19:23 2011] [notice] Apache/2.2.17 (Win32) mod_fcgid/2.3.6 configured -- resuming normal operations
        [Wed Jan 12 18:19:23 2011] [notice] Server built: Oct 18 2010 01:58:12
        [Wed Jan 12 18:19:23 2011] [notice] Parent: Created child process 412
        [Wed Jan 12 18:19:23 2011] [notice] Child 412: Child process is running
        [Wed Jan 12 18:19:23 2011] [notice] Child 412: Acquired the start mutex.
        [Wed Jan 12 18:19:23 2011] [notice] Child 412: Starting 64 worker threads.
        [Wed Jan 12 18:19:23 2011] [notice] Child 412: Starting thread to listen on port 80.

    After accessing the page (the page just has echo phpinfo()), I get this error in the error.log:

        [Wed Jan 12 18:20:54 2011] [warn] [client 127.0.0.1] (OS 109)The pipe has been ended. : mod_fcgid: get overlap result error
        [Wed Jan 12 18:20:54 2011] [error] [client 127.0.0.1] Premature end of script headers: index.php

    I have php_memcache.dll in my ext directory, and httpd.conf is like this:

        LoadModule fcgid_module modules/mod_fcgid.so
        FcgidInitialEnv PHPRC "c:/php"
        FcgidInitialEnv PATH "c:/php;C:/WINDOWS/system32;C:/WINDOWS;C:/WINDOWS/System32/Wbem;"
        FcgidInitialEnv SystemRoot "C:/Windows"
        FcgidInitialEnv SystemDrive "C:"
        FcgidInitialEnv TEMP "C:/WINDOWS/Temp"
        FcgidInitialEnv TMP "C:/WINDOWS/Temp"
        FcgidInitialEnv windir "C:/WINDOWS"
        FcgidIOTimeout 64
        FcgidConnectTimeout 32
        FcgidMaxRequestsPerProcess 500
        <Files ~ "\.php$">
          AddHandler fcgid-script .php
          FcgidWrapper "c:/php/php-cgi.exe" .php
        </Files>

    So the problem has to be related to memcache, because if I disable it, FastCGI seems to work fine. Any possible reasons for this? The memcache service is running; I can check it through Control Panel > Services.

    Read the article

  • AWS document on number of databases allowed on an Amazon RDS instance

    - by user35042
    At the Amazon RDS FAQ there is the question "What is a database instance (DB Instance)?". The entire answer (as of mid-June 2012) is:

        You can think of a DB Instance as a database environment in the cloud with the compute and storage
        resources you specify. You can create and delete DB Instances, define/refine infrastructure attributes
        of your DB Instance(s), and control access and security via the AWS Management Console, Amazon RDS APIs,
        and Command Line Tools. Multiple MySQL databases or SQL Server databases (up to 30) or Oracle database
        schemas can be created on a given DB Instance.

    The last part of that quote, "Multiple MySQL databases or SQL Server databases (up to 30) or Oracle database schemas", I interpret to mean that you can have an "unlimited" number of databases on an RDS MySQL or Oracle instance, but only 30 databases on an MS SQL Server instance ("unlimited" meaning not limited by the RDS infrastructure itself). This was asked in the Stack Overflow question "Does Amazon RDS support multiple databases per instance?", but the answer there quoted an older version of the FAQ. What I am looking for is an Amazon document that clarifies this question, or else someone with experience using Amazon RDS who can attest to what the situation actually is.

    Read the article

  • Remote Desktop *from* Windows 2008 R2 Server

    - by freefaller
    Summary: how do I create an RDC connection from a Windows 2008 server to another server?

    Our client will only allow us to connect to their server from a static IP address (which is fair enough), but unfortunately, as we're a very small company, we don't have one in the office. As a workaround, we had the connection working through our old Windows 2003 server (dynamic-cloud from 1and1). However, we have just rebuilt the server to run under Windows 2008 R2 (don't ask, but it was necessary), and now I simply cannot get the connection working. I have added an "Outbound Rule" to Windows Firewall with Advanced Security (TCP, all local ports, remote port 3389 - I have also tried the other way around). I have added a packet filter IP security rule with the same details. The 1and1 firewall rules (through their online control panel) allow 3389 TCP and UDP. But it simply does not connect (yes, the server is definitely on and able to accept connections), with the general error of:

        Remote Desktop can't connect to the remote computer for one of these reasons:
        1) Remote access to the server is not enabled
        2) The remote computer is turned off
        3) The remote computer is not available on the network

    Is there anything obvious I've missed, or something I can use to find out where the request is being blocked? The new server is using the exact same IP address as before, so I don't believe that would be an issue. Unless it's trying to use an IPv6 address rather than the old IPv4 address it used before? I apologise that I am not a network person by trade, but I know more than anybody else in my office!!
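
    To see where the connection dies, a quick sketch of checks from the 2008 R2 box - the address is a placeholder for the client's server. If telnet cannot open port 3389, the block is on the network/1and1 side rather than in the RDP client:

        rem the Telnet Client feature is not installed by default on 2008 R2
        dism /online /enable-feature /featurename:TelnetClient

        rem basic reachability, then the RDP port itself
        ping 198.51.100.20
        telnet 198.51.100.20 3389

        rem connect with an IPv4 literal to rule out IPv6 being preferred over IPv4
        mstsc /v:198.51.100.20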

    Read the article

  • Using git through cygwin on windows 8

    - by 9point6
    I've got a Windows 8 dev preview machine (not sure if it's relevant, but I never had this hassle on Windows 7) and I'm trying to clone a git repo from GitHub. The problem is that my ~/.ssh/id_rsa has 440 permissions and it needs to be 400. I've tried chmodding it, but any changes to the user permissions get reflected in the group permissions (i.e. chmod 600 results in 660, etc). This appears to be consistent for every file in the whole filesystem. I've tried messing with the ACLs, but to no avail (full control for my user and deny everyone resulted in 000). Here are a few outputs to help:

        $ git clone [removed]
        Cloning into [removed]...
        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        @ WARNING: UNPROTECTED PRIVATE KEY FILE!                  @
        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        Permissions 0660 for '/home/john/.ssh/id_rsa' are too open.
        It is required that your private key files are NOT accessible by others.
        This private key will be ignored.
        bad permissions: ignore key: /home/john/.ssh/id_rsa
        Permission denied (publickey).
        fatal: The remote end hung up unexpectedly

        $ ll ~/.ssh
        total 6
        -r--r----- 1 john None 1675 Nov 30 19:15 id_rsa
        -rw-rw---- 1 john None  411 Nov 30 19:15 id_rsa.pub
        -rw-rw-r-- 1 john None  407 Nov 30 18:43 known_hosts

        $ chmod -v 400 ~/.ssh/id_rsa
        mode of `/home/john/.ssh/id_rsa' changed from 0440 (r--r-----) to 0400 (r--------)

        $ ll ~/.ssh
        total 6
        -r--r----- 1 john None 1675 Nov 30 19:15 id_rsa
        -rw-rw---- 1 john None  411 Nov 30 19:15 id_rsa.pub
        -rw-rw-r-- 1 john None  407 Nov 30 18:43 known_hosts

        $ set | grep CYGWIN
        CYGWIN='sbmntsec ntsec server ntea'

    I realize I could use msysgit or something, but I'd prefer to be able to do everything from a single terminal. Edit: msysgit doesn't work either, for the same reasons.
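
    One workaround that often restores sane chmod behaviour under Cygwin is mounting the relevant paths with the noacl option, so Cygwin stops mapping POSIX modes onto Windows ACLs and just fakes sensible permissions. A sketch, assuming a default Cygwin layout with the home directory under C:\cygwin\home; back up /etc/fstab before changing it:

        # /etc/fstab (Cygwin) - remount without ACL handling
        none /cygdrive cygdrive binary,noacl,posix=0,user 0 0
        C:/cygwin/home /home ntfs binary,noacl 0 0

        # restart the Cygwin terminal, then retry
        chmod 600 ~/.ssh/id_rsa
        ls -l ~/.ssh/id_rsa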

    Read the article

  • Multi-IP address zimbra server DNS PTR records and spam

    - by David Fraser
    We have a mail server running Zimbra (ZCS 6.0.8). The server has 5 active public IP addresses in the same subnet (.226-.230). I currently have A records for each of these (host0.domain.com..host4.domain.com), with the machine's main name host.domain.com pointing to .226. Our host has ended up listed on the SORBS DUHL list (even though it's in a server farm). According to them, you can get removed quickly by checking that your host has an MX record, an A record, and a PTR record that points back to the hostname given in the MX record.

    I tried setting the PTR records so that each of these addresses resolved back to its A record (i.e. .228 had a PTR to host2.domain.com). However, I then got mail rejected by other servers, because when Postfix (under Zimbra's control) sends out mail, it uses the main hostname for the HELO, and there doesn't seem to be any way to override it. So the PTR records currently say host.domain.com for all 5 IP addresses. What's the correct way to handle this? Should I have an A record for the domain that points to all the IP addresses (for round-robin handling)? I'm nervous of changes that could cause problems, so I'm wondering what the standard way to handle a multiple-IP-address mail server is.
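
    One approach that avoids per-IP PTR juggling is to make Postfix always send outbound mail from the primary address, so the HELO name, the A record, and the PTR all line up on .226. A sketch only, assuming host.domain.com resolves to .226; note that on Zimbra, changes made directly with postconf can be regenerated by Zimbra's own config tooling, so check the Zimbra documentation for the corresponding zmlocalconfig/zmprov setting before relying on it:

        # bind outgoing SMTP to the primary IP (placeholder address)
        postconf -e 'smtp_bind_address = x.x.x.226'
        # make sure the HELO name matches the PTR of that address
        postconf -e 'myhostname = host.domain.com'
        postfix reload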

    Read the article

  • How to uninstall files and software from a Solid State Drive (SSD)

    - by jasondavis
    I am using an SSD just as my main operating system and software drive; I have a regular spinning disk for data files. I have read that files are not REALLY deleted from an SSD when you "delete" them. So here is my situation, and hopefully I can get some good advice. Let's say I have Windows 7 Pro x64 installed on my 80GB Intel SSD. I then have Adobe Photoshop CS4 installed, along with 50 other programs, on this SSD. I then decide I am done with Adobe Photoshop CS4 and want to remove it, and I want to install another version of Adobe Photoshop and a few other software titles. Would I just do the usual and add/remove the software from the Windows Control Panel? Or, if the software being removed has an "Uninstall" program to run, run that and uninstall the software? I realize that all of this WILL remove the software from Windows 7, but I want to know if there are additional steps that should be taken since it is a solid state drive (SSD).

    Read the article

  • SQL server agent job to execute SSIS package fails, package succeeds if run manually

    - by growse
    I've got an SSIS package installed on a SQL server (SQL Server 2012). It's fairly simple and just fetches data from a remote data source and adds it to a local table. The remote connection string uses SQL Server authentication, while the local connection uses Windows auth. The remote connection password is protected, and the package was imported with the protection level set to "Rely on server storage and roles for access control".

    If I run the SSIS package manually, it works. If I run it from the command line using dtexec, it works. If I use runas to switch to the domain account that the SQL Server Agent runs under, and then run the package using dtexec, it works. If I create a SQL Agent job with a single step to run the package, it fails, providing very little detail as to what's going on. I'm guessing it's not able to get the password to log into the remote SQL server, because it fails very quickly. Also, if I tick 'log to table' and view the resulting file, I get the following:

        Description: ADO NET Source has failed to acquire the connection {0D8F2CD4-A763-4AEB-8B52-B8FAE0621ED3} with the following error message: "Login failed for user 'username'.".

    If I try to add the password to the connection string manually under data sources in the job step dialog, it refuses to save it, always seeming to strip the 'password' part of the connection string. I thought SQL Server Agent jobs always ran under the context of the account the SQL Server Agent runs under. This account is a sysadmin on the local SQL server, and the package works using dtexec under that account, so why would it fail when run as an agent job?

    Read the article

  • Poor SSL performance with vsftpd

    - by petrus
    I'm trying to tweak vsftpd to achieve maximum performance for my usage: I have only one or two clients that connect to the server, file sizes are between ~15MB and 1GB, and a typical transfer batch represents between 1 and 2GB of data. For testing purposes, I'm using a tmpfs on both sides (thus eliminating any disk bottleneck) with a single 1GB file.

    When SSL is disabled, performance is good, with a transfer rate of ~120MB/s (reaching the limits of gigabit networking). With SSL enabled only for control traffic (and not data traffic), performance drops to about 112MB/s, which is still within acceptable limits. However, when SSL is enabled for the data flows, the transfer speed drops dramatically:

        6.7MB/s using 3DES & SHA (ssl_ciphers=DES-CBC3-SHA in vsftpd.conf)
        16MB/s  using DES & SHA  (ssl_ciphers=DES-CBC-SHA)

    I didn't test other ciphers, but from what I can see of the CPU usage during the transfer, it seems that vsftpd only uses a single CPU/core per client. While this may be fine for large FTP sites with hundreds of clients, I'd like to avoid this behaviour and use more resources on the server. On a side note, if you have any ideas regarding other OpenSSL ciphers...
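
    On the cipher side, a sketch of what is usually worth benchmarking first: AES is typically several times faster than 3DES in software, and far faster again on CPUs with AES-NI. The cipher names must exist in the server's OpenSSL build (openssl ciphers lists them), so treat these as examples:

        # vsftpd.conf - prefer a faster symmetric cipher for the data channel
        ssl_ciphers=AES128-SHA
        # alternatives to benchmark:
        #   ssl_ciphers=AES256-SHA

        # verify the name is known to the local OpenSSL:
        #   openssl ciphers -v 'AES128-SHA'

    Note also that each vsftpd session is a single process, so one transfer will only ever use one core; a cheaper cipher is usually the more practical lever than trying to spread one connection across CPUs.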

    Read the article

  • Setup staging with multiple SVN

    - by Kapil Sharma
    We are a startup setting up new environments for a product to be released soon. The planned server structure and release flow are shown in the image below. Ideally it has a local server (or staging server, shown in green) in the local office, without a public IP address, and a production server (red) at Amazon EC2. Both the local and the production server have their own SVN copy. Management here wants to update the production server from the production SVN without giving developers (including freelancers/contract employees) access to it. So for developers there is a local SVN on the local server. Another purpose of the local SVN is to keep a copy of the code on a local server that is under our direct control.

    There are some technical concerns, such as how code on the local server will be updated from the local SVN and committed to the production SVN, but the bigger question is: is this structure correct? The major requirement remains: don't give developers access to the production SVN. What other options are there to achieve that?

    Another minor question, if it fits here and the above structure is correct: is it possible for an SVN checkout to get updated from one SVN (the local SVN) but commit to the other (the production SVN)? If yes, how?

    Edit: An answer has been accepted, but for the bounty I'm still looking for an answer to: is this structure correct? What are its pros/cons? A technical solution is already provided by the accepted answer.
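
    On the minor question, one common way to keep the two repositories in step without handing developers any production credentials is a one-way mirror run by a trusted account using svnsync. A sketch only - the URLs are placeholders, the production repository must start out empty, and it needs a pre-revprop-change hook that allows the sync user:

        # one-time initialisation of the mirror relationship
        svnsync init https://prod.example.com/svn/product https://staging.example.local/svn/product

        # run after each approved release (or from cron) by the release manager
        svnsync sync https://prod.example.com/svn/product

    A working copy can only be re-pointed between repositories (svn relocate) when their UUIDs match, so a mirror like this is usually the cleaner fit for the "developers never touch production SVN" requirement.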

    Read the article

  • Setup git repository on gentoo server using gitosis & ssh

    - by ikso
    I installed git and gitosis as described in this guide. Here are the steps I took (server: Gentoo, client: Mac OS X):

    1) git install

        emerge dev-util/git

    2) gitosis install

        cd ~/src
        git clone git://eagain.net/gitosis.git
        cd gitosis
        python setup.py install

    3) added git user

        adduser --system --shell /bin/sh --comment 'git version control' --no-user-group --home-dir /home/git git

    In /etc/shadow now:

        git:!:14665::::::

    4) On the local computer (Mac OS X) (local login is ipx, server login is expert):

        ssh-keygen -t dsa

    got 2 files:

        ~/.ssh/id_dsa.pub
        ~/.ssh/id_dsa

    5) Copied id_dsa.pub onto the server as ~/.ssh/id_dsa.pub and added its content to ~/.ssh/authorized_keys:

        cp ~/.ssh/id_dsa.pub /tmp/id_dsa.pub
        sudo -H -u git gitosis-init < /tmp/id_rsa.pub
        sudo chmod 755 /home/git/repositories/gitosis-admin.git/hooks/post-update

    6) Added 2 params to /etc/ssh/sshd_config:

        RSAAuthentication yes
        PubkeyAuthentication yes

    Full sshd_config:

        Protocol 2
        RSAAuthentication yes
        PubkeyAuthentication yes
        PasswordAuthentication no
        UsePAM yes
        PrintMotd no
        PrintLastLog no
        Subsystem sftp /usr/lib64/misc/sftp-server

    7) Local settings in file ~/.ssh/config:

        Host myserver.com.ua
        User expert
        Port 22
        IdentityFile ~/.ssh/id_dsa

    8) Tested:

        ssh [email protected]

    Done!

    9) Next step. This is where I have a problem:

        git clone [email protected]:gitosis-admin.git
        cd gitosis-admin

    SSH asks for a password for user git. Why should ssh allow me to log in as user git? The git user doesn't have a password, and the ssh key I created is for the user expert. How should this work? Do I have to add some params to sshd_config?
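
    When the clone as git@ falls back to a password prompt, it usually means public-key authentication failed for the git account, and sshd then tries the next method. A few things worth checking, sketched below - note that step 5 generated id_dsa but piped /tmp/id_rsa.pub into gitosis-init, so it is worth confirming the right key actually landed in the git user's authorized_keys; the paths assume the layout from the question:

        # on the server: the key gitosis-init installed for the git user
        sudo ls -ld /home/git /home/git/.ssh
        sudo cat /home/git/.ssh/authorized_keys

        # permissions sshd's StrictModes insists on
        sudo chmod 755 /home/git
        sudo chmod 700 /home/git/.ssh
        sudo chmod 600 /home/git/.ssh/authorized_keys
        sudo chown -R git /home/git/.ssh

        # on the client: verbose output shows which key is offered and why it is rejected
        ssh -v -i ~/.ssh/id_dsa [email protected]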

    Read the article

< Previous Page | 690 691 692 693 694 695 696 697 698 699 700 701  | Next Page >