Search Results

Search found 86974 results on 3479 pages for 'visualsvn server'.


  • Reverse Proxy Wordpress with Lighttpd

    - by Jonah
    I am deploying an application and a WordPress installation on AWS. WordPress is set up under Apache on an EC2 instance, my application runs under Lighttpd, and I want to reverse-proxy WordPress through the application node. The proxying itself works fine; I just set up the reverse proxy in Lighttpd like so:

        $HTTP["url"] =~ "^/blog" {
            proxy.server = ( "/blog" => ( "blog" => ( "host" => "123.456.789.123", "port" => 80 ) ) )
        }
        url.rewrite-once = ( "^(.*?)$" => "/index.php/$1" )

    However, the issue is in the rewrite. When I enable rewriting, it catches requests before the reverse proxy and routes them to index.php on the application server. I need it to skip the rewrite when the request is headed for the blog. I tried various regex matches and other configurations, but I haven't been able to get rewriting and proxying to work at the same time. How can this be done?
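
    One avenue to sketch: keep the blog prefix out of the rewrite pattern itself, so the front-controller rule never matches /blog requests and the proxy sees them untouched. A minimal sketch, assuming Lighttpd's PCRE engine supports negative lookahead (it normally does):

        # front-controller rewrite for everything except the proxied blog
        url.rewrite-once = ( "^/(?!blog)(.*)$" => "/index.php/$1" )

    On newer Lighttpd 1.4.x releases, an alternative is to scope the rewrite inside a negated conditional ($HTTP["url"] !~ "^/blog" { ... }); older releases were unreliable about mod_rewrite inside URL conditionals, which is why the lookahead form is the safer sketch.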

    Read the article

  • How to redirect from one web directory to another with Apache

    - by RN
    Apache 2, Plesk 9.x. I have a website www.example.com and my blog is at www.example.com/blog. I have no content on www.example.com as of now, so I want all requests for example.com to be redirected to www.example.com/blog. How should I do that? Is this something I can do in Apache? I am using the GoDaddy DNS server. Not sure if it matters, but I have multiple domains hosted on the same server, and I am using Plesk to manage my virtual hosts.
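
    This is an Apache-level job rather than a DNS one, since DNS cannot express paths. A minimal sketch using mod_alias, in the site's vhost or .htaccess; the ^/$ anchor catches only the bare site root, so /blog itself is not re-redirected:

        RedirectMatch 301 ^/$ /blog/

    An equivalent mod_rewrite form, if mod_alias is unavailable, would be RewriteRule ^$ /blog/ [R=301,L] in the document root's .htaccess.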

    Read the article

  • IIS 7 with VeriSign certificate, invalid certificate returned

    - by bh213
    We have IIS7 on Windows 2008 and we installed a VeriSign certificate and bound it to https. The certificate seems fine. Chain:

        mysite.com - not expired
        VeriSign International Server CA - Class 3 - not expired
        VeriSign Class 3 Public Primary Certification Authority - not expired

    Yet when I use VeriSign's online validation (https://knowledge.verisign.com/support/ssl-certificates-support/index?page=content&id=AR1130#), I get that the second certificate is expired. This is what it reports; mysite itself is reported to be OK:

        --Issued To--
        Organization: VeriSign Trust Network
        Organizational Unit: www.verisign.com/CPS Incorp.by Ref. LIABILITY LTD.(c)97 VeriSign
        Organizational Unit 2: VeriSign International Server CA - Class 3
        Organizational Unit 3: VeriSign,, Inc.
        --Issued By--
        Organization: VeriSign,, Inc.
        Organizational Unit: Class 3 Public Primary Certification Authority
        Country: US
        Validity Start: Wed Apr 16 17:00:00 PDT 1997
        Validity End: Wed Jan 07 15:59:59 PST 2004

    Any ideas?
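
    A stale intermediate in the server's certificate store is the usual cause of this symptom: the on-disk chain looks fine, but IIS sends an old copy of the intermediate. One way to see exactly what the server hands out, from any machine with OpenSSL (a sketch, assuming the site answers on port 443):

        openssl s_client -connect mysite.com:443 -showcerts </dev/null
        # paste each certificate block into a file and decode it, e.g.:
        # openssl x509 -in cert1.pem -noout -subject -issuer -dates

    If the served intermediate shows the 2004 expiry, deleting the expired "Class 3" intermediate from the Windows store (Intermediate Certification Authorities), installing VeriSign's current one, and restarting IIS is the standard remedy.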

    Read the article

  • ACL in linux-based samba shares

    - by Odin
    If I mount a Samba share like this from a Linux server using ACLs on ext3...

        mount -t cifs //192.168.0.10/smbshare /mnt/smbshare -o user=root,password=secret

    ...and access the share as the Linux/SMB user smbuser. I have given smbuser write access to all directories, but when I write something to the share, the owner becomes root (the user that mounted the share). Is there any possibility to make smbuser the owner of the files/directories he creates, even if the share is mounted by the root user? This is supposed to work on a Linux terminal server, so many different users access the SMB share (mounted by root).
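
    A CIFS mount normally authenticates once, at mount time, so every write arrives at the server as the mount user. One hedged sketch is the multiuser mount option (available in reasonably recent kernels/mount.cifs), which makes the client resolve credentials per accessing user, so ownership follows the actual user:

        # root establishes the mount; individual users' sessions carry their own credentials
        mount -t cifs //192.168.0.10/smbshare /mnt/smbshare -o multiuser,sec=ntlmssp
        # each user then registers their own credentials, e.g.:
        # cifscreds add 192.168.0.10

    Where multiuser is not available, -o uid=smbuser,gid=smbgroup at least fixes ownership for a single-user share, though not per-user on a terminal server.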

    Read the article

  • Hyper-V networking... still not sure which way to go?

    - by CZhale
    We have our Hyper-V server up and running (Windows 2008 Enterprise SP2) and have started to create some of our VMs. The server has 4 NICs in total: 2 onboard Broadcom 1 Gb NICs and a dual-port Intel Pro 1 Gb PCI card. Right now, I have set up one Broadcom NIC as the Hyper-V host NIC and the other Broadcom NIC for the VMs. We are not using the Intel NICs. Should we be thinking about teaming? Link aggregation? I just want to achieve the best possible setup for the network, but I have read many arguments both for and against teaming the NICs. Thoughts?

    Read the article

  • Need help fixing permissions on a mounted drive

    - by Master
    I have tried a lot, but my problem is still not solved. I have a partition called Server, and inside it I have folders like Folder 1, Folder 2, Folder 3. I am mounting the drive on startup with the following fstab line, as suggested by some senior members, and it works, but with some problems:

        /dev/sdb1 /media/Server ntfs defaults,umask=006,fmask=000,dmask=007,uid=1000,gid=1001 0 0

    The problem is that with this line the permissions are applied to all folders alike (Folder 1, Folder 2, Folder 3). But I want only Folder 3 to be publicly readable and writable, while all the others should be private, with no one else having access to them. How can I achieve that?
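
    fmask/dmask values are mount-wide, so they cannot distinguish Folder 3 from the rest. A possible sketch, assuming the ntfs-3g driver is installed and its permissions option is available: let the driver enforce real per-file ownership and modes, then set each folder's mode once, as on any Linux filesystem:

        # /etc/fstab - ntfs-3g with standard permission checks instead of masks
        /dev/sdb1 /media/Server ntfs-3g defaults,permissions 0 0

        # after mounting, restrict the private folders and open the public one:
        sudo chmod 700 "/media/Server/Folder 1" "/media/Server/Folder 2"
        sudo chmod 777 "/media/Server/Folder 3"

    NTFS has no native POSIX modes, so ntfs-3g stores them itself; if Windows also needs to read this partition, that trade-off is worth checking first.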

    Read the article

  • Hiera + Puppet classes

    - by Amadan
    I'm trying to figure out Puppet (3.0) and how it relates to the built-in Hiera. So this is what I tried, an extremely simple example (I'll make a more complex hierarchy when I manage to get the simple one working):

        # /etc/puppet/hiera.yaml
        :backends:
          - yaml
        :hierarchy:
          - common
        :yaml:
          :datadir: /etc/puppet/hieradata

        # /etc/puppet/hieradata/common.yaml
        test::param: value

        # /etc/puppet/modules/test/manifests/init.pp
        class test ($param) {
          notice($param)
        }

        # /etc/puppet/manifests/site.pp
        include test

    If I apply it directly, it's fine:

        $ puppet apply /etc/puppet/manifests/site.pp
        Scope(Class[Test]): value

    If I go through the puppet master, it's not fine:

        $ puppet agent --test
        Could not retrieve catalog from remote server: Error 400 on SERVER: Must pass param to Class[Test] at /etc/puppet/manifests/site.pp:1 on node <nodename>

    What am I missing? EDIT: I just left the office, but a thought struck me: I should probably restart the puppet master so it can see the new hiera.yaml. I'll try that on Monday; in the meantime, if anyone spots a different problem, I'd appreciate it :)
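
    The restart theory is the likely one: the master reads hiera.yaml only at startup, while puppet apply re-reads it on every run, which explains why only the master path fails. A quick sketch to confirm, assuming a Debian/Ubuntu-style service name and the standalone hiera CLI:

        sudo service puppetmaster restart
        # reproduce the lookup the master performs:
        hiera -c /etc/puppet/hiera.yaml test::param
        # expected output: value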

    Read the article

  • ORA-01019 error only as an administrator

    - by Mike
    I'm having a strange problem. I've installed the Oracle 10g client on a terminal server running Windows Server 2008 R2. When I try to connect to Oracle using, say, Toad, I receive the error "ORA-01019 unable to allocate memory in the user side". But this only happens if I'm logged in as an administrator; if I connect as a normal user, I can connect without issue. Also, if a normal user is connected, I can then connect without a problem as an administrator. Any thoughts?

    Read the article

  • Ubuntu 11.04 and OpenLDAP - where is the config?

    - by Tom SKelley
    I've been asked to set up a multi-master LDAP environment on Ubuntu 11.04, instead of a single master server. I cloned the master server and recreated it into two VMs. I am trying to follow the instructions in the OpenLDAP documentation (http://www.openldap.org/doc/admin24/replication.html), which talk about modifying the cn=config tree within LDAP. The subdirectory tree appears to be there at /etc/ldap/slapd.d/, and a slapcat -b cn=config dumps a load of config information. But when I try to connect using a browser and the admin bind credentials:

        ldapsearch -D '<adminDN>' -w <password> -b 'cn=config'

    I get:

        # extended LDIF
        #
        # LDAPv3
        # base <> (default) with scope subtree
        # filter: (objectclass=*)
        # requesting: ALL
        #

        # search result
        search: 2
        result: 32 No such object

    I don't see the config context when I connect via an LDAP browser either. I'm sure I'm missing something, but I can't see what it is!
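
    On Ubuntu's packaged slapd, cn=config is a separate database with its own access rules; the data suffix's admin DN has no rights there, and by default the config database is reachable via SASL EXTERNAL over the ldapi socket as root. A sketch of the usual first step, assuming stock Ubuntu defaults:

        sudo ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config

    If that returns the tree, setting olcRootDN/olcRootPW on the config database then lets a normal simple bind (and GUI LDAP browsers) see it as well.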

    Read the article

  • Using FastCGI for PHP on Mac OS X

    - by DanieL
    I have Apache 2 running on a Mac OS X (10.6) machine, and it is currently serving PHP pages fine using php5_module, but I would like to configure fastcgi_module to handle the PHP pages. I have tried using the configuration found on www.fastcgi.com, but I get the following errors:

        [warn] FastCGI: (dynamic) server "/Path/to/script.php" has failed to remain running for 30 seconds given 3 attempts, its restart interval has been backed off to 600 seconds
        [warn] FastCGI: server "/usr/bin/php" has failed to remain running for 30 seconds given 3 attempts, its restart interval has been backed off to 600 seconds

    I'm thinking this is because PHP has not been compiled with FastCGI, but seeing as it came with Mac OS X, I'm not sure how to recompile it. Is this the problem? And if so, how do I recompile PHP with FastCGI?
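
    Before recompiling, it's worth checking whether a FastCGI-capable PHP binary is already present: the CGI/FastCGI SAPI normally ships as php-cgi, while /usr/bin/php is the CLI binary, which exits immediately and would produce exactly the "failed to remain running" warnings above. A quick check, assuming standard binary names:

        /usr/bin/php -v         # typically reports "(cli)"
        /usr/bin/php-cgi -v     # if present, should report "(cgi-fcgi)"

    If php-cgi exists, pointing the FastCgiServer directive at it instead of /usr/bin/php may be all that's needed; if not, a php-cgi build from MacPorts or Homebrew is easier than recompiling Apple's PHP.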

    Read the article

  • FreeBSD Nginx installation error

    - by Asaf Nevo
    I have a VPS with the Apache web server installed. I'm trying to install Nginx on it, since my new server will need to handle a large number of simultaneous connections. I used this install guide and did:

        cd /usr/ports/www/nginx
        make install clean

    However, I get this error:

        adding module in /usr/ports/www/nginx/work/arut-nginx-dav-ext-module-0e07a3e
        ./configure: error: no /usr/ports/www/nginx/work/arut-nginx-dav-ext-module-0e07a3e/config was found
        ===> Script "configure" failed unexpectedly.

    I'm pretty new to FreeBSD and am used to controlling my server using DirectAdmin. What shall I do next?
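
    The failure is in an optional third-party module (nginx-dav-ext), not in Nginx itself, so one pragmatic sketch is to rebuild with that option turned off; if the module is genuinely needed, refreshing the ports tree first often cures a stale-distfile mismatch like this:

        portsnap fetch update        # optional: bring the ports tree up to date
        cd /usr/ports/www/nginx
        make clean
        make config                  # in the options dialog, deselect the WebDAV ext (DAV_EXT) module
        make install clean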

    Read the article

  • Software RAID to hardware RAID, can it be done?

    - by gtaylor85
    Can it be done ... well. (For the record, I did not set this server up.) In my server there are 4 disks: 3 of them are in a software RAID 5, and 1 has the OS installed. I want to buy a RAID controller and 4 new HDs and set up a hardware RAID 5. If possible, I'd like to just image the current setup and use it to build the new one. My questions are: Can I image a 3-disk RAID 5 to a 4-disk RAID 5? Are there problems with this? What is considered best practice for the OS: keeping it on a separate disk as it currently is, or installing it on the RAID 5? Thank you. I can clarify anything; I'm not sure what other info might be pertinent.

    Read the article

  • php-fpm or nginx: bad gateway

    - by John Tate
    I'm suddenly getting a bad gateway error for a site. I didn't change the configuration for the site; I just added a new server config (I keep them under /etc/nginx/servers) and it stopped working. The new server works, and there is no conflict between the php-fpm listen addresses.

        server {
            listen 80;
            server_name obfuscated.onion;
            location = / {
                root /var/www/sites/obfuse;
                index index.php;
            }
            location / {
                root /var/www/sites/obfuse;
                index index.php;
                if (!-f $request_filename) {
                    rewrite ^(.*)$ /index.php?q=$1 last;
                    break;
                }
                if (!-d $request_filename) {
                    rewrite ^(.*)$ /index.php?q=$1 last;
                    break;
                }
            }
            error_page 404 /index.php;
            location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico)$ {
                root /var/www/sites/obfuse;
                access_log off;
                expires 30d;
            }
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/sites/obfuse$fastcgi_script_name;
                fastcgi_param QUERY_STRING $query_string;
                fastcgi_param REQUEST_METHOD $request_method;
                fastcgi_param CONTENT_TYPE $content_type;
                fastcgi_param CONTENT_LENGTH $content_length;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                include fastcgi_params;
            }
        }

    There is nothing unusual in php-fpm's log, even after raising the level to debug:

        [24-Jun-2013 09:10:37.357943] DEBUG: pid 6756, fpm_scoreboard_init_main(), line 40: got clock tick '100'
        [24-Jun-2013 09:10:37.358950] DEBUG: pid 6756, fpm_event_init_main(), line 333: event module is kqueue and 1 fds have been reserved
        [24-Jun-2013 09:10:37.358978] NOTICE: pid 6756, fpm_init(), line 83: fpm is running, pid 6756
        [24-Jun-2013 09:10:37.359009] DEBUG: pid 6756, main(), line 1832: Sending "1" (OK) to parent via fd=5
        [24-Jun-2013 09:10:37.389215] DEBUG: pid 6756, fpm_children_make(), line 421: [pool cyruserv] child 22288 started
        [24-Jun-2013 09:10:37.391343] DEBUG: pid 6756, fpm_children_make(), line 421: [pool cyruserv] child 21911 started
        [24-Jun-2013 09:10:37.391914] DEBUG: pid 6756, fpm_event_loop(), line 362: 5776 bytes have been reserved in SHM
        [24-Jun-2013 09:10:37.391941] NOTICE: pid 6756, fpm_event_loop(), line 363: ready to handle connections
        [24-Jun-2013 09:10:38.393048] DEBUG: pid 6756, fpm_pctl_perform_idle_server_maintenance(), line 379: [pool cyruserv] currently 0 active children, 2 spare children, 2 running children. Spawning rate 1
        [24-Jun-2013 09:10:39.403032] DEBUG: pid 6756, fpm_pctl_perform_idle_server_maintenance(), line 379: [pool cyruserv] currently 0 active children, 2 spare children, 2 running children. Spawning rate 1
        [24-Jun-2013 09:10:40.413070] DEBUG: pid 6756, fpm_pctl_perform_idle_server_maintenance(), line 379: [pool cyruserv] currently 0 active children, 2 spare children, 2 running children. Spawning rate 1

    I don't know why this has started happening, and the logs are not telling me anything. Please ask if you need more information than this; you probably will.
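
    Since php-fpm says it is ready, a first check is what nginx itself logged for the 502 and whether anything is really listening where fastcgi_pass points. A sketch of the usual triage (the kqueue line suggests a BSD host, hence sockstat; on Linux, ss -ltnp serves the same purpose):

        tail -n 50 /var/log/nginx/error.log     # the "connect() failed ... upstream" line names the real culprit
        sockstat -4 -l | grep 9000              # is php-fpm actually bound to 127.0.0.1:9000?

    If the pool that serves this site listens somewhere else (a socket, or another port) while the new server config claimed 9000, the upstream line in the error log will show it immediately.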

    Read the article

  • prevent search engines indexing depending on domain

    - by Javier
    We have a dedicated server with a hosting company, with a couple of dozen websites on it. It happens that the nameservers' IPs (e.g. ns1.domain.com, ns2.domain.com) coincide with some client websites, let's say webclient1.com and webclient2.com. The problem is that for certain Google searches, some results show up as ns1.domain.com/result instead of webclient1.com/result, which is pretty wrong and annoying for our clients. In fact, if you type ns1.domain.com or ns2.domain.com into the browser, it will load some client pages instead. Is there any way to prevent Google from indexing those results only when the robots come to check the ns domains? It may not be correct to ask this as well, but why is it happening? Is it the result of a bad server configuration? I'm pretty new to these matters, so thank you in advance for any help!
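
    This usually happens because the ns hostnames resolve to the shared IP and the web server's default (catch-all) vhost answers for any unrecognized Host header. One hedged sketch of a fix, assuming Apache and reusing the hostnames from the question: give the ns names their own vhost that permanently redirects, so crawlers consolidate onto the canonical domain:

        <VirtualHost *:80>
            ServerName ns1.domain.com
            ServerAlias ns2.domain.com
            # send crawlers and visitors to the canonical site
            Redirect 301 / http://webclient1.com/
        </VirtualHost>

    Serving a Disallow-all robots.txt from that vhost would also stop indexing, but the 301 additionally transfers any ranking the ns names have accrued back to the real site.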

    Read the article

  • Hot swapping for Linux web/database servers

    - by Art
    Is there a way to do the following under Linux?

    - There are two web servers, main and backup.
    - There are two database servers (Postgres), main and backup.
    - The web servers are in sync with each other, i.e. configuration/content/applications are the same.
    - The backup database is continuously synced up with the main database.
    - If either of the main servers goes down, it is replaced with the backup one on the fly.
    - When the main database server comes back up, all the data from the backup server is uploaded to it.

    Essentially, I need the hot swapping to happen automatically, with no or minimal user intervention if possible. The recovery procedure should preferably be automatic but can include some manual steps.
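
    A common building block for the "replaced on the fly" part is a floating virtual IP managed by VRRP, e.g. with keepalived: clients always talk to the VIP, and whichever node currently holds MASTER state answers. A minimal sketch (interface name, router id, and VIP are illustrative):

        # /etc/keepalived/keepalived.conf on the main web server
        # (the backup uses state BACKUP and a lower priority)
        vrrp_instance WEB_VIP {
            state MASTER
            interface eth0
            virtual_router_id 51
            priority 100
            virtual_ipaddress {
                192.168.1.100/24
            }
        }

    For the database side, Postgres replication (streaming replication on recent versions, or tools like Slony/pgpool on older ones) plus a similar VIP handles failover; the "upload the data back to the recovered main" step is the hard part and usually means re-seeding the old main as a fresh standby rather than merging changes.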

    Read the article

  • My website is infected, I restored a backup of the uninfected files, how long will it take to un-mark as dangerous?

    - by Cyclone
    My website www.sagamountain.com was recently infected by a malware distributor (or at least I think it may have been). I have removed all external content: Google ads, Firefly chat, etc. I uploaded a backup from a few weeks ago, when there was no issue, and patched the SQL injection hole. Now, how long will it take to unmark it as dangerous? Where can I contact Google? I am not sure if this is the right place to post this, but since it may have been a server issue, I may as well. Can sites inject base64 code via a virus on the whole server, or is it only via SQL injection? Thanks for the help; viruses freak me out. Is there an online virus scanner that can scan my page and tell me what is wrong?

    Read the article

  • Why is it a bad idea to use multiple NAT layers or is it?

    - by iamrohitbanga
    The computer network of an organization has a NAT with the 192.168/16 IP address range. There is a department with a server that has the IP address 192.168.x.y, and this server handles the hosts of this department behind another NAT with the IP address range 172.16/16. Thus there are 2 layers of NAT. Why don't they use subnetting instead? This would allow easy routing. I feel multiple layers of NAT can cause performance losses. Could you please help me compare the two design strategies?
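
    For contrast, a hedged sketch of what the subnetted alternative could look like: keep one NAT at the organization's edge and simply carve the private space, so internal traffic is routed rather than translated (the addresses below are purely illustrative):

        192.168.0.0/17     # organization-wide hosts (192.168.0.1 - 192.168.127.254)
        192.168.128.0/17   # department hosts, routed (not NATed) by the department server
        # the department box then acts as a plain router between the two subnets,
        # avoiding a second translation step, double connection tracking, and the
        # usual inbound-reachability headaches of nested NAT.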

    Read the article

  • Can't get DHCPd to assign IPs to unknown clients

    - by Jakobud
    I'm using Webmin to administer our DHCPd server, but I'm having a hard time getting it to assign IP addresses to unknown clients. The only way I can get it to assign an IP is to add the host to DHCPd so that it gets a static-lease IP. I thought "allow unknown-clients" was the key, but it still isn't assigning IPs to unknown clients. I have a pool set up so that unknown clients should get an IP between 10.20.0.200 and 10.20.0.249. Here is the config file. What am I missing?

        allow unknown-clients;

        # Primary DHCP server config
        authoritative;
        ddns-update-style none;

        failover peer "dhcp-failover" {
            primary;
            address 10.20.0.30;
            port 647;
            peer address 10.20.0.25;
            peer port 647;
            max-response-delay 60;
            max-unacked-updates 10;
            load balance max seconds 3;
            mclt 3600;
            split 128;
        }

        subnet 10.20.0.0 netmask 255.255.255.0 {
            allow unknown-clients;
            option subnet-mask 255.255.255.0;
            option broadcast-address 10.20.0.255;
            option routers 10.20.0.100;
            option domain-name "ourdomain.com";
            option domain-name-servers 192.168.10.20;
            default-lease-time 86400;
            max-lease-time 86400;
            option ntp-servers 192.168.10.20;
            option time-offset -25200;
            pool {
                allow unknown-clients;
                failover peer "dhcp-failover";
                max-lease-time 86400;
                range 10.20.0.200 10.20.0.249;
                deny dynamic bootp clients;
            }
            host Server-myserver {
                option host-name "whatever.ourdomain.com";
                hardware ethernet 00:89:D4:35:4F:13;
                fixed-address 10.20.0.23;
            }
        }
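
    One failover-specific gotcha worth checking: a pool bound to a failover peer will not hand out dynamic leases until the peers have synchronized, so if the secondary (10.20.0.25 here) is down or unreachable, unknown clients get nothing while fixed-address hosts keep working, which matches the symptom exactly. A quick way to see what dhcpd is deciding, assuming logs and leases in the usual locations:

        tail -f /var/log/messages | grep -i dhcpd                   # DHCPDISCOVER lines and failover state messages
        grep -A4 'failover peer' /var/lib/dhcpd/dhcpd.leases        # current failover state ("normal" is the serving state)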

    Read the article

  • Apache+FastCGI Timeout Problem

    - by Sadjad Fouladi
    Hi all. I've recently installed mod_fastcgi and Apache 2.2. I have a simple CGI script as below (test.fcgi):

        #!/bin/sh
        echo sadjad

    But when I invoke mysite.com/test.fcgi, I see an "Internal Server Error" message after a short period of time. The error.log file shows this error message:

        [Tue Jan 31 22:23:57 2006] [warn] FastCGI: (dynamic) server "~/public_html/oaduluth/dispatch.fcgi" has failed to remain running for 30 seconds given 3 attempts, its restart interval has been backed off to 600 seconds

    This is my .htaccess file:

        AddHandler fastcgi-script .fcgi
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ django.fcgi/$1 [QSA,L]

    I'm very confused; please help me! [Sorry for my poor English!]

    Read the article

  • Apache/2.2.20 (Ubuntu 11.10) gzip compression won't work on php pages, content is chunked

    - by FamousInteractive
    I'm running into a problem with a new production server that I'm transferring projects to. The HTML output of the PHP applications isn't compressed by the Apache mod_deflate module. Other resources, such as stylesheet and JavaScript files, and even HTML pages served with the same Content-Type (text/html) as the PHP output, are compressed! The projects use the following rules (from HTML5 Boilerplate) in the .htaccess:

        <IfModule mod_deflate.c>
            # Force deflate for mangled headers developer.yahoo.com/blogs/ydn/posts/2010/12/pushing-beyond-gzipping/
            <IfModule mod_setenvif.c>
                <IfModule mod_headers.c>
                    SetEnvIfNoCase ^(Accept-EncodXng|X-cept-Encoding|X{15}|~{15}|-{15})$ ^((gzip|deflate)\s*,?\s*)+|[X~-]{4,13}$ HAVE_Accept-Encoding
                    RequestHeader append Accept-Encoding "gzip,deflate" env=HAVE_Accept-Encoding
                </IfModule>
            </IfModule>

            # HTML, TXT, CSS, JavaScript, JSON, XML, HTC:
            <IfModule filter_module>
                FilterDeclare COMPRESS
                FilterProvider COMPRESS DEFLATE resp=Content-Type $text/html
                FilterProvider COMPRESS DEFLATE resp=Content-Type $text/css
                FilterProvider COMPRESS DEFLATE resp=Content-Type $text/plain
                FilterProvider COMPRESS DEFLATE resp=Content-Type $text/xml
                FilterProvider COMPRESS DEFLATE resp=Content-Type $text/x-component
                FilterProvider COMPRESS DEFLATE resp=Content-Type $application/javascript
                FilterProvider COMPRESS DEFLATE resp=Content-Type $application/json
                FilterProvider COMPRESS DEFLATE resp=Content-Type $application/xml
                FilterProvider COMPRESS DEFLATE resp=Content-Type $application/xhtml+xml
                FilterProvider COMPRESS DEFLATE resp=Content-Type $application/rss+xml
                FilterProvider COMPRESS DEFLATE resp=Content-Type $application/atom+xml
                FilterProvider COMPRESS DEFLATE resp=Content-Type $application/vnd.ms-fontobject
                FilterProvider COMPRESS DEFLATE resp=Content-Type $image/svg+xml
                FilterProvider COMPRESS DEFLATE resp=Content-Type $image/x-icon
                FilterProvider COMPRESS DEFLATE resp=Content-Type $application/x-font-ttf
                FilterProvider COMPRESS DEFLATE resp=Content-Type $font/opentype
                FilterChain COMPRESS
                FilterProtocol COMPRESS DEFLATE change=yes;byteranges=no
            </IfModule>
        </IfModule>

    We have a testing machine that runs the same Apache, OS, and PHP versions. On that machine the compression works just fine on the PHP output. I've checked and compared the Apache and PHP config files; they are the same as far as I can tell. I've tried several ways of outputting the content from PHP, using output buffering or just plain echoing; same thing, no compression. Example response headers for a PHP output:

        HTTP/1.1 200 OK
        Date: Wed, 25 Apr 2012 23:30:59 GMT
        Server: Apache
        Accept-Ranges: bytes
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: public
        Pragma: no-cache
        Vary: User-Agent
        Keep-Alive: timeout=5, max=98
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/html; charset=utf-8

    Example response headers for a CSS file:

        HTTP/1.1 200 OK
        Date: Wed, 25 Apr 2012 23:30:59 GMT
        Server: Apache
        Last-Modified: Mon, 04 Jul 2011 19:12:36 GMT
        Vary: Accept-Encoding,User-Agent
        Content-Encoding: gzip
        Cache-Control: public
        Expires: Fri, 25 May 2012 23:30:59 GMT
        Content-Length: 714
        Keep-Alive: timeout=5, max=100
        Connection: Keep-Alive
        Content-Type: text/css; charset=utf-8

    Does anyone have a clue, or has anyone experienced the same "problem"? Thanks!
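
    One way to narrow this down is to bypass the mod_filter chain entirely and let mod_deflate match by MIME type; if this compresses the PHP output, the problem lies in the FilterProvider setup rather than in PHP itself. A minimal sketch for the vhost or .htaccess:

        <IfModule mod_deflate.c>
            AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript
        </IfModule>

    The missing Vary: Accept-Encoding on the PHP response (the CSS response has it, the PHP one only has Vary: User-Agent) is also a hint that no deflate filter was ever attached to those requests on this box.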

    Read the article

  • Exim4 Disable local delivery?

    - by Robert Ross
    Hey all, I'm running Exim4 as my MTA, and it works great for sending emails to outside addresses, i.e. anything other than my hostname. When I send an email to my Gmail address via the command line (sendmail [email protected], etc.), it works fine. When I send an email to my website's domain, which is also the hostname of the server, I'm assuming it just does local delivery, which won't work, because my email is received by another server (Google Apps). So how do I disable local delivery in Exim4? dpkg-reconfigure exim4-config did not give any real results.
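
    On Debian's split configuration, the list of domains Exim treats as local lives in dc_other_hostnames; removing the domain from there makes Exim route it via DNS (MX lookup) like any remote domain. A sketch, assuming the stock Debian/Ubuntu layout:

        # /etc/exim4/update-exim4.conf.conf
        dc_other_hostnames=''            # do not treat the website's domain as local

        sudo update-exim4.conf           # regenerate the runtime configuration
        sudo invoke-rc.d exim4 restart

    The same value is what dpkg-reconfigure exim4-config calls "other destinations for which mail is accepted"; emptying it there has the same effect.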

    Read the article

  • IIS NLB Web Farm to front Single Tomcat Instance

    - by Brent Pabst
    I've got a single Tomcat 6 server that hosts a JSP app. We just spun up a new IIS 7.5 web farm to host our other internal apps. Currently the machine that hosts Tomcat also runs IIS 7 with the ISAPI filter loaded, to provide front-end handling for the JSP app. I'd like to move the IIS portion to the web farm, to consolidate our IIS presence and let the Tomcat server just serve and run Java and Tomcat. Has anyone done this? Is it even possible while ensuring session state is properly maintained? I had it up and running using the IIS Tomcat Connector (http://tomcatiis.riaforge.org/), but after a while the communication between the boxes slowed and pages would not load. In addition, it seemed like some of our authentication tickets were timing out. Thanks for any ideas or reference material!
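
    With a single Tomcat behind the farm, session state is not the usual sticky-session problem, since every farm node forwards to the same backend; the connector on each node just needs to point at the remote host over AJP. A hedged sketch of the isapi_redirect workers.properties on each farm member, assuming the JK-style connector and the default AJP port (the host name is hypothetical):

        # workers.properties
        worker.list=tomcat1
        worker.tomcat1.type=ajp13
        worker.tomcat1.host=tomcat.internal.example.com
        worker.tomcat1.port=8009
        # keep idle AJP connections from going stale across the network hop
        worker.tomcat1.connection_pool_timeout=600

    The slowdowns described are a classic symptom of stale AJP connections being silently dropped by a firewall; pairing connection_pool_timeout here with a matching connectionTimeout on Tomcat's AJP connector is the commonly recommended fix.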

    Read the article

  • Problem installing SSL on centos 5.2 with plesk

    - by Haluk
    Hello, I'm trying to install an SSL certificate on a dedicated CentOS 5.2 server. I followed the hosting company's instructions, but the SSL is not working. When I try to access my website using HTTPS, Firefox gives the following error:

        uses an invalid security certificate.
        The certificate expired on 3/13/2010 11:56 AM.
        (Error code: sec_error_expired_certificate)

    I'm not sure where the problem is. You should also know that this server has Plesk installed; even though I'm not using it, it could potentially be overriding my httpd.conf or ssl.conf somehow. Thanks!
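
    Two quick checks can separate "wrong certificate file" from "wrong vhost serving it". A sketch, assuming OpenSSL is available and the certificate's path is known (paths and hostname are illustrative):

        # what does the file on disk actually say?
        openssl x509 -in /path/to/certificate.crt -noout -subject -dates

        # what is Apache actually serving on 443?
        openssl s_client -connect www.example.com:443 </dev/null 2>/dev/null | openssl x509 -noout -subject -dates

    If the on-disk dates are valid but the served certificate is the expired one, Apache (likely a Plesk-managed vhost) is pointing at an old file; something like grep -r SSLCertificateFile /etc/httpd can locate which configuration wins.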

    Read the article

  • Windows service running under network credentials doesn't autostart

    - by David Alpert
    I have a Subversion server running as a resident service on a Windows XP Pro machine. That service needs to access a secure network fileshare, so I used the Services - Properties - Log On tab to tell the service to run as a user who has access to the target fileshare. That works fine until the machine restarts, when the service fails to autostart. I am able to start it manually by logging in, going back to the Log On tab, and reconfirming the explicit credentials. Do I have to manually start this service under alternate credentials every time the machine reboots? Is there something else I can do to make sure that my Subversion server service autostarts with proper access to authenticate against this network share?
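
    A frequent cause is the service starting before the network stack and the Workstation (SMB client) service are ready, so the logon with network credentials fails at boot. One hedged workaround is to declare an explicit dependency so the Service Control Manager starts it later; the service name "svnserve" below is an assumption, and sc query will list the real one:

        rem make the SVN service wait for the SMB client service (note: the space after depend= is required)
        sc config svnserve depend= LanmanWorkstation

    The other usual suspect is the account's "Log on as a service" right: it is granted automatically when set via the Log On tab, but a domain group policy can strip it again at reboot, producing exactly this works-until-restart pattern.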

    Read the article
