Search Results

Search found 26114 results on 1045 pages for 'zend config xml'.

  • One Apache VirtualHost entry overrides another?

    - by johnlai2004
    I can't tell why one Apache virtual host entry keeps overriding another. The following file:

        // filename: cbl
        <VirtualHost 74.207.237.23:80>
            ServerAdmin [email protected]
            ServerName completebeautylist.com
            ServerAlias www.completebeautylist.com
            DocumentRoot /srv/www/cbl/production/public_html/
            ErrorLog /srv/www/cbl/production/logs/error.log
            CustomLog /srv/www/cbl/production/logs/access.log combined
        </VirtualHost>

    keeps overriding this file:

        // filename: theccco.org
        <VirtualHost 74.207.237.23:80>
            SuexecUserGroup "#1010" "#1010"
            ServerName theccco.org
            ServerAlias www.theccco.org
            ServerAlias webmail.theccco.org
            ServerAlias admin.theccco.org
            DocumentRoot /home/theccco/public_html
            ErrorLog /var/log/virtualmin/theccco.org_error_log
            CustomLog /var/log/virtualmin/theccco.org_access_log combined
            ScriptAlias /cgi-bin/ /home/theccco/cgi-bin/
            DirectoryIndex index.html index.htm index.php index.php4 index.php5
            <Directory /home/theccco/public_html>
                Options -Indexes +IncludesNOEXEC +FollowSymLinks
                allow from all
                AllowOverride All
            </Directory>
            <Directory /home/theccco/cgi-bin>
                allow from all
            </Directory>
            RewriteEngine on
            RewriteCond %{HTTP_HOST} =webmail.theccco.org
            RewriteRule ^(.*) https://theccco.org:20000/ [R]
            RewriteCond %{HTTP_HOST} =admin.theccco.org
            RewriteRule ^(.*) https://theccco.org:10000/ [R]
            Alias /dav /home/theccco/public_html
            <Location /dav>
                DAV On
                AuthType Basic
                AuthName theccco.org
                AuthUserFile /home/theccco/etc/dav.digest.passwd
                Require valid-user
                ForceType text/plain
                Satisfy All
                RewriteEngine off
            </Location>
        </VirtualHost>

    I tried a2ensite, a2dissite, and reloading. I get this message:

        * Reloading web server config apache2
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        [Thu Apr 15 10:47:36 2010] [warn] NameVirtualHost 74.207.237.23:443 has no VirtualHosts

    Aside from that, I don't know what else could be wrong. Can anyone tell me what to do?
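
    A likely cause, sketched under the assumption this is Apache 2.2-era name-based hosting: both <VirtualHost> blocks share 74.207.237.23:80, and without a matching NameVirtualHost directive for that address Apache treats the first block it loads as an IP-based catch-all, so every request lands in the cbl file. Something like this in the main config would enable name-based selection:

        # hypothetical location (ports.conf or equivalent); it just has to
        # load before the vhost files
        NameVirtualHost 74.207.237.23:80
        Listen 80

    The warning in the reload output points the same way: a NameVirtualHost line apparently exists for port 443 but none for port 80.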

  • IIS 6 Windows Authentication in ASP.Net app fails

    - by Kjensen
    I am trying to install an ASP.NET app on an IIS 6 web server. The site requires the user to authenticate with Windows, and this works on several other apps on the same server. In IIS I have enabled anonymous access and Windows authentication. In web.config, authentication is set to:

        <authentication mode="Windows"/>

    and authorization to:

        <authorization>
            <allow roles="Users"/>
            <deny users="*"/>
        </authorization>

    i.e. allow all users in the role "Users" and deny everybody else. This is the approach that works with several other apps on the same server. If I run the site, I am prompted for a username and password. If I remove the line:

        <deny users="*"/>

    I can access the site and everything works, but the user credentials are not passed to the site (Page.User.Identity.Name returns a blank string in ASP.NET). The site has identical (inherited) file permissions to the other working sites on the server. The only difference in authentication/authorization between this site and the working ones is that this one runs ASP.NET 4 (but there are other working ASP.NET 4 sites on the server as well). What am I missing here? Where should I look?
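
    One detail worth checking, offered as a hedged guess rather than a confirmed diagnosis: with anonymous access still enabled, IIS happily serves requests without negotiating Windows credentials, which matches the blank Page.User.Identity.Name once <deny users="*"/> is removed. Denying the anonymous user explicitly forces the challenge while keeping the role check:

        <authorization>
            <deny users="?"/>          <!-- "?" = anonymous; forces Windows auth -->
            <allow roles="Users"/>
            <deny users="*"/>
        </authorization>

    If the prompt then loops, the usual suspects are the "Users" role name not resolving against the domain, or a Kerberos SPN issue that the working sites happen to sidestep via NTLM.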

  • How do I clear out the ssh-agent entries (on Mac OS X )?

    - by cwd
    I'm running Mac OS X, and it appears that after SSHing to several machines using identity files, my ssh-agent builds up a lot of identities/keys and then sometimes offers too many to a remote machine, causing it to kick me off before connecting:

        Received disconnect from 10.12.10.16: 2: Too many authentication failures for cwd

    It's pretty obvious what's happening, and this page talks about it in more detail:

        SSH servers only allow you to attempt to authenticate a certain number of times. Each failed password attempt, each failed pubkey/identity that is offered, etc., takes up one of these attempts. If you have a lot of SSH keys in your agent, you may find that an SSH server may kick you out before allowing you to attempt password authentication at all. If this is the case, there are a few different workarounds.

    Rebooting clears the agent and then everything works OK again. I can also add this line to my .ssh/config file to force it to use password authentication:

        PreferredAuthentications keyboard-interactive,password

    Anyhow, I saw the note on the page I referenced talking about deleting keys from the agent, but I'm not sure whether that applies on a Mac, since they appear to be cleared after reboot anyhow. Is there a simple way to clear out all keys in the ssh-agent (the same thing that happens at reboot)?
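
    For reference, ssh-add manages the running agent directly, so a reboot isn't needed; a minimal sketch:

        ssh-add -D                      # delete all identities from the agent
        ssh-add -l                      # list what's loaded, to confirm it's empty
        ssh-add -d ~/.ssh/some_key      # or drop just one identity (hypothetical path)

    Another way to stop the "too many keys offered" failure without touching the agent is IdentitiesOnly yes in ~/.ssh/config, which makes ssh offer only the key named by IdentityFile for that host. One wrinkle to expect on OS X: keys whose passphrases are stored in the Keychain may get re-added to the agent on demand after a -D.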

  • SQL Server 2008: Getting Login failed for user "Domain\User". Failed to open the explicitly specified database [CLIENT: IP.ADD.RR.ESS]

    - by GodEater
    This is a very similar issue to "SQL Server 2008 login problem with ASP.NET application: Failed to open the explicitly specified database", which unfortunately seems to have gone unsolved. My issue here is subtly different. Firstly, the account failing login is not 'NT AUTHORITY\NETWORK SERVICE' - it's an actual domain account. Secondly, there are two machines involved - I gathered from the first question that it was a single machine running both the IIS and SQL instances. The application trying to connect to the database is an ASP.NET one running on another server (if that makes any difference; I'm not sure it does). The ConnectionString being used in the web.config for the application is:

        data source=MySQLServer;initial catalog=MyDatabase;integrated security=sspi;

    and the application pool is set to NetworkService for its identity. So - in the web app, I get the following error:

        Cannot open database "MyDatabase" requested by the login. The login failed.
        Login failed for user 'MyDomain\WebServerMachineName$'

    In the SQL Server logs I see:

        Login failed for user 'MyDomain\WebServerMachineName$'. Reason: Failed to open the explicitly specified database. [CLIENT: Web.Server.IP.Address]

    Running this bit of SQL against the database in question:

        USE [MyDatabase]
        GO
        SELECT SDP.name AS [User Name], SDP.type_desc AS [User Type], UPPER(SDPS.name) AS [Database Role]
        FROM sys.database_principals SDP
        INNER JOIN sys.database_role_members SDRM ON SDP.principal_id = SDRM.member_principal_id
        INNER JOIN sys.database_principals SDPS ON SDRM.role_principal_id = SDPS.principal_id

    gets me this result:

        MyDomain\WebServerMachineName$    WINDOWS_USER    DB_DDLADMIN
        MyDomain\WebServerMachineName$    WINDOWS_USER    DB_DATAREADER
        MyDomain\WebServerMachineName$    WINDOWS_USER    DB_DATAWRITER

    which appears to me to indicate I've got the permissions right. Does anyone have any idea why it's not working, or how I can narrow the issue down some more?
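
    One hedged avenue to rule out: the roles above are database-level, but "Failed to open the explicitly specified database" is often a server-level or mapping problem - the user exists in the database without a matching server login, or the SIDs no longer line up (e.g. after a restore). A quick check, as a sketch:

        -- does a server-level login exist for the machine account?
        SELECT name, type_desc FROM sys.server_principals
        WHERE name = 'MyDomain\WebServerMachineName$';

        -- do the database user's and the server login's SIDs match?
        SELECT dp.name, dp.sid AS db_sid, sp.sid AS server_sid
        FROM sys.database_principals dp
        LEFT JOIN sys.server_principals sp ON sp.name = dp.name
        WHERE dp.name = 'MyDomain\WebServerMachineName$';

    If the server login is missing or the SIDs differ, recreating the mapping (CREATE LOGIN ... FROM WINDOWS, then ALTER USER ... WITH LOGIN = ...) is the usual fix.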

  • Selecting Interface for SSH Port Forwarding

    - by Eric Pruitt
    I have a server that we'll call hub-server.tld with three IP addresses: 100.200.130.121, 100.200.130.122, and 100.200.130.123. I have three different machines behind a firewall, and I want to use SSH to port forward one machine to each IP address. For example: machine-one should listen for SSH on port 22 on 100.200.130.121, while machine-two should do the same on 100.200.130.122, and so on for different services on ports that may be the same across all of the machines. The SSH man page has this listed:

        -R [bind_address:]port:host:hostport

    I have gateway ports enabled, but when using -R with a specific IP address, the server still listens on the port across all interfaces:

        machine-one:
        # ssh -NR 100.200.130.121:22:localhost:22 [email protected]

        hub-server.tld (listens for SSH on port 2222):
        # netstat -tan | grep LISTEN
        tcp    0    0 100.200.130.121:2222    0.0.0.0:*    LISTEN
        tcp    0    0 :::22                   :::*         LISTEN
        tcp    0    0 :::80                   :::*         LISTEN

    Is there a way to make SSH forward only connections on a specific IP address to machine-one, so I can listen on port 22 on the other IP addresses at the same time, or will I have to do something with iptables? Here are all the lines in my sshd config that are not comments/defaults:

        Port 2222
        Protocol 2
        SyslogFacility AUTHPRIV
        PasswordAuthentication yes
        ChallengeResponseAuthentication no
        GSSAPIAuthentication no
        GSSAPICleanupCredentials no
        UsePAM yes
        AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
        AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
        AcceptEnv LC_IDENTIFICATION LC_ALL
        AllowTcpForwarding yes
        GatewayPorts yes
        X11Forwarding yes
        ClientAliveInterval 30
        ClientAliveCountMax 1000000
        UseDNS no
        Subsystem sftp /usr/libexec/openssh/sftp-server
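
    The sshd_config excerpt itself suggests the answer, so here it is as a sketch: GatewayPorts accepts three values, and "yes" forces remote forwards onto the wildcard address regardless of the bind_address given to -R; "clientspecified" is the value that honors it.

        # sshd_config on hub-server.tld
        GatewayPorts clientspecified

    After restarting sshd, the same ssh -NR 100.200.130.121:22:localhost:22 command should bind only to 100.200.130.121, leaving port 22 free on the other addresses.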

  • Unable to access newly created web site in IIS 7.5

    - by Animesh
    Configuration: 32-bit Windows 7 development machine with IIS 7.5. I created a new web site in IIS to host only MVC sites, called MVCHOST. The physical path to this website is set to C:\inetpub\mvcroot. I created a new v4.0 pool called mvcpool for this purpose, and I have given Modify rights to the IIS_WPG, IIS_IUSRS, and ASPNET accounts. I created this web site with a host header "mvchost" and port 80, in the hope of browsing MVC sites in the following way:

        mvchost/mvcapp1
        mvchost/mvcapp2

    instead of:

        localhost/mvcapp1
        localhost/mvcapp2

    The only binding I set is the default one: http:*:80:mvchost. I have also copied the files iisstart.htm, web.config, welcome.png and the folder aspnet_client from wwwroot over to mvcroot. Now when I try to browse this site from IIS Manager, I get the following error:

        This webpage is not available

    If I leave out the host header and give it some port, say 99, I can access the website at localhost:99. What am I missing here? Why am I unable to access the web site at http://mvchost/?
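
    A hedged observation: a host-header binding only works if the name actually resolves to the machine, and "mvchost" isn't a real DNS name on a dev box. The port-99 test fits this pattern exactly (no host header: works; host header: fails). The usual fix is a hosts-file entry:

        # C:\Windows\System32\drivers\etc\hosts
        127.0.0.1    mvchost

    After that, http://mvchost/mvcapp1 should reach the MVCHOST site, since the binding http:*:80:mvchost matches on the Host header the browser sends.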

  • Apache: redirect to https before AUTH for server-status

    - by Putnik
    I want to force HTTPS and basic auth for the server-status output (mod_status). If I enable auth and the user asks for http://site/server-status, Apache first asks for the password, then redirects to httpS, then asks for the password again. This question is similar to "Apache - Redirect to https before AUTH" and "force https with apache before .htpasswd", but I cannot get it to work because we are dealing not with a generic folder but with a Location structure. My config (shortened) is as follows:

        <Location /server-status>
            SSLRequireSSL
            <IfModule mod_rewrite.c>
                RewriteEngine on
                RewriteBase /server-status
                RewriteCond %{HTTPS} off
                RewriteCond %{SERVER_PORT} 80
                RewriteRule ^ - [E=nossl]
                RewriteRule (.*) https://site/server-status} [R=301,L]
            </IfModule>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from localhost ip6-localhost
            Allow from 1.2.3.0/24
            Allow from env=nossl
            AuthUserFile /etc/httpd/status-htpasswd
            AuthName "Password protected"
            AuthType Basic
            Require valid-user
            Satisfy any
        </Location>

    I assume Allow from env=nossl should allow everyone with RewriteCond %{HTTPS} off and server port 80, then force the redirect, but it does not work. Please note, I do not want to force SSL for the whole site, only for /server-status. If it matters, the server hosts several sites. What am I doing wrong? Thank you.
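
    A sketch of a simpler arrangement, assuming the sites are split into per-port virtual hosts: do the redirect in the plain-HTTP vhost, where no auth is configured at all, and keep the password-protected <Location> only in the SSL vhost. The browser is then never challenged before the redirect.

        <VirtualHost *:80>
            # no auth here; just bounce /server-status to HTTPS
            RewriteEngine on
            RewriteRule ^/server-status https://site/server-status [R=301,L]
        </VirtualHost>

        <VirtualHost *:443>
            <Location /server-status>
                SSLRequireSSL
                SetHandler server-status
                AuthUserFile /etc/httpd/status-htpasswd
                AuthName "Password protected"
                AuthType Basic
                Require valid-user
            </Location>
        </VirtualHost>

    Incidentally, the quoted rule's target, https://site/server-status} , carries a stray closing brace, which would make even a successful redirect land on a nonexistent URL.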

  • Apache address-based access control

    - by stijn
    I have an Apache instance serving different locations, e.g.:

        https://host.com/jira
        https://host.com/svn
        https://host.com/websvn
        https://host.com/phpmyadmin

    Each of these has access control rules based on IP address/hostname. Some of them use the same configuration though, so I have to repeat the same rules each time:

        Order Deny,Allow
        Deny from All
        Allow from 10.35 myhome.com mycollegueshome.com

    Is there a way to make these reusable, so that I don't have to change each instance every time something changes? I.e., can I write this once, then use it for a couple of locations? Using SetEnvIf maybe? It would be nice if I could do something like this pseudo-config:

        <myaccessrule>
            Order Deny,Allow
            Deny from All
            Allow from 10.35 myhome.com mycollegueshome.com
        </myaccessrule>

        <Proxy /jira*>
            AccessRule = myaccessrule
        </Proxy>
        <Location /svn>
            AccessRule = myaccessrule
        </Location>
        <Directory /websvn>
            AccessRule = myaccessrule
        </Directory>
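
    Two stock ways to get exactly this, sketched rather than tested here. The plain-Apache one is Include: put the shared rules in a file and pull it in wherever they're needed.

        # /etc/apache2/shared-access.conf (hypothetical path)
        Order Deny,Allow
        Deny from All
        Allow from 10.35 myhome.com mycollegueshome.com

        <Location /svn>
            Include /etc/apache2/shared-access.conf
        </Location>

    The closer match to the pseudo-config is mod_macro, which adds <Macro>/Use directives:

        <Macro myaccessrule>
            Order Deny,Allow
            Deny from All
            Allow from 10.35 myhome.com mycollegueshome.com
        </Macro>

        <Directory /websvn>
            Use myaccessrule
        </Directory>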

  • iptables to block VPN-traffic if not through tun0

    - by dacrow
    I have a dedicated web server running Debian 6 and some Apache, Tomcat, Asterisk and mail stuff. Now we needed to add VPN support for a special program, so we installed OpenVPN and registered with a VPN provider. The connection works well and we have a virtual tun0 interface for tunneling. To achieve the goal of tunneling only a single program through the VPN, we start the program with:

        sudo -u username -g groupname command

    and added an iptables rule to mark all traffic coming from groupname:

        iptables -t mangle -A OUTPUT -m owner --gid-owner groupname -j MARK --set-mark 42

    Afterwards we tell iptables to do some SNAT and tell ip route to use a special routing table for marked packets. Problem: if the VPN fails, there is a chance that the to-be-tunneled program communicates over the normal eth0 interface. Desired solution: marked traffic should never be allowed to go directly through eth0; it has to go through tun0 first. I tried the following commands, which didn't work:

        iptables -A OUTPUT -m owner --gid-owner groupname ! -o tun0 -j REJECT
        iptables -A OUTPUT -m owner --gid-owner groupname -o eth0 -j REJECT

    It might be that the above rules didn't work because the packets are first marked, then put into tun0, and then transmitted by eth0 while they are still marked. I don't know how to de-mark them after tun0, or how to tell iptables that all marked packets may pass eth0 if they went through tun0 first or are headed to my VPN provider's gateway. Does anyone have an idea for a solution? Some config info:

        # iptables -nL -v --line-numbers -t mangle
        Chain OUTPUT (policy ACCEPT 11M packets, 9798M bytes)
        num   pkts   bytes  target    prot opt in  out  source     destination
        1     591K   50M    MARK      all  --  *   *    0.0.0.0/0  0.0.0.0/0   owner GID match 1005 MARK set 0x2a
        2     82812  6938K  CONNMARK  all  --  *   *    0.0.0.0/0  0.0.0.0/0   owner GID match 1005 CONNMARK save

        # iptables -nL -v --line-numbers -t nat
        Chain POSTROUTING (policy ACCEPT 393 packets, 23908 bytes)
        num   pkts   bytes  target  prot opt in  out   source     destination
        1     15     1052   SNAT    all  --  *   tun0  0.0.0.0/0  0.0.0.0/0   mark match 0x2a to:VPN_IP

        # ip rule add from all fwmark 42 lookup 42
        # ip route show table 42
        default via VPN_IP dev tun0
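
    A hedged sketch of one way out: match on the mark rather than the owner. The program's own packets carry mark 0x2a and, while the VPN is up, policy routing sends them out tun0; OpenVPN's encapsulated UDP leaves eth0 under the daemon's credentials and unmarked, so it is unaffected. A single filter rule then closes the leak when tun0 disappears:

        # drop any still-marked packet that tries to leave via eth0
        iptables -A OUTPUT -o eth0 -m mark --mark 42 -j REJECT

    The assumption doing the work here is that marked packets only legitimately exit via tun0; if the table-42 default route is gone (VPN down), they fall back to eth0 and hit this rule instead.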

  • VMware NAS/iSCSI recommendations - smallish organization

    - by Bubnoff
    I have two VMware servers - ESX + ESXi - and two backup NAS boxes. The current NAS boxes are low-cost and unsuitable for running VMs from: they support NFS only and are slow. My plan is a dedicated iSCSI/NAS box for storing and running VMs, with the two low-cost boxes kept for backup. I'm looking for advice on two things really:

    1. Recommendations on VMware architecture/design for a smaller organization: fewer than 20 virtual machines, 2 servers + 2 x 1.5 TB backup NAS boxes.
    2. A good NAS/iSCSI box, with your recommendation on RAID config - I would go with RAID 6 or better.

    I'm trying to design an installation that is both fast and reliable/redundant. If you have any experiences to share, or your current configuration including network design (switches, fiber, etc.), I will be enormously thankful. I'm not married to this idea, so if you have a design not using iSCSI NAS boxes, let er rip. Cost? Can we stay around $5,000 (on top of the already stated components)? Links to info are welcome too. Thanks for reading! Bubnoff

  • Need help trying to diagnose Symmetrix SAN performance issues

    - by arcain
    I am helping to benchmark hardware for a new SQL Server instance, and the volume presented to the OS for the data files is carved from a set of spindles on a Symmetrix SAN. The server has yet to have SQL Server installed, so the only activity on the box is our benchmarking. Now, our storage engineers say that this volume and its resources are dedicated to our new server (I don't have access to see the actual SAN config); however, the performance benchmarks are troubling. For example, the numbers look good until suddenly, and randomly, we see wait times of 100 seconds in our IO benchmarking tool and disk queue lengths of 255 in perfmon. This SAN has an 8 GB cache, and there are other applications besides ours that use the SAN. I'm wondering whether (even though the spindles for our volumes should be dedicated to us) the cache is getting hammered during the performance testing, or whether the spindles our volumes sit on aren't really dedicated to us. We're not getting much traction from our storage engineers in tracking down the problem, so if anybody has experience diagnosing an issue like this and would like to share insights and troubleshooting methodologies, I'd appreciate it.

  • How could application installations/configurations be easier in Linux?

    - by ajsie
    Although you can do anything in Linux, it tends to require a lot of tweaking in config files and reading a lot of manuals/tutorials before you can have it running your way. I know it gets a lot easier with time, and the apt-get installations on Ubuntu/Debian are heading the right way. But how can Linux be more user-friendly for us in the future? I thought more could be automated, as in an IDE environment: e.g. typing svn would show all the available commands, with a description of each as you move between them with your keyboard. That would be great, but it's just one example. Another is navigating between folders in the terminal: right now you have to type a lot just to jump from one folder to another. It would be great with some more automation here too. I know these extra features would slow down the server, but it's 2010 now; they are not that heavy for the CPU, and they make a server more user-friendly and encourage its maintenance rather than frightening you off. What do you think? Should/could we have a more user-friendly Linux environment on servers - is there something that has annoyed you a lot? A lot of things are done the Unix way, but maybe we should reinvent the wheel in some areas, because apparently it is still repetitive and difficult to do easy tasks. It should be easier, I think.

  • Git: I keep having to merge my new branch

    - by mnml
    Hi, I have created a new branch and I'm working on it with other devs, but for some reason, whenever I want to push my new commits, I always have to run git merge origin/mynewbranch first. Otherwise the push is rejected:

        ! [rejected]        mynewbranch -> mynewbranch (non-fast-forward)
        error: failed to push some refs to '[email protected]/repo.git'
        To prevent you from losing history, non-fast-forward updates were rejected
        Merge the remote changes before pushing again.  See the 'Note about
        fast-forwards' section of 'git push --help' for details.

    and a plain git pull complains:

        You asked me to pull without telling me which branch you want to merge
        with, and 'branch.mynewbranch.merge' in your configuration file does not
        tell me, either. Please specify which branch you want to use on the
        command line and try again (e.g. 'git pull <repository> <refspec>').
        See git-pull(1) for details.

        If you often merge with the same branch, you may want to use something
        like the following in your configuration file:

            [branch "mynewbranch"]
            remote = <nickname>
            merge = <remote-ref>

            [remote "<nickname>"]
            url = <url>
            fetch = <refspec>

        See git-config(1) for details.

    Why is it not automatic? Thanks
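
    Two separate things are going on, and the second message spells out the fix for one of them: the local branch has no upstream configured, so git pull doesn't know what to merge. A sketch of the usual setup:

        # easiest: set the upstream while pushing (git 1.7+)
        git push -u origin mynewbranch

        # or configure it explicitly for an existing branch
        git config branch.mynewbranch.remote origin
        git config branch.mynewbranch.merge refs/heads/mynewbranch

    The non-fast-forward rejection itself is expected whenever the other devs have pushed commits you don't have yet; with the upstream set, a plain git pull (merge or rebase, per your team's preference) before git push is the normal cycle, not a misconfiguration.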

  • Preventing DDoS/SYN attacks (as far as possible)

    - by Godius
    Recently my CentOS machine has been under many attacks. I run MRTG, and the TCP connections graph shoots up like crazy when an attack is going on; it results in the machine becoming inaccessible. This is my current /etc/sysctl.conf config:

        # Kernel sysctl configuration file for Red Hat Linux
        #
        # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
        # sysctl.conf(5) for more details.

        # Controls IP packet forwarding
        net.ipv4.ip_forward = 0

        # Controls source route verification
        net.ipv4.conf.default.rp_filter = 1

        # Do not accept source routing
        net.ipv4.conf.default.accept_source_route = 0

        # Controls the System Request debugging functionality of the kernel
        kernel.sysrq = 1

        # Controls whether core dumps will append the PID to the core filename
        # Useful for debugging multi-threaded applications
        kernel.core_uses_pid = 1

        # Controls the use of TCP syncookies
        net.ipv4.tcp_syncookies = 1

        # Controls the maximum size of a message, in bytes
        kernel.msgmnb = 65536

        # Controls the default maximum size of a message queue
        kernel.msgmax = 65536

        # Controls the maximum shared segment size, in bytes
        kernel.shmmax = 68719476736

        # Controls the maximum number of shared memory segments, in pages
        kernel.shmall = 4294967296

        net.ipv4.conf.all.rp_filter = 1
        net.ipv4.tcp_syncookies = 1
        net.ipv4.icmp_echo_ignore_broadcasts = 1
        net.ipv4.conf.all.accept_redirects = 0
        net.ipv6.conf.all.accept_redirects = 0
        net.ipv4.conf.all.send_redirects = 0
        net.ipv4.conf.all.accept_source_route = 0
        net.ipv4.conf.all.rp_filter = 1
        net.ipv4.tcp_max_syn_backlog = 1280

    Furthermore, my iptables file (/etc/sysconfig/iptables) has only this setup:

        # Generated by iptables-save v1.3.5 on Mon Feb 14 07:07:31 2011
        *filter
        :INPUT ACCEPT [1139630:287215872]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [1222418:555508541]

    together with about 800 IPs blocked via lines like:

        -A INPUT -s 82.77.119.47 -j DROP

    These have all been added by my hoster when I've emailed them about attacks in the past. I'm no expert, but I'm not sure this is ideal. My question: what are some good things to add to the iptables file, and possibly other files, that would make it harder for attackers to take down my machine, without locking out non-attacking users? Thanks in advance!
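
    With syncookies already on, the next layer people usually add is connection-rate limiting in iptables, so a single source can't open connections fast enough to exhaust the box. A hedged sketch (the thresholds are guesses to tune against real traffic, not recommendations):

        # track new TCP connections per source; drop sources exceeding
        # 20 new SYNs per second (hypothetical threshold)
        iptables -A INPUT -p tcp --syn -m recent --name synwatch --set
        iptables -A INPUT -p tcp --syn -m recent --name synwatch \
                 --update --seconds 1 --hitcount 20 -j DROP

        # cap concurrent connections per source address
        iptables -A INPUT -p tcp --syn -m connlimit --connlimit-above 50 -j REJECT

    Both the recent and connlimit matches ship with stock CentOS kernels of that era. Note that none of this helps against a bandwidth-saturating DDoS, which can only be absorbed upstream by the hoster.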

  • IIS crashes with unhandled exception in ASP.NET

    - by SnowCrash
    We had an issue recently where an unhandled exception in an ASP.NET C# application brought down IIS and all the application pools it was hosting. IIS Manager was unable to restart or stop/start the service, and I was unable to start IIS again after killing w3wp.exe in Task Manager. A system reboot restored IIS to a running state; as a primarily Linux admin, I generally consider an unplanned system reboot to resolve a software error to be an act of high heresy. Is there a way to "harden" IIS so that a faulting application does not affect anything but the request that exposes the fault? Some details on the server and the application fault:

        IIS: 7.5
        .NET: 4.0
        Windows Server 2008 R2

    The fault occurred on a call to System.Net.Dns.Resolve() with a URL pointing to a non-existent domain as the argument. (I'm aware that this method is deprecated, but the point that a page code issue shouldn't bring down the server still stands.) The exception generated was SocketException, and the faulting module according to Event Viewer was KERNELBASE.dll. The issue was resolved by wrapping the call in a try-catch, logging the exception, and displaying some generic content on the page. I'm hoping that I missed something in the IIS config that would switch it to "production" mode or something.
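
    One possibly relevant knob, offered with hedging since the crash signature here (all pools down, service unrestartable) goes beyond it: since .NET 2.0, an unhandled exception on any thread tears down the worker process by design. The legacy policy can be restored process-wide in Aspnet.config (the file beside the framework, not web.config):

        <!-- a sketch: swallow background-thread exceptions instead of
             terminating w3wp; pair with logging, as it hides real faults -->
        <configuration>
          <runtime>
            <legacyUnhandledExceptionPolicy enabled="1" />
          </runtime>
        </configuration>

    Beyond that, per-site isolation is mostly an app-pool design question: one pool per application, so a recycling or crashing pool takes out only its own site.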

  • How to limit SMTP delivery to hourly batches

    - by Jeremy W
    Moved over from Stack Overflow - sorry if you saw it there first. In an effort to keep us from being labeled spammers by major ISPs (in addition to SPF records, privacy policies, CAN-SPAM compliance and the like), I wanted to limit the amount of mail we send out per hour. Is this possible with the Windows Server 2003 SMTP server? I was looking at the outbound connection properties in the SMTP virtual server config screens, but it's just not clear whether tinkering with those settings will do what I want. In a nutshell, I'd love for mail sent by this server to queue up and go out in batches of, for example, 5,000 messages every 10 minutes or so. Mail is being sent via ASP.NET. Also, I wouldn't be sending 1 million a day - probably 30,000 tops, and only a few times a month. I'm just trying to avoid a tidal wave of 30k going out in 1 minute and setting off every network spam monitoring alarm in North America. I know I could do it with a combination console app / scheduled job; my question is whether there is an easier way to accomplish this with the virtual SMTP server settings on Windows Server 2003. Is this possible?
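
    If the built-in throttles (which limit connections and messages per connection, not messages per hour) turn out not to fit, a middle ground short of a full console app is to exploit the IIS SMTP Pickup directory: anything dropped there is sent immediately, so holding messages elsewhere and releasing them on a schedule gives hourly batching. A sketch, with paths and batch size as assumptions:

        # scheduled task, every 10 minutes: release up to 5,000 queued
        # .eml files into the SMTP Pickup directory (hypothetical paths)
        Get-ChildItem C:\mailhold\*.eml |
            Select-Object -First 5000 |
            Move-Item -Destination C:\Inetpub\mailroot\Pickup

    The ASP.NET side would then write .eml files to C:\mailhold (System.Net.Mail's SmtpDeliveryMethod.SpecifiedPickupDirectory does exactly this) instead of handing mail straight to the virtual server.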

  • Google account gives ERR_SSL_BAD_RECORD_MAC_ALERT errors

    - by Kjensen
    A couple of days ago, I started being unable to connect to accounts.google.com, which handles logins to all kinds of Google services. I get this error in Chrome:

        Error 126 (net::ERR_SSL_BAD_RECORD_MAC_ALERT): Unknown error.

    In IE I get a similar security error, which I assume is the same thing, just wrapped up differently. I run Windows 8 RTM. On the SAME machine, using the same network card, in a VMware Workstation image running Windows 7, I can connect perfectly. On another of my machines on the same network, I am also still able to connect with no problem. My girlfriend uses the same network and has also complained a couple of times about this error (Google Calendar), but this is anecdotal, since her technical troubleshooting abilities stop at "xxxx is broken". Her machine runs Windows 7. ;) I have rebooted, cleared cookies, do not run any antivirus/firewall, and have not changed my network config. The first 3-4 days after installing Windows 8, I did not have any problems. I have also searched, and found a hint about enabling SSL 2.0 in connection settings, which did not help. Does anybody know something about this error and what I can do to fix it?
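
    A hedged line of investigation rather than a known fix: a bad record MAC on one OS install, while a VM sharing the same NIC works fine, points at something mangling TLS records between the Win8 driver stack and the wire - typically a buggy RTM-era NIC driver or its offload features. Updating the driver is the first step; failing that, offloads can be switched off for testing:

        netsh int tcp set global chimney=disabled
        netsh int tcp set global rss=disabled
        netsh int tcp set global autotuninglevel=disabled

    (All three are standard netsh options; re-enable whatever turns out to be innocent.) Enabling SSL 2.0, as the found hint suggested, is best left off regardless - it's obsolete and Google's servers don't negotiate it.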

  • Tunneling a public IP to a remote machine

    - by Jim Paris
    I have a Linux server A with a block of 5 public IP addresses, 8.8.8.122/29. Currently, 8.8.8.122 is assigned to eth0, and 8.8.8.123 is assigned to eth0:1. I have another Linux machine B in a remote location, behind NAT. I would like to set up a tunnel between the two so that B can use the IP address 8.8.8.123 as its primary IP address. OpenVPN is probably the answer, but I can't quite figure out how to set things up (topology subnet or topology p2p might be appropriate, or should I be using Ethernet bridging?). Security and encryption are not a big concern at this point, so GRE would be fine too - machine B will be coming from a known IP address and can be authenticated based on that. How can I do this? Can anyone suggest an OpenVPN config, or some other approach, that could work in this situation? Ideally, it would also be able to handle multiple clients (e.g. share all four of the spare IPs with other machines), without letting those clients use IPs to which they are not entitled.
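
    For the single-client case, a static-key point-to-point OpenVPN plus proxy ARP is the classic shape. Everything below is a sketch with assumed paths, and 8.8.8.123 must first be removed from eth0:1 so that A stops answering for it locally.

        # server A: /etc/openvpn/b.conf
        dev tun0
        ifconfig 8.8.8.122 8.8.8.123     # local / remote tunnel endpoints
        secret /etc/openvpn/static.key

        # and on A, steer traffic for .123 into the tunnel:
        #   ip route add 8.8.8.123/32 dev tun0
        #   sysctl net.ipv4.conf.eth0.proxy_arp=1

        # machine B: /etc/openvpn/a.conf
        remote <server A public IP>
        dev tun0
        ifconfig 8.8.8.123 8.8.8.122
        secret /etc/openvpn/static.key
        redirect-gateway def1            # if B should default-route via the tunnel

    For several clients each owning one spare IP, the same idea scales better as a normal TLS server with topology subnet, a client-config-dir entry per client pinning its address via ifconfig-push, and one /32 route per assigned IP - that pinning is also what stops a client from grabbing an IP it isn't entitled to.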

  • Missing access log for virtual host on Plesk

    - by Cummander Checkov
    For some reason I don't understand, I cannot find the access log for a virtual host / domain I created in Plesk a few months back. I noticed this when running:

        /usr/local/psa/admin/sbin/statistics

    The host in question is being scanned:

        Main HTML page is 'awstats.<hostname_masked>-http.html'.
        Create/Update database for config "/opt/psa/etc/awstats/awstats.<hostname_masked>.com-https.conf" by AWStats version 6.95 (build 1.943)
        From data in log file "-"...
        Phase 1 : First bypass old records, searching new record...
        Searching new records from beginning of log file...
        Jumped lines in file: 0
        Parsed lines in file: 0
        Found 0 dropped records,
        Found 0 corrupted records,
        Found 0 old records,
        Found 0 new qualified records.

    So basically no access logs have been parsed/found. I then went on to check whether I could find the log myself. I looked in /var/www/vhosts/<hostname_masked>.com/statistics/logs, but all I find there is error_log. Does anybody know what is wrong here and perhaps how I could fix it? Note: in the <hostname_masked>.com/conf/ folder I keep a custom vhost.conf file, which contains only some rewrite conditions plus a directory statement with php_admin_flag and php_admin_value settings. None of them are related to logging, though.
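
    A hedged first check, since the AWStats run shows it reading log file "-" (i.e. nothing): confirm that Apache is actually being told to write an access log for this vhost, and where. Something like:

        grep -ri customlog /var/www/vhosts/<hostname_masked>.com/conf/ /opt/psa/etc/ 2>/dev/null

    If no CustomLog directive turns up for the domain (or it points somewhere unexpected), regenerating the domain's web server config from Plesk is the usual repair; if one does turn up, the target file's existence and permissions are the next things to look at.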

  • Cisco 3560+ipservices -- IGMP snooping issue with TTL=1

    - by Jander
    I've got a C3560 with the Enhanced (IPSERVICES) image, routing multicast between its VLANs with no external multicast router. It's serving a test environment where developers may generate multicast traffic on arbitrary addresses. Everything is working fine except when someone sends out multicast traffic with TTL=1, in which case the multicast packet suppression fails and the traffic is broadcast to all members of the VLAN. It looks to me like, because the TTL is 1, the multicast routing subsystem doesn't see the packets, so it doesn't create an mroute table entry. If I send out packets with TTL=2 briefly, then switch to TTL=1 packets, they are filtered correctly until the mroute entry expires. My question: is there some trick to getting the switch to filter the TTL=1 packets, or am I out of luck? Below are the relevant parts of the config, with a representative VLAN interface. I can provide more info as needed.

        # show run
        ...
        ip routing
        ip multicast-routing distributed
        no ip igmp snooping report-suppression
        !
        interface Vlan44
         ip address 172.23.44.1 255.255.255.0
         no ip proxy-arp
         ip pim passive
        ...

        # show ip igmp snooping vlan 44
        Global IGMP Snooping configuration:
        -------------------------------------------
        IGMP snooping              : Enabled
        IGMPv3 snooping (minimal)  : Enabled
        Report suppression         : Disabled
        TCN solicit query          : Disabled
        TCN flood query count      : 2
        Robustness variable        : 2
        Last member query count    : 2
        Last member query interval : 1000

        Vlan 44:
        --------
        IGMP snooping                  : Enabled
        IGMPv2 immediate leave         : Disabled
        Multicast router learning mode : pim-dvmrp
        CGMP interoperability mode     : IGMP_ONLY
        Robustness variable            : 2
        Last member query count        : 2
        Last member query interval     : 1000

  • Can a folder on a NAS be made available as a physical drive in VMWare?

    - by asbjornu
    We are currently in the process of moving from a single web server to two load-balanced web servers, and are facing some challenges we don't quite know how to fix. One of these is that the current single server hosts applications that write stuff to disk. The applications running on the server expect that when something is written to disk it will in fact exist later, so it's important that this premise is fulfilled with the dual-server architecture as well. The dual-server setup is a couple of VMware instances with Windows Server 2008 R2 as the guest operating system. Out of the box, these instances do not share any kind of file system, so just moving the applications over would break them, since one instance would write something to the file system that doesn't exist on the other. Thus we need to share a file system between the two virtual servers. Our host has proposed creating a network share on a SAN and mapping this share individually on each virtual machine. This doesn't work too well, due to NTFS permissions etc., because the share needs to be accessed by several independent web applications that won't even be in the same application pool. The only solution that kind of works is to hard-code an "identity" for each web application into its web.config file, but this means passwords in clear text, which doesn't sit well with me. Since the servers are virtual, I'm thinking: wouldn't it be possible to make a NAS area available as a physical disk in the guest operating system somehow? Since VMware has full control of the virtual hardware, you'd think it would be able to "fake" a local hard drive in the virtual machine that in reality is a folder on a NAS, but so far I haven't found anything that states how and whether this is possible. So I have to ask the wonderful Server Fault community: can a folder on a NAS be made available as a physical drive (typically D:) in both of the virtual machines?

  • Losing Windows Authentication intermittently

    - by Mark Robinson
    I'm running a website on IIS 6 / Server 2003 which uses Integrated Windows Authentication on a local intranet. I can browse to the site, but I get intermittent "object null" errors from the following C# code, which is called on every request:

        ....
        GetUserIdFromPrincipal(User)
        ....

        public static string GetUserIdFromPrincipal(IPrincipal principal)
        {
            // throws NullReferenceException when 'principal' arrives null
            return principal.Identity is WindowsIdentity
                ? (principal.Identity as WindowsIdentity).User.Value
                : principal.Identity.Name;
        }

    (Apologies for the code sample; I was torn between SF and SO, but I think this is more to do with server config.) Since the error is intermittent, Windows auth is clearly working on some level, but after navigating around the site for several clicks I get the null reference error, meaning the IPrincipal is null (I thought this should never be null in ASP.NET). The error only happens on this newly built VM; the code is fine on other machines. Does IIS request the Windows auth details on each request? What would cause such an intermittent problem? Any help or suggestions would be much appreciated.

  • nginx - Rewrite URL with Trailing Slash

    - by Bryan
    I have a specialized set of rewrite rules to accommodate a multi-site CMS setup. I am trying to have nginx force a trailing slash on the request URL: I would like it to redirect requests for domain.com/some-random-article to domain.com/some-random-article/ . I know there are semantic considerations with this, but I would like to do it for SEO purposes. Here is my current server config:

        server {
            listen 80;
            server_name domain.com mirror.domain.com;
            root /rails_apps/master/public;
            passenger_enabled on;

            # Redirect from www to non-www
            if ($host = 'domain.com' ) {
                rewrite ^/(.*)$ http://www.domain.com/$1 permanent;
            }

            location /assets/ {
                expires 1y;
                rewrite ^/assets/(.*)$ /assets/$http_host/$1 break;
            }

            # / -> index.html
            if (-f $document_root/cache/$host$uri/index.html) {
                rewrite (.*) /cache/$host$1/index.html break;
            }

            # /about -> /about.html
            if (-f $document_root/cache/$host$uri.html) {
                rewrite (.*) /cache/$host$1.html break;
            }

            # other files
            if (-f $document_root/cache/$host$uri) {
                rewrite (.*) /cache/$host$1 break;
            }
        }

    How would I modify this to add the trailing slash? I assume there has to be a check for an existing slash, so that you don't end up with domain.com/some-random-article//
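
    A sketch of the usual pattern, placed near the top of the server block before the cache rewrites: the regex only matches URIs that contain no dot (so asset files keep their extensions) and do not already end in a slash, which also rules out the double-slash case.

        # append a trailing slash to extensionless paths
        rewrite ^([^.]*[^/])$ $1/ permanent;

    One behavior worth testing: since this issues a 301, the cache -f checks run again on the slash-suffixed request, which is what the $uri/index.html branch expects anyway.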

  • Rails app returns HTTP 422 for new ServerAlias - Internet Explorer only

    - by Snips
    I have a long-standing Rails app running on Mac OS X (apache2). The setup uses Apache virtual hosts and Passenger, and the Rails app also uses HTTP basic authentication. I need to migrate the app from one URL domain to another, with a period of overlap during which both domain names are accessible simultaneously. To do this, I've added the new domain name as a ServerAlias of the existing domain name in the Passenger virtual host config. I can now browse the Rails app using both the legacy URL and the new URL from any of Safari, Chrome, Firefox, or Internet Explorer. I can also post updates to the Rails app using Safari, Chrome, or Firefox. All good. Except: attempts to post updates from Internet Explorer result in the Rails app rejecting the update. The Rails app log contains the message:

        ActionController::InvalidAuthenticityToken (ActionController::InvalidAuthenticityToken):

    I have other domains and aliases working just fine on this same machine. Any suggestions as to what is causing the Rails app to reject posts from IE would be appreciated.
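
    Two hedged guesses, both cookie-related, since InvalidAuthenticityToken usually means the session cookie (and with it the stored CSRF token) never came back with the POST. First, IE silently refuses cookies for hostnames containing an underscore - worth checking if the new domain has one. Second, if the session cookie is pinned to the old host, IE can be stricter about it than the other browsers; scoping the cookie for the overlap period may help:

        # config/initializers/session_store.rb (Rails 3 syntax, as an assumption)
        AppName::Application.config.session_store :cookie_store,
          :key => '_appname_session', :domain => :all

    :domain => :all makes Rails set the cookie for the request's top-level domain rather than one fixed host; drop it again once the migration is done.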

  • Squid reverse proxy array - siblings not communicating with each other

    - by V. Romanov
    I want to set up 2 Squid servers to act as a reverse proxy and cache for a web server on our intranet. The load balancing will be done with DNS round robin, or just different mappings for different clients. The thing is, I want both servers to try to contact each other to see if they have a required object in cache before contacting the web server for it (the network serving the web server is the bottleneck, and I'm trying to eliminate it). Both Squids are configured the same; here are the relevant config lines:

        acl dvr1_cache_it_best_tv_com dstdomain dvr1.cache.it.best-tv.com
        acl squid1_it_best_tv_com dstdomain squid1.it.best-tv.com
        acl squid2_it_best_tv_com dstdomain squid2.it.best-tv.com
        http_access allow dvr1_cache_it_best_tv_com
        http_access allow squid1_it_best_tv_com
        http_access allow squid2_it_best_tv_com
        http_access allow all
        http_port 8081 accel defaultsite=dvr1.cache.it.best-tv.com
        cache_peer dvr1.origin.it.best-tv.com parent 80 0 no-query originserver name=Proxy_dvr1_origin_it_best_tv_com
        cache_peer squid1.it.best-tv.com sibling 8081 3130 weight=10 name=Proxy_Squid1_it_best_tv_com
        cache_peer squid2.it.best-tv.com sibling 8081 3130 weight=10 name=Proxy_Squid2_it_best_tv_com
        cache_peer_access Proxy_dvr1_origin_it_best_tv_com allow dvr1_cache_it_best_tv_com
        cache_peer_access Proxy_squid1_it_best_tv_com allow squid1_it_best_tv_com
        cache_peer_access Proxy_squid1_it_best_tv_com allow squid2_it_best_tv_com
        cache_peer_access Proxy_squid1_it_best_tv_com allow dvr1_cache_it_best_tv_com
        cache_peer_access Proxy_squid2_it_best_tv_com allow squid1_it_best_tv_com
        cache_peer_access Proxy_squid2_it_best_tv_com allow squid2_it_best_tv_com
        cache_peer_access Proxy_squid2_it_best_tv_com allow dvr1_cache_it_best_tv_com

    Just to make it clear: dvr1.cache is the alias for the proxy servers, and dvr1.origin is the web server. Both servers work, both serve and cache content, and all seems fine. However, when I clear the cache on one server and then access it, it gets the content from the parent (dvr1.origin) instead of going to the sibling Squid. What did I configure wrong? Or perhaps I don't understand the architecture correctly? I read the Squid manuals, and as far as I can see I did it all by the book, yet it doesn't work right. Any help will be appreciated!
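
    One hedged thing to verify before digging deeper: sibling lookups ride on ICP, and declaring port 3130 in cache_peer only tells each Squid where to send queries - the receiving side must also have ICP enabled and permitted, or every sibling query quietly fails and the request falls through to the parent. As a sketch:

        # on each squid, alongside the existing config
        icp_port 3130
        acl other_squid src <the other proxy's IP>    # hypothetical ACL
        icp_access allow other_squid
        miss_access deny other_squid    # optional: siblings may fetch hits only

    A case-sensitivity detail is also worth a look: the peers are defined as name=Proxy_Squid1_... / Proxy_Squid2_..., while the cache_peer_access lines reference Proxy_squid1_... / Proxy_squid2_...; depending on how strictly Squid matches peer names, those access rules may never attach to the sibling peers at all.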
