Search Results

Search found 8343 results on 334 pages for 'split dns'.


  • Can enabling a RAID controller's writeback cache harm overall performance?

    - by Nathan O'Sullivan
    I have an 8-drive RAID 10 setup connected to an Adaptec 5805Z, running CentOS 5.5 and the deadline scheduler. A basic dd read test shows 400 MB/sec, and a basic dd write test shows about the same. When I run the two simultaneously, I see the read speed drop to ~5 MB/sec while the write speed stays at more or less the same 400 MB/sec. The output of iostat -x, as you would expect, shows that very few read transactions are being executed while the disk is bombarded with writes.

    If I turn the controller's writeback cache off, I don't see a 50:50 split, but I do see a marked improvement: somewhere around 100 MB/sec reads and 300 MB/sec writes. I've also found that if I lower the nr_requests setting on the drive's queue (somewhere around 8 seems optimal), I can end up with 150 MB/sec reads and 150 MB/sec writes; i.e. a reduction in total throughput, but certainly more suitable for my workload.

    Is this a real phenomenon? Or is my synthetic test too simplistic? The reason this could happen seems clear enough: when the scheduler switches from reads to writes, it can run heaps of write requests, because they all just land in the controller's cache, but they must be carried out at some point. I would guess the actual disk writes are occurring when the scheduler starts trying to perform reads again, resulting in very few read requests being executed. This seems a reasonable explanation, but it also seems like a massive drawback to using writeback cache on a system with non-trivial write loads. I've been searching for discussions around this all afternoon and found nothing. What am I missing?
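
    For reference, a mixed read/write test along these lines can be reproduced with something like the sketch below; the device name, mount point and sizes are assumptions, not taken from the question:

        # sketch only -- /dev/sda and /mnt/array are assumed names
        dd if=/dev/sda of=/dev/null bs=1M count=4096 &             # sequential read
        dd if=/dev/zero of=/mnt/array/ddtest bs=1M count=4096 \
           oflag=direct                                            # sequential write
        iostat -x 5                                                # watch r/s collapse under write load

        # shrink the scheduler's queue depth, as described above
        echo 8 > /sys/block/sda/queue/nr_requests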


  • Sonicwall NSA 240, Configured for LAN and DMZ, X0 and X2 on same switch - ping issues

    - by Klaptrap
    Our SonicWALL vendor supplied and networked the NSA 240 when we required a DMZ in our infrastructure. This was configured and appeared correct, although VPN users periodically dropped DNS and Terminal Services. The vendor could not resolve this, so the call was escalated to SonicWALL.

    The SonicWALL support engineer took a look and concluded that the X0 (LAN) and X2 (DMZ) interfaces were cabled to the same switch, and that this is the issue. What he observed is that a ping request to the LAN domain controller, from a connected VPN user, is forwarded (X0) from the VPN client IP to the DC IP, but the ping response from the DC IP to the VPN client IP is on X2. A copy of the log is detailed below:

        02/02/2011 10:47:49.272  X1*(hc)  X0   192.168.1.245  192.168.1.8    IP  ICMP  --  FORWARDED
        02/02/2011 10:47:49.272  --       X0*  192.168.1.245  192.168.1.8    IP  ICMP  --  FORWARDED
        02/02/2011 10:47:49.272  X2*(i)   --   192.168.1.8    192.168.1.245  IP  ICMP  --  Received

        X0 - LAN
        X1 - WAN
        X2 - DMZ

    The SonicWALL engineer concluded that we either need a separate switch for X2, or we use a VLAN switch for both. I am the company's software engineer, and we have yet to hear back from the vendor, so I am lost at sea at the moment. Do we need to buy this additional equipment, or is there another configuration on the NSA 240 we can use?


  • All virtualhosts serving Apache default files

    - by tj111
    I'm trying to configure Apache as an in-network webserver, and am using the sites-available/sites-enabled feature as opposed to just static vhost files. I set up a couple of VirtualHosts, all with a unique DocumentRoot, however requests to all the VirtualHosts just serve up the "It's Working!" default file. I can't for the life of me figure out why it won't serve the content out of the correct directory. Here are the contents of the virtualhost directive files; let me know if I need to post more.

    default (note that Apache renames this to 000-default in sites-enabled, so it's not an ordering issue):

        NameVirtualHost *:80
        ServerName emp

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName emp
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    billmed:

        <VirtualHost *:80>
            ServerName billmed.emp
            ServerRoot /home/empression/Projects/billmed/web/httpdocs
            <Directory "/home/empression/Projects/billmed/web/httpdocs">
                Order Allow,Deny
                Allow from All
            </Directory>
        </VirtualHost>

    Note that I have DNS zones for both emp and billmed.emp, as well as entries in /etc/hosts. My ultimate goal is to set up this machine as an in-house webserver with a custom TLD (emp), but progress has been pretty slow.
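
    For comparison, a minimal per-site file of this shape would normally use DocumentRoot rather than ServerRoot (ServerRoot is a server-wide directive pointing at the Apache installation root, not at a site's content); a sketch:

        <VirtualHost *:80>
            ServerName billmed.emp
            DocumentRoot /home/empression/Projects/billmed/web/httpdocs
            <Directory "/home/empression/Projects/billmed/web/httpdocs">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>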


  • How to deploy website in IIS with a host name?

    - by Jayakumar
    I'm trying to host my application in IIS. Below are the steps I follow:

    1. Publish the code and place it in a path.
    2. Open IIS, right-click on "Sites" and select "Add Website".
    3. In that dialog, give the site name and select the app pool created for the application.
    4. Select the physical path of the published code.
    5. Leave the IP and port in the binding section without changes.
    6. Finally, give the host name as fus.km.com.

    When I try to browse the application, the page does not load: "Internet Explorer cannot display the page". The machine domain is km.com.

    UPDATE: I tried adding the host name to the hosts file and flushed the DNS. The application then asked for user credentials (I use Windows authentication in the application), but it did not log in. On repeated tries it throws the error:

        HTTP Error 401.1 - Unauthorized
        You do not have permission to view this directory or page using the credentials that you supplied.

    I tried logging in with a different user, but I get the same result.
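
    If the 401.1 appears only when browsing the site from the server itself by its custom host name, the NTLM loopback check is a common cause. A sketch of the usual workaround (the registry value is the one documented in Microsoft KB896861; the host name is taken from the question):

        :: run in an elevated command prompt
        echo 127.0.0.1 fus.km.com >> %SystemRoot%\System32\drivers\etc\hosts
        ipconfig /flushdns

        :: register the custom host name so local NTLM logons are not rejected
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" ^
            /v BackConnectionHostNames /t REG_MULTI_SZ /d fus.km.com /f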


  • Can't connect remotely to Windows Server 2008 R2

    - by JohnyD
    I have a new Dell R710 server running Windows Server 2008 R2. I have one of its four NICs set up and the rest are not being used. I have successfully given it an IP address, network mask, and DNS servers. I can ping and resolve this machine from anywhere else in the network. However, when I try to connect to it via RDP, it does several things:

    1. It might just outright refuse me with the message, "This computer can't connect to the remote computer. Try connecting again."
    2. It might connect me and let me choose the account I would like to log on as... but when you select an account you receive the same message as in #1.
    3. It might actually allow you to connect, but only for about one minute, and then you receive the same message and it closes your session.

    I have configured the firewall service to allow RDP over the domain network connection. This didn't have any noticeable effect. I have now disabled the firewall for all three networks and have even stopped the Windows Firewall service. I am still having the same issue. I am new to Server 2008 R2 and things are very different. Please give me any advice you can on how to resolve this issue and/or any other gotchas that are sure to come my way. The 2003 to 2008 learning curve seems steep. Thanks.
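
    Two quick checks that are often useful here; these are standard built-in commands, shown as a sketch rather than a known fix for this particular box:

        :: confirm Remote Desktop is enabled at the OS level (0 = connections allowed)
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections

        :: re-enable the built-in RDP rule group rather than disabling the firewall entirely
        netsh advfirewall firewall set rule group="remote desktop" new enable=Yes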


  • RAM question in VMware Server 2

    - by ToreTrygg
    Hi, I understand from the VMware Server 2 documentation that VMware Server 2 is capable of running a 64-bit guest OS underneath a 32-bit host OS, as long as the hardware in the box is 64-bit capable.

    Here's my situation. We currently have an underutilized Xeon X3220 quad-core 64-bit server, running Server 2003 32-bit with 2 GB of RAM (the motherboard is capable of 8 GB). The server is currently used mainly for file and print services. It is also running Active Directory, Novell eDirectory and GroupWise 6.5. We are planning a migration to Microsoft Exchange, so the Novell eDirectory and GroupWise services will eventually be purged from this box, leaving only Active Directory and file and print services. Since this server is underutilized, we are hoping to save on hardware costs and virtualize our new Exchange investment.

    My question is this: will VMware allow access to the "invisible" extra memory that 32-bit Windows won't see? Meaning, if we increase the total system RAM to 8 GB (yes, I know the 32-bit host OS will only see a maximum of 4 GB), will I be able to assign maybe 5 GB to the new Server 2008 64-bit OS running Exchange and leave 3 GB for the host OS (or maybe even a 6/2 split)?

    The second part would be: would it be better to just convert the main OS currently running to an image, convert the machine itself to ESXi, and run both OSes as images under ESXi? Downtime for this box is critical, so my preference is most definitely the first option, because it presents very minimal downtime. The second would mean quite a few hours of downtime to image the machine and then convert the image to a VMware image.


  • Getting Dell E6320 with I7 to work with 3 monitors at 1920x1080p x 3

    - by MadBoy
    I want to buy a Dell E6320, which comes with an Intel Core i7-2620M (2.70 GHz, 4 MB cache, dual core) and Intel HD Graphics 3000. The laptop will come with a docking station. I want to connect 3 monitors to that docking station so that working at home would give me an additional boost. The docking station will only allow me to connect 2 monitors, so I'm looking at the following other options:

    1. A Matrox TRIPLEHEAD2GO Digital Edition or TRIPLEHEAD2GO DP Edition. But reading the Matrox support page, the Intel GPU can't run the highest resolution with 3 monitors connected; it even gets worse, since it seems the monitors would have to be able to work at 50 Hz. Also, I'm not sure, but it seems that Matrox doesn't present the monitors as 3 separate monitors but simply as one big space (which is a bit the opposite of what I need).

    2. Buy 2, or maybe just 1, USB-based monitor. But that would also mean having 1 or 2 monitors different from the main one, unless I buy 3 USB-based monitors, which would mean more money to spend. Also, I found only a couple of models, and most of them require USB 3.0 and no other cables to plug in (nice but costly - I couldn't find a decent monitor with only USB for the signal and power connected normally). But the docking station has only one USB 3.0 port. Can I use a hub and still get it to work?

    3. Find some converters from digital to USB (I think DisplayLink makes some?).

    4. Buy a different laptop, but what kind? I need it to be an i7, small (13"), fast and lightweight. At the same time it needs a docking station that I can use at home to connect 3 external monitors.

    5. Some other suggested solution...

    Edit: I need 3 monitors for work, in terms of coding in Visual Studio or having Word/Excel/Outlook open. Nothing fancy. Maybe some movie once in a while.


  • SBD killing both cluster nodes when there are even small SAN network problems

    - by Wieslaw Herr
    I am having problems with the stonith SBD in an openais-based cluster.

    Some background: the active/passive cluster has two nodes, node1 and node2. They are configured to provide an NFS service to users. To avoid problems with split-brain, they are both configured to use SBD. SBD is using two 1 MB disks available to the hosts via a multipath fibre-channel network.

    The problems start if something happens to the SAN network. For example, today one of the Brocade switches got rebooted and both nodes lost 2 out of 4 paths to each disk, which resulted in both nodes committing suicide and rebooting. This, of course, was highly undesirable, because a) there were paths left, and b) even if the switch were out for 10-20 seconds, a reboot cycle of both nodes would take 5-10 minutes and all NFS locks would be lost.

    I tried increasing the SBD timeout values (to 10 sec+ values; dump attached at the end), however a "WARN: Latency: No liveness for 4 s exceeds threshold of 3 s" hints that something isn't working as I would expect it to.

    Here is what I would like to know:

    a) Is SBD working as it should, killing nodes when 2 paths are still available?
    b) If not, is the attached multipath.conf file correct? The storage controller we use is an IBM SVC (IBM 2145); should there be any specific configuration for it (as in multipath.conf.defaults)?
    c) How should I go about increasing the timeouts in SBD?

    Attachments: multipath.conf and the sbd dump (http://hpaste.org/69537)
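
    For reference, SBD timeouts are written into the device header at creation time, so raising them means re-initializing the device; a sketch, with the device path and the exact values as assumptions:

        # -1 = watchdog timeout, -4 = msgwait timeout (msgwait is usually >= 2x watchdog)
        sbd -d /dev/mapper/sbd_disk1 -1 60 -4 120 create
        sbd -d /dev/mapper/sbd_disk1 dump    # verify what is actually on the header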


  • PEAP validating a secondary domain suffix

    - by sam
    The title is probably a little bit confusing; let me explain the situation. Our company wants to implement a corporate wireless LAN with PEAP authentication. Unfortunately, someone made a big mistake in our AD design 10 years ago: the domain name we are using, "company.ch", is not owned by the company but by someone else, so it is not possible to issue a public SSL certificate for the RADIUS server. Our AD is too big to rename.

    We already thought about using our private PKI and rolling out the CA certificate via GPO, but that would only cover our corporately managed clients, not BYOD (smartphones, tablets, laptops...).

    Is there a way to add a secondary domain name like "company2.ch", issue a public certificate, join that RADIUS server to the secondary domain as well, and configure that secondary DNS suffix via DHCP for all the client pools? Or is there another way, for example a new RADIUS server with its own domain, company2.ch, connected with some kind of trust to the company.ch domain? Sorry, I'm not a client/server guy... hopefully you get my drift!


  • 553-Message filtered - HELO Name issue?

    - by g18c
    I am having major issues sending from my SBS 2011 machine to Message Labs (server-13.tower-134.messagelabs.com):

        553-Message filtered. Refer to the Troubleshooting page at
        553-http://www.symanteccloud.com/troubleshooting for more
        553 information. (#5.7.1)

    I have changed the IP and hostnames in the below. I am not on any IP or domain blacklists. I have set up SPF (which includes the Mailchimp servers):

        v=spf1 mx a ip4:95.74.157.22/32 a:remote.mydomain.com include:servers.mcsv.net ~all

    I am sure I have set up my HELO names correctly under the Exchange Management Console. Sending a test email from the SBS server and looking at the header shows the following:

        X-Orig-To: [email protected]
        X-Originating-Ip: [95.74.157.22]
        Received: from [95.74.157.22] ([95.74.157.22:52194] helo=remote.mydomain.com)
            by smtp50.gate.ord1a.rsapps.net (envelope-from <[email protected]>)
            (ecelerity 2.2.3.49 r(42060/42061)) with ESMTP
            id 11/90-10010-E529C835; Mon, 02 Jun 2014 11:04:09 -0400
        Received: from MYSBSSVR.mydomain.local ([fe80::3159:95a6:23f:1bef]) by
            MYSBSSVR.mydomain.local ([fe80::3159:95a6:23f:1bef%10]) with mapi id
            14.01.0438.000; Mon, 2 Jun 2014 19:03:56 +0400

    Is the main HELO name there OK, and do I need to worry about the second Received block where MYSBSSVR.mydomain.local is mentioned? I have asked the ISP to set the reverse DNS for my IP to remote.mydomain.com, but they have instead put remote.MYDOMAIN.com - would this case difference cause HELO lookups to classify this as not matching? Anything else I can do to find out why I am being filtered?
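
    DNS names are compared case-insensitively, so the capitalization alone should not cause a mismatch; whether the forward and reverse records actually agree can be checked with standard lookups (the IP and name below are the placeholders from the question):

        dig +short -x 95.74.157.22          # PTR: should return remote.mydomain.com
        dig +short remote.mydomain.com A    # A: should return 95.74.157.22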


  • RRAS Problem routing to central site from RRAS server only?

    - by TomTom
    Given is an office connected to headquarters using an RRAS bridge (two virtual machines using RRAS to route between the two networks). Naming: the office is A, and the RRAS machine on A is a-lnk. The headquarters is B, and b-lnk is the RRAS machine there.

    The VPN works perfectly - machines can ping and work between the sites. Domain controllers on both ends are replicating, DFS is working, remote desktop is working. All in all... everything is fine. EXCEPT: a-lnk itself can not reach any machine in B. This would normally not be troublesome (no one ever does anything on a-lnk), but there are two exceptions:

    * a-lnk is supposed to get its license from a KMS in B, so not being able to reach B means it is not renewing its activation.
    * a-lnk is supposed to pull updates from a WSUS in B - and not being able to reach B means no updates.

    Given that things work (and security is a minor issue - a-lnk is not reachable from the internet as it is behind NAT hardware anyway), this went unhandled for months. I just want to get this item ticked off now. Anyone have an idea what this is? It definitely is not a "DNS does not work" or "routing in general is bad" item, as any computer in A can connect to any computer in B, and the other way around - only the RRAS computer itself seems to do something really awkward. Platform for both: 2008 R2 Standard.


  • Website Use Monitoring for 3 People

    - by linkedlinked
    I work at an IT startup with two partners, and I'm the programmer/IT guy - in other words, the work horse. To make a long story short, I'm doing most of the work right now while they spend all day on Facebook. That's OK, because they're paying my salary, but if the project fails, I'm sure they'll blame me for it (I'm doing my best to make sure that doesn't happen!), and I want some sort of recourse. I already have an app that blocks time-wasters on my local PC and keeps logs of when the app is enabled (so I can say "I had Facebook blocked from 9am-5pm today").

    Is there any way I can get a brief summary of the most heavily visited sites, split up by client PC? At the end of the month, I want to be able to say "You both load Facebook, on average, every 10 minutes. You spend hours a day on YouTube, and haven't opened our bug tracker in weeks," and maybe have a nifty chart or graph to match it.

    We have a crappy D-Link router and no IT budget. They are both on Windows Vista; I run Ubuntu Linux. I don't want to install any monitoring software on their PCs, but I'm totally fine with, say, routing all the network traffic through my machine. I guess I can think of lots of ways to accomplish this (telnet into JSSH and list open tabs? log all the DNS requests, per domain? even thinking of setting up a webcam on my desk and just keeping 5-minute snapshots...), I just don't really know where to start. Any advice is appreciated, thanks!
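
    The per-domain DNS logging idea maps fairly directly onto dnsmasq on the Ubuntu box (have the router hand that box out as the DNS server); a minimal sketch, with the log path and awk field positions as assumptions:

        # /etc/dnsmasq.conf
        log-queries
        log-facility=/var/log/dnsmasq.log

        # rough per-client, per-domain tally from the resulting log
        awk '/query\[A\]/ {print $8, $6}' /var/log/dnsmasq.log | sort | uniq -c | sort -rn | head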


  • Require and Includes not Functioning Nginx Fpm/FastCGI

    - by Vince Kronlein
    I've split up my FPM pools so that PHP runs under each individual user, and I've set the routing correctly in my vhost.conf files to pass the proper port number. But I must have something incorrect in my environment, because on this new domain I set up, require, require_once, include and include_once do not function - or rather, they may not be getting passed up to the interpreter to be rendered as PHP. Since I already have a WordPress install on this server that runs perfectly, I'm pretty sure the error is in my server block for nginx:

        server {
            server_name www.domain.com;
            rewrite ^(.*) http://domain.com$1 permanent;
        }

        server {
            listen 80;
            server_name domain.com;
            client_max_body_size 500M;
            index index.php index.html index.htm;
            root /home/username/public_html;

            location / {
                try_files $uri $uri/ index.php;
            }

            location ~ \.php$ {
                if (!-e $request_filename) {
                    rewrite ^(.*)$ /index.php?name=$1 break;
                }
                fastcgi_pass 127.0.0.1:9002;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }

            location ~ /\.ht {
                deny all;
            }
        }

    The problem, I think, is that there are dynamic calls to the doc root index file, while all calls to anything within a sub-folder should be routed as normal, i.e. NOT passed to index.php. I can't seem to find the right mix here. It should run like so:

        domain.com/cindy (file doesn't exist)         --> index.php?name=$1
        domain.com/admin/anyfile.php (files DO exist) --> admin/anyfile.php?$args
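
    One common shape for this split is to put the fallback on the front controller itself (note the usual form is /index.php with a leading slash) and leave real files alone; a sketch only, not a confirmed drop-in fix, with the query-string mapping assumed from the example above:

        location / {
            # real files and directories are served as-is;
            # everything else falls through to the front controller
            try_files $uri $uri/ /index.php?name=$uri&$args;
        }

        location ~ \.php$ {
            try_files $uri =404;          # only pass scripts that actually exist
            fastcgi_pass 127.0.0.1:9002;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }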


  • White Screen, No Errors.

    - by GruffTech
    So... an interesting problem for you guys, as I'm completely lost as to what to do or where to take the next step.

    Server & application environment:

        CentOS release 5.3 (Final)
        Apache 2.2.3-22
            EnableSendfile off
            EnableMMAP off
            ErrorLog logs/error_log
            LogLevel debug
        PHP 5.2.6-2
            error_reporting = E_ALL
            display_errors = on
            log_errors = on
            max_execution_time = 300
            max_input_time = 60
            memory_limit = 512mb
        Kohana 2.3 (PHP environment)
        HAProxy 1.3.15.6-2
        MemCacheD 1.2.6-1

    Our application is split between 3 web servers mounting an NFS storage server, with sticky load balancing between the 3 web servers. The application seemingly runs great, but every so often, instead of loading, it just shows a pure white page. Not a 404 error or a 500 server error: a clean white page. And it returns instantly, so it's not an execution-time error. Nothing in the error log or server error log; the proxy log shows a standard proxied connection, and the access log shows just a standard 200 status with 256 bytes transferred.

    To me, this says the application itself is having a problem: a rare, unexplainable, seemingly random problem that causes what we've now called the "White Screen of Death." Our developers all say that since nothing is going to our error logs, it must be a server problem. But I say the same thing: there's nothing going to ANY of our logs (relevant to this, anyway), and we're not having httpd children crash from what I can tell.

    Any ideas on how I can increase my logging, or somehow prove that it's not a bug in PHP, Apache, CentOS, etc.? Or, if it somehow is a bug, identify it?
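
    One way to increase logging from inside PHP is a shutdown handler that records the last error on every request; a sketch (the file name and the auto_prepend_file wiring are assumptions; error_get_last() exists as of PHP 5.2, and a named function is used since 5.2 has no closures):

        <?php
        // wsod_catch.php -- load on every request via auto_prepend_file in php.ini
        function wsod_log_last_error() {
            $e = error_get_last();          // last error, if any (PHP >= 5.2)
            if ($e !== null) {
                error_log(sprintf('WSOD candidate: %s in %s:%d',
                                  $e['message'], $e['file'], $e['line']));
            }
        }
        register_shutdown_function('wsod_log_last_error');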


  • System Slow After Upgrading Ubuntu

    - by Aragon N
    I have an Ubuntu network machine running release 10.04.1 LTS (Lucid). On this system I have Apache, PostgreSQL and Django. For some app development I had to install PGP and php-curl. Because the machine is on the network, I exported a VMware copy of it, upgraded that system first, and then installed the php5 packages on it.

    I don't know whether this is about the Django or the Apache configuration. Maybe some Apache settings were changed - in that case, where in Apache do I have to look? After putting the machine back in its old place, I see that queries on the new system are slow compared to the old one:

        Old system query time: 140 ms
        New system query time: 9.11 s

    I have checked /etc/network/interfaces and there seems to be no problem. I have checked /etc/resolv.conf and it seems OK. I have checked /etc/nsswitch.conf, and only the hosts section is different from the old one, which had:

        hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4

    Then I checked name resolution timing with

        time host -t A services.myapp.com

    and got:

        real 0m0.355s
        user 0m0.010s
        sys  0m0.020s

    I also checked the apache2 HostnameLookups setting:

        find /etc/apache2/ -type f | xargs grep -i HostnameLookups

    which returned:

        /etc/apache2/apache2.conf:# HostnameLookups: Log the names of clients or just their IP addresses
        /etc/apache2/apache2.conf:HostnameLookups Off

    Now what else can I check to get my system as fast as before?
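
    Apache is not the only component that can stall on reverse DNS: PostgreSQL does a per-connection lookup when log_hostname is on, and connection setup cost can be timed directly. A sketch only - the database name and user are assumptions:

        grep -nE 'log_hostname|listen_addresses' /etc/postgresql/*/main/postgresql.conf
        time psql -h 127.0.0.1 -U myuser -d mydb -c 'SELECT 1;'   # isolate connection setup cost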


  • XenServer VMs can't reach network

    - by toto
    I'm currently trying to set up a small cloud architecture. I'm using CloudStack 2.2.14, which needs two nodes: a management server (node1) to provision the cloud, and a XenServer 5.6 SP2 hypervisor (node2) to host the VMs. I succeeded in creating both node1 and node2 as VMs inside a VMware ESXi 5 host. So ESXi 5 is hosting the two VMs node1 + node2, and node2 (the XenServer) will in turn host VMs (such as Ubuntu or CentOS).

    Both node1 and node2 can ping each other and can get an internet connection through ESXi 5, but my problem is that the VMs inside node2 (XenServer) can't reach the network: they can't ping node1 or ESXi or get an internet connection, but they can ping other VMs in node2 (XenServer). So I tried to:

    1. Set up a DHCP server as node3 in ESXi 5 and connect node2 (XenServer) to it, but the VMs in node2 still can't reach the outer network.
    2. Set up a DHCP server inside node2, with the same result.

    So:

    1. Is there any other configuration I'm missing in node2 (considering that I'm sure about the DNS, gateway and netmask configuration)?
    2. Is the problem that I'm creating VMs inside node2 (XenServer), which is itself a VM inside ESXi 5?
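
    Nested setups like this commonly fail because the outer vSwitch drops frames whose source MAC belongs to the inner hypervisor's guests; ESXi's standard vSwitch security policy can be relaxed to allow that. A sketch - the vSwitch name is an assumption, and whether this is the cause here is not confirmed:

        # on the ESXi 5 host shell
        esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 \
            --allow-promiscuous=true --allow-forged-transmits=true --allow-mac-change=true
        esxcli network vswitch standard policy security get --vswitch-name=vSwitch0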


  • NTLM, Kerberos and F5 switch issues

    - by G33kKahuna
    I'm supporting an IIS-based application that is scaled out into web and application servers; both tiers run behind IIS. The application is NTLM-capable when IIS is configured to authenticate via Kerberos. It's been working so far without a glitch.

    Now I'm trying to bring in two F5 switches: one in front of the web servers and another in front of the application servers. The two F5 instances (say IPs 185 and 186) are sitting on a Linux host. F5-to-F5 traffic goes to a NAT IP (say IPs 194, 195 and 196). I created DNS entries for all IPs, including the NAT ones, and ran a SETSPN command to register the IIS service account to be trusted at the HTTP, HOST and domain level.

    With the web F5 turned on, and with each web server connecting to a fixed app server, when a user connects to the web F5 domain name, trust works and the user authenticates without a problem. However, when the app load balancer is turned on and the web servers are pointed at the new F5 app domain name, the user gets a 401. The IIS log shows no authenticated username and a 401 status. Wireshark does show a negotiate ticket header being passed in.

    Any ideas or suggestions are much appreciated. Please advise.
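
    For Kerberos to survive the extra hop, the SPN has to cover the name the web tier now uses to reach the app tier (the F5 VIP name); a sketch with hypothetical names:

        :: register the load-balanced name against the IIS service account (-S checks for duplicates)
        setspn -S HTTP/appvip.domain.tld DOMAIN\svc_iis

        :: list what is currently registered
        setspn -L DOMAIN\svc_iis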


  • Bouncing between a 502 and 503 error

    - by Dave
    This has become an increasingly frustrating ordeal. I'm mostly a web developer, so forgive me if I am using improper terminology here.

    I have a client who purchased a domain at JustHost. We built him a website and have it on our own server space. Now, I'm mostly used to dealing with GoDaddy, where it is simple enough to manage DNS records and point the A record to our server IP, where Apache on our end deals with the domains via name-based virtual hosts. But for some reason, in setting this up with JustHost, when attempting to go to the domain name, I either get a 502 or 503 error or "webpage does not exist". Now, I know that the basic functionality of the webpage must be working, because I can access the index etc. straight through my server's www data (i.e. [server-ip]/website_folder).

    I was on the phone with JustHost technical support for over three hours yesterday, and the best I could get was "That's really weird..." I've checked my logs and there doesn't seem to be anything coming through to my end. Does anybody have an idea of what's going on here? I would love for it to be a problem on my end, because JustHost doesn't seem capable of helping further. Any help is greatly appreciated, thanks. I forgot to mention that we have several other sites up and running and completely accessible.
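
    Since nothing reaches the logs, the first thing to confirm is where the record actually resolves; standard lookups (the domain is a placeholder):

        dig +short clientdomain.tld A     # should print your server's IP, not a JustHost IP
        dig +short clientdomain.tld NS    # shows which nameservers are authoritative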


  • How do I prevent or override a group policy on Windows 7?

    - by Kevin
    A few months ago my company was purchased by a large corporation. We recently switched our network over to the large corporate network, which has more restrictions and requirements. One of these is the requirement to use a proxy server for Internet traffic. However, some of our internal servers are not recognized by the corporate DNS, so we need to provide the fully qualified domain name. For Windows 7, we make changes to the Internet Properties for IE8 and Chrome to include our domain name as an exception to the proxy server (e.g., *.foobar.com).

    The problem is that a group policy that does not include our domain name is continually pushed out to my systems throughout the day. This requires me to make the appropriate changes to the Internet Properties several times a day in order to access our internal servers. Is there a way that I can prevent the group policy from being pushed to my systems, or detect when the group policy is pushed and override it? I am an administrator on all of my systems. I do have Firefox installed, which is not subject to the same group policy push, but I need to have IE8 and Chrome working.
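
    The proxy exception list that IE and Chrome read lives in a single registry value, so re-applying it can at least be scripted; a sketch (the exception string is the one from the question, and the GPO will still overwrite it on the next refresh):

        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" ^
            /v ProxyOverride /t REG_SZ /d "*.foobar.com;<local>" /f

        :: see which GPOs are being applied and from where
        gpresult /r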


  • Apache2 VirtualHosts 403 Oddity

    - by Carson C.
    I'm sure this is something I should already understand, but I'm finding myself confused. The configs in play add up to this:

        NameVirtualHost *:80
        Listen 80

        <VirtualHost *:80>
            <Directory />
                Options FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
            </Directory>
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName domain.tld
            ServerAlias *.domain.tld
            DocumentRoot /var/www/domain.tld
            <Directory /var/www/domain.tld>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    DNS is working correctly. The issue is that every variant of http://*.domain.tld/ (including http://domain.tld/) works correctly, except http://www.domain.tld/, which throws a 403. The logs state:

        client denied by server configuration: /etc/apache2/htdocs

    If I remove the first VirtualHost block from play, everything works as expected, including http://www.domain.tld. This leads me to believe that for some reason Apache is not considering www.domain.tld a match for the second VirtualHost block, and is thereby falling back to deny-all. This seems wrong. Shouldn't the second block match www.domain.tld?

    I've been able to resolve this, but I still don't understand why. In my original configs I was using the real IP address of the server instead of *. Switching all instances to * as shown above made everything work as expected. Does this have something to do with the way browsers request resources?
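
    Apache can print exactly which vhost each name will be matched to, which takes the guesswork out of this kind of puzzle; on a Debian-style layout such as this one:

        apache2ctl -S    # dumps the parsed NameVirtualHost table and the default vhost per address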


  • BackupExec 2012 File System Archiving - Access is denied to Remote Agent

    - by AllisZero
    Gentlemen, I've been struggling with a trial version of Symantec Backup Exec 2012 for about a week now. It was installed as an upgrade to our 12.5 license, and the setup completed with no issues. The reason I upgraded is solely the File System Archiving option, as I'm working to reduce the amount of live data on my servers.

    Backups work A-OK, and I have followed the instructions in the admin manual to make sure I had met all requirements. The account BE runs under is a member of the local Administrators group, as required, and has been added to the test share that I'm using to evaluate the archiving function. Testing the credentials in the job setup window always works fine, and I am able to add both regular and Admin$ shares to my archive selection. However, every time I run the archive job, I get the following message: https://dl.dropbox.com/u/59540229/BEXec.png

    I've already tried to troubleshoot DNS resolution issues as suggested in the Symantec KB, to no avail. The only thing I can think of at this point is that a trial license doesn't allow me to use the archiving function, although that would seem silly on their part. I'd appreciate any assistance or information. Thanks.


  • Upstart multiple instances of service not working

    - by Dax
    I started playing with MongoDB on Lucid. Now I would like to run a DB and a config server on the same box. They both use the same binary to launch, but with different config files and running on different ports. All directories for log and lib are split, so one goes to mongodb and the other to mongoconf. Each process can be started without any problems on its own:

        start mongodb
        stop mongodb
        start mongoconf
        stop mongoconf

    But if I try to start both, the second one just starts and exits. Using 'initctl log-priority debug' I got the following in the logs:

        Jan 6 12:44:12 mongo4 init: event_finished: Finished started event
        Jan 6 12:44:12 mongo4 init: job_process_handler: Ignored event 1 (1) for process 5690
        Jan 6 12:44:12 mongo4 init: mongoconf (mongoconf) main process (5690) terminated with status 1
        Jan 6 12:44:12 mongo4 init: mongoconf (mongoconf) goal changed from start to stop
        Jan 6 12:44:12 mongo4 init: mongoconf (mongoconf) state changed from running to stopping

    man 5 init shows that you can use instance names to differentiate the two. I tried using 'instance mongoconf' in the one upstart script and 'instance mongodb' in the other one, and it still fails. I can manually start the other process, so there are definitely no conflicts on port numbers or directories. Any ideas on what to try, or how to get output on why it 'terminated with status 1'? Thanks
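
    Since upstart itself only reports the exit status, the quickest way to see the real error is to capture the daemon's own output; a minimal sketch of a second job file, with all paths and flags as assumptions:

        # /etc/init/mongoconf.conf -- sketch only
        description "mongod config server"
        start on runlevel [2345]
        stop on runlevel [!2345]
        respawn
        # redirect output so the reason for "terminated with status 1" lands in a log
        exec /usr/bin/mongod --config /etc/mongodb-conf.conf >> /var/log/mongoconf-start.log 2>&1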


  • How to test a HTTPS URL with a given IP address

    - by GreatFire
    Let's say a website is load-balanced between several servers. I want to run a command to test whether it's working, such as curl DOMAIN.TLD. So, to isolate each IP address, I specify the IP manually. But many websites may be hosted on the server, so I still provide a host header, like this: curl IP_ADDRESS -H 'Host: DOMAIN.TLD'. In my understanding, these two commands create the exact same HTTP request. The only difference is that in the latter one I take out the DNS lookup part from cURL and do this manually (please correct me if I'm wrong). All well so far. But now I want to do the same for an HTTPS url. Again, I could test it like this curl https://DOMAIN.TLD. But I want to specify the IP manually, so I run curl https://IP_ADDRESS -H 'Host: DOMAIN.TLD'. Now I get a cURL error: curl: (51) SSL: certificate subject name 'DOMAIN.TLD' does not match target host name 'IP_ADDRESS'. I can of course get around this by telling cURL not to care about the certificate (the "-k" option) but it's not ideal. Is there a way to isolate the IP address being connected to from the host being certified by SSL?
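
    Recent curl releases (7.21.3 and later) have an option for exactly this: --resolve pre-seeds curl's DNS cache, so the TLS handshake still sees the real host name while the connection goes to the IP you choose:

        # connect to IP_ADDRESS but validate the certificate against DOMAIN.TLD
        curl --resolve DOMAIN.TLD:443:IP_ADDRESS https://DOMAIN.TLD/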


  • Site resolves fine without "www", "www" creates database error

    - by PatrickS
    Working with BOA (Barracuda / Octopus / Aegir), I've installed a few Drupal sites without any problems, following the same process for all. BOA is running on Nginx. All sites go through Cloudflare's network, where I set the same DNS settings:

        A  example.com  ->  IPADDRESS
        A  www          ->  IPADDRESS

    The nameservers of each domain point to Cloudflare's respective nameservers. It all works, except for one site that works perfectly without "www" but with "www" returns the typical database error shown when Drupal can't find the site's database:

        Site off-line
        The site is currently not available due to technical problems. Please try again later.
        Thank you for your understanding. If you are the maintainer of this site, please check
        your database settings in the settings.php file and ensure that your hosting provider's
        database server is running. For more help, see the handbook, or contact your hosting
        provider.

    In BOA, all sites have the same alias, basically a symlink, redirecting like this: www.example.com -> example.com


  • Which upgrade path for disk IO bound postgres server?

    - by user41679
    Hi all, we currently have a Sun x4270 with 2x quad-core Xeon Nehalem 2.93 GHz CPUs (16 threads), 72 GB of RAM and 16 x 10k SAS disks, split between the OS RAID 1, a RAID 10 partition for the write-ahead logs, and a RAID 10 partition for the database tables and indexes, all XFS.

    I'm currently evaluating which upgrade path to go down. We'll be sharding the DB at some point soon, but for now I need to focus specifically on hardware upgrades. The machine is not CPU- or memory-bound at all at the moment; only iowait is becoming an issue. The machine sees mostly write access, as we have a heavy caching layer. We're seeing about 300 write IOPS on average on both database partitions. We don't have any additional storage infrastructure like a Fibre Channel or iSCSI network. Budget isn't too much of a concern; something in line with the size of this server (i.e. no $1M IBM machines). Space is OK on the DB side of things; we're running out, obviously, but there's also some reduction we can do. Additional space would be good, though.

    My current thoughts are:

    * An iSCSI SAN, possibly with a 10Gbit network, that has solid-state acceleration.
    * A Fusion-io card / Sun F20 card (will the Fusion-io card work in the Sun box?).
    * A DAS shelf (something like http://www.broadberry.co.uk/das-direct-attached-storage-servers/cyberstore-224s-das) with a combination of 15k SAS disks and some Intel X25-E drives for DB indexes etc. What would I need to put in the x4270 to add a DAS shelf? I think it's a SAS HBA card; do I have to use Sun's own card, or will any PCI Express card work?
    * Anything else??? What would you guys do, from your experience?

    I appreciate it's a lot of questions, but I haven't expanded a DB machine for a number of years and the landscape has changed dramatically since then! Any advice or feedback would be very much appreciated. Let me know if there's anything else I can clarify. Thanks in advance!

