Search Results

Search found 29159 results on 1167 pages for 'xml configuration'.

Page 985 of 1167

  • nginx + IIS + GET

    - by Eralde
    I have nginx on PC "A" and IIS with ASP.NET on PC "B". nginx is configured like this: ... location ~ ((Web|Script)Resource.*)$ { proxy_pass "B"/$1; proxy_redirect off; proxy_set_header REMOTE_ADDR $remote_addr; proxy_set_header REQUEST_URI $request_uri; proxy_set_header HTTP_REFERER $http_referer; #proxy_set_header REQUEST_URI $request_uri; proxy_set_header QUERY_STRING $query_string; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $http_host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } ... but requests to "B"/WebScript?a=b&c=d never deliver the GET data (a=b&c=d) to the IIS side. Could anyone help with this? Edit: Some additional info: nginx is also configured to proxy other traffic to Apache running on "A", and everything is fine there (at least GET works). The configuration is the same as above, but for a different location.
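    A minimal sketch of one possible fix (the upstream address "B_address" is a placeholder): when proxy_pass is built from a variable such as a regex capture, nginx does not append the original query string automatically, so it has to be re-attached explicitly.

        location ~ ((Web|Script)Resource.*)$ {
            # $is_args$args re-attach the query string, which is dropped
            # when proxy_pass uses a captured variable
            proxy_pass http://B_address/$1$is_args$args;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }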

    Read the article

  • Varnish going sick

    - by junke1990
    I'm having trouble with Varnish: it works for a couple of views and then just goes sick... The weird thing is that it does work for about 20 or 30 requests. If I call Apache directly it works fine. I'm running Varnish version 3.0.3-1 on Debian Squeeze and, for now, Apache on port 80 and Varnish on port 8080 on the same server. I'm using https://github.com/mattiasgeniar/varnish-3.0-configuration-templates as a base for my VCLs, and modified the VCLs to support Concrete5. Does anyone have any clue how I should debug this? backend default { .host = "127.0.0.1"; .port = "80"; .connect_timeout = 1.5s; .first_byte_timeout = 45s; .between_bytes_timeout = 30s; .probe = { .url = "/"; .timeout = 1s; .interval = 10s; .window = 10; .threshold = 8; } } LOG 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1353791312 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1353791315 1.0 0 Backend_health - default Still sick 4--X-R- 0 8 10 0.000689 0.000000 HTTP/1.1 301 Moved Permanently (the 301 is because I check for www.)
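    One hedged guess worth checking: the health probe requests "/", and by default Varnish only counts an HTTP 200 as healthy, so the 301 from the www. check will eventually mark the backend sick. A minimal Varnish 3.x sketch (the probe URL here is illustrative):

        backend default {
            .host = "127.0.0.1";
            .port = "80";
            .probe = {
                # probe something that really answers 200, or accept the redirect
                .url = "/health-check";          # hypothetical path returning 200
                .expected_response = 200;        # or set to 301 to accept the redirect
                .timeout = 1s;
                .interval = 10s;
                .window = 10;
                .threshold = 8;
            }
        }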

    Read the article

  • upload process times out for different time zones

    - by shilezi
    I have an in-house (NY) app that clients can upload files to. Uploads usually go quickly for most clients, but one particular client in the UK always has problems with uploads. I'm not sure if they see any errors, but since we log all exceptions we see this... Error: System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. ---> System.Net.WebException: The operation has timed out at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request) at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request) at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) at ... This shouldn't even be an issue, because they have had this problem before and we bumped the settings up to maxRequestLength="2097151" and executionTimeout="14400". Investigating the error further, I read that it could be a thread timeout, since the default is 20 minutes: "A worker process with process id of serving application pool was shutdown due to inactivity. Application Pool timeout configuration was set to 20 minutes. A new worker process will be started when needed." The problem is I am not entirely sure that really is the case, as all other clients, mostly North American, have no issues; but then their uploads don't seem to go beyond 80 MB, and the UK client has done a 700 MB upload before that I know of. We have tested 750 MB before, and the whole process took about 15 minutes for upload and processing. Any help on what the real issue might be? Thanks.
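    For reference, a hedged sketch of where the two quoted settings live (values as given in the question; maxRequestLength is in KB, executionTimeout in seconds). The quoted shutdown message, on the other hand, is about the application pool's Idle Time-out, which is set on the app pool in IIS Manager rather than in web.config, so raising httpRuntime alone would not change it:

        <!-- web.config fragment -->
        <system.web>
          <httpRuntime maxRequestLength="2097151" executionTimeout="14400" />
        </system.web>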

    Read the article

  • Should I be able to get the Windows Phone 8 emulator running with this AMD config

    - by pete
    I have just bought a new system as follows: an AMD A4-5300 Trinity 3.4GHz and a Gigabyte GA-F2A55M-DS2 motherboard. I checked and the A4-5300 supports virtualisation. I have run the SLAT checking tool and it says that my machine supports SLAT. I've been into the BIOS and ensured that SVM is enabled. I have also run the coreinfo tool, which indicates that SVM is not available for the machine, nor is NPT. I'm not sure why this conflicts with the BIOS setting (I checked with Gigabyte and it seems that NPT and NX are not options that can be set in the BIOS). But when I run the emulator it hangs. I can see some activity in Hyper-V Manager, but it doesn't get very far and there are no meaningful error messages. I've been working through the plethora of suggestions to get around this, but so far nothing has worked. My question is: should this setup allow the WP8 emulator to run? I am positive the CPU supports virtualisation, but are there issues with the motherboard here? I have spent a huge amount of time on this so far... am I wasting my time with the current configuration, or should it work? Thanks

    Read the article

  • zfs pool error, how to determine which drive failed in the past

    - by Kendrick
    I had been copying data from my pool so that I could rebuild it with a different version, so I could move away from Solaris 11 to one that is portable between FreeBSD/OpenIndiana etc. It was copying at 20 MB/s the other day, which is about all my desktop drive can handle writing from the network. Suddenly last night it went down to 1.4 MB/s. I ran zpool status today and got this: pool: store state: ONLINE status: One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected. action: Determine if the device needs to be replaced, and clear the errors using 'zpool clear' or replace the device with 'zpool replace'. see: http://www.sun.com/msg/ZFS-8000-9P scan: none requested config: NAME STATE READ WRITE CKSUM store ONLINE 0 0 0 raidz1-0 ONLINE 0 0 0 c8t3d0p0 ONLINE 0 0 2 c8t4d0p0 ONLINE 0 0 10 c8t2d0p0 ONLINE 0 0 0 It is currently a 3 x 1TB drive array. What tools would be best to determine what the error was and which drive is failing? Per the admin doc: "The second section of the configuration output displays error statistics. These errors are divided into three categories: READ – I/O errors occurred while issuing a read request. WRITE – I/O errors occurred while issuing a write request. CKSUM – Checksum errors. The device returned corrupted data as the result of a read request." It says low counts could be anything from a power flux to a disk event, but gives no suggestions as to which tools to check and determine this with.
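    A hedged sketch of commands that usually help narrow this down on Solaris/illumos (device names are taken from the question; smartctl only exists if smartmontools happens to be installed):

        zpool status -v store       # per-device read/write/checksum error counts
        fmdump -eV | less           # FMA error telemetry, with timestamps and device paths
        iostat -En                  # per-device hard/soft/transport error counters
        # SMART data for the disk showing checksum errors, if smartmontools is present:
        smartctl -a /dev/rdsk/c8t4d0p0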

    Read the article

  • Bacula stops writing to disk volume after 2GB

    - by m.list
    Bacula Version: 5.2.5. I have configured Bacula to write volumes to disk; however, Bacula stops writing to the volume as soon as it reaches 2 GB. The file system is not an issue, as I have stored files larger than 2 GB on it. 06-Dec 17:22 backup-sd JobId 8421: End of Volume "Full-Monthly-0005" at 0:2147475577 on device "FileStorage" (/nfs/backup-pool). Write of 64512 bytes got 8069. 06-Dec 17:22 backup-sd JobId 8421: End of medium on Volume "Full-Monthly-0005" Bytes=2,147,475,578 Blocks=33,288 at 06-Dec-2012 17:22. backup1@backup:/nfs/backup-pool$ ls -alh Full-Monthly-0005 -rw-r----- 1 bacula tape 2.0G Dec 3 16:14 Full-Monthly-0005 bacula-dir.conf: Pool { Name = Full-Monthly Pool Type = Backup Recycle = yes Volume Retention = 5 months Volume Use Duration = 1 day Maximum Volumes = 5 Maximum Volume Bytes = 12gb } bacula-sd.conf: Device { Name = FileStorage Media Type = File Archive Device = /nfs/backup-pool LabelMedia = yes # lets Bacula label unlabeled media Random Access = Yes RemovableMedia = no AlwaysOpen = no Label media = yes Maximum Volume Size = 12gb } In my original configuration Maximum Volume Bytes and Maximum Volume Size were not set at all, and so should have defaulted to no maximum, but that did not work either.
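    One hedged guess: Maximum Volume Bytes is copied from the Pool into each volume's catalog record when the volume is labelled, so changing the Pool resource afterwards does not affect volumes that already exist until they are updated from bconsole. A sketch (the interactive option wording varies by version; 'help update' shows the exact form):

        # from bconsole
        list volumes        # check the MaxVolBytes column for Full-Monthly-0005
        update              # choose "Volume parameters", then the volume,
                            # then "Maximum Volume Bytes" (or "Pool from resource")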

    Read the article

  • Getting PAM/user info into php - something like Net_Finger instead of a db?

    - by digitaltoast
    I've got a very small user group who just need to log in, upload, check and then move specific files to a different area when ready. Right now I use the nginx PAM auth module to log them in against their Unix accounts. As their login is their home directory, I've already got the info needed to send the uploads to the right area: one line of PHP and no database needed. But I'm maintaining a separate DB just so PHP can welcome them, grab their email and send them an email when processed. Sure, I could use NoSQL or SQLite instead so as to not need a whole MySQL install. But it occurred to me that, since I've got all these blank user fields for phone numbers that I could populate with any data, I could use something like PHP's Net_Finger. Which failed for me with: sudo pear install Net_Finger Starting to download Net_Finger-1.0.1.tgz (1,618 bytes) ....done: 1,618 bytes could not extract the package.xml file from "/build/buildd/php5-5.5.9+dfsg/pear-build-download/Net_Finger-1.0.1.tgz" Download of "pear/Net_Finger" succeeded, but it is not a valid package archive Error: cannot download "pear/Net_Finger" At which point I thought I'd stop and take a ServerFault reality check: is this a really bad/dangerous/stupid idea just to stop me having to maintain details in two places rather than one? Is there a better way? Googling shows that it's not an oft-asked thing, so perhaps with good reason?
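    For the narrow goal of pulling account details into PHP without a parallel database, a hedged sketch that skips finger entirely and reads the local account record directly (requires PHP's POSIX extension; the GECOS field is only useful if something is actually stored in it):

        <?php
        // $username would be the PAM login name nginx already authenticated
        $info = posix_getpwnam($username);
        if ($info !== false) {
            $home  = $info['dir'];    // e.g. the upload area
            $gecos = $info['gecos'];  // full name / office / phone, comma-separated
        }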

    Read the article

  • HAProxy appsession vs cookie precedence

    - by user1139473
    I am trying to find the best solution for balancing and keeping persistence on our application behind HAProxy. Here is our basic configuration: https://gist.github.com/endzyme/1804046b23c37beba520 After playing around with taking members down and up and also reloading haproxy (with -sf), I have noticed that appsession isn't 100% effective; it would appear that it doesn't always 'request-learn'. I also tried adding a cookie JSESSION prefix to balance on, in case request-learn didn't take. Unfortunately it would present scenarios where the prefix listed svr2 but the request was balanced to a different server. I am assuming it's because the appsession table is consulted first and sticks on that before using the cookie parameter. I have not tested using cookie as an inserted option (rather than a prefix on the existing cookie), but I am thinking it would yield similar results. My question is: which one is checked first, appsession or cookie, and is it an immediate match after it reads the first one, or a fall-through? Also, as a follow-up: is it not recommended to use both in the same backend? Cookie, as I understand it, takes less memory, is agnostic to reloads and has much better persistence reliability. Appsession I assume takes less CPU, since it's reading rather than writing. (Bonus question: is there a way to inspect the appsession/cookie table map? 'show table' on the socket doesn't show anything except stick-tables.) Many thanks in advance, -Nick
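    A hedged sketch of the insert-mode variant mentioned above, which avoids prefixing the application's JSESSIONID at all; backend name, server names and addresses are illustrative:

        backend app
            balance roundrobin
            cookie SRVID insert indirect nocache
            server svr1 10.0.0.11:8080 check cookie svr1
            server svr2 10.0.0.12:8080 check cookie svr2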

    Read the article

  • What are the advantages of registered memory?

    - by odd parity
    I'm browsing for a few low-end servers for a startup and I'm a bit confused about the different memory types. The advantage of ECC is clear - single-bit error correction. When it comes to registered memory it seems more vague, especially in systems that support both registered and unbuffered memory. A Google search mostly finds copies of the Wikipedia article, which states that registered memory chips "...place less electrical load on the memory controller and allow single systems to remain stable with more memory modules than they would have otherwise". However I can't find any quantification of this. What I'm wondering about is: Is registered memory an improvement over unbuffered when it comes to soft error rate, or is it purely about the maximum number of modules supported? If so, at what point (number of modules or GB of memory) do these improvements start to become noticeable? For a specific example, the HP ProLiant DL 120 G6 server manual states that the maximum supported memory configuration is 16 GB unbuffered (4x4GB) or 12 GB registered (6x2GB). In this case I'd rather have the extra 4GB of memory if the reliability difference is negligible.

    Read the article

  • Oracle 11g Data Guard over a WAN

    - by Dave LeJeune
    Hi - We are in the process of looking at using Oracle's Data Guard to replicate our 11g instance from a colo facility in Washington DC to Chicago. To give some basics, we have approximately 25TB of storage and a healthy transaction rate in the 1-2K/sec range. Also, because we are processing data in real time, we have a 24x7x365 requirement for processing data. We don't have any respites as far as volume goes, except for system upgrades (once every few months) where we take the system offline but then of course experience a spike in transactions when we bring the system back online. Ideally we would want the second instance in the DG configuration semi-online in a read-only fashion for reports etc. We evaluated DG in 10g and were not overly impressed, and research seemed to show that earlier versions had issues with replication over a WAN, but I have heard good things about the changes the product has gone through with 11g. Can anyone confirm an instance of this size and transaction rate being replicated over a WAN, and if so, what is the general latency? Any information or experiences with a DG implementation of this size and scope would really be helpful (or larger; I also realize we are still relatively small compared to many others out there). Many thanks in advance.

    Read the article

  • Unable to connect to SQL Database (can the password be reset)

    - by user45450
    I have recently joined a company which has an SQL 2005 server running a few databases. The server looks like no one has touched it in a couple of years, and this week it ran out of disk space. After a quick hard drive scan it looks like some of the databases have become a little bloated; in particular the Sharepoint_config~*~_log and WSS_Content_log.ldf have grown to about 15GB. I have been able to log into a couple of the other databases and use the shrinkfile command to free up disk space, but for some reason I am unable to log into the SharePoint and Microsoft#SSEE databases (which gives me the "cannot connect to Sharepoint, a network related or instance specific error occurred..." message when I try to connect). I can see that the database is running via the SQL Surface Area Configuration, and I have made sure that the remote connection settings allow me to connect locally, but I am still unable to log in either with Windows authentication or locally. Is there any way to reset or recover the database login details so I can get in? (I have tried logging in with all the administrative passwords I can find, and after tracking down the company who installed it in the first place I found out that they have no idea what the password could have been.)
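    A hedged sketch for the Microsoft##SSEE part: the Windows Internal Database only accepts local connections over its named pipe with Windows authentication, so a local administrator can often get in that way even when nobody knows a SQL password. Database and logical log file names below are placeholders, since the real SharePoint_Config log name contains a GUID:

        sqlcmd -E -S np:\\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query
        1> USE SharePoint_Config
        2> GO
        1> DBCC SHRINKFILE (N'SharePoint_Config_log', 500)
        2> GO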

    Read the article

  • Force Installing a Radeon HD 2100 on Windows 8

    - by Click Ok
    I'm trying to force-install a Radeon HD 2100 on Windows 8. I've found this link from AMD with the drivers for Windows 7: http://support.amd.com/us/gpudownload/windows/legacy/Pages/legacy-radeonaiw-vista64.aspx I also know that AMD has stopped supporting the Radeon HD 4000 series and older: http://www.techspot.com/news/48321-amd-drops-windows-8-support-for-radeon-hd-4000-and-older.html Now, let's get to the problem. If I try to install the 12.6 driver, Windows will stick with the "basic display adapter", and this is bad for 3D games like Minecraft, which now run really slowly compared with the previous Windows 7 installation. Force-installing the Catalyst driver might help fix it, so I followed these steps: extract the Catalyst driver (C:\AMD\Support\12-6-legacy_vista_win7_64_dd_ccc_whql); right-click the "basic display adapter" in Device Manager and choose "Update driver"; search on the PC; choose the driver "With Disk" from "C:\AMD\Support\12-6-legacy_vista_win7_64_dd_ccc_whql\Packages\Drivers\Display\W76A_INF". There is a big list of drivers, and the nearest one to the HD 2100 is "Radeon HD 2350 Series". My questions: why isn't "Radeon HD 2100 Series" listed (or where is it listed)? In theory it should be; the first link above shows that "This article applies to the following configuration(s):" (...) "AMD Radeon HD 2000 Series". Am I doing something wrong?

    Read the article

  • Domain joining debate for Outlook 2010 with Exchange 2007 on Windows SBS 2008, for a user on a laptop that will travel a fair amount of the time

    - by user71195
    I'm basically debating whether or not to join the domain on a laptop, and was wondering if anyone has had a similar experience. If the computer were staying in the office, it would be a no-brainer: join the domain. In this case I have a user who will come into the office a few days a week and work remotely the rest of the time. There is a working VPN using an OpenVPN client/server, but it's not site-to-site. My knee-jerk reaction is not to join the domain, so that the user can have one profile that they always use. In this configuration, should Outlook work properly with the user's domain account, and should the shared calendar still work (at least once inside the VPN)? My concern with joining the domain would be the inability to log into it when elsewhere. Is there maybe a way around this with caching or something? Would creating a second local login make sense for a user like this in any way? If so, why not just skip the domain join to begin with? Any thoughts on or experiences with this would be appreciated. Laptop OS: Windows 7 (not purchased yet; Pro if a domain join is needed). Server: SBS 2008, Exchange 2007. Outlook version: 2010. Thanks for any help, Mike

    Read the article

  • Route all traffic of home network through VPN

    - by user436118
    I have a typical semi-advanced home network scenario: a cable modem (eth), a wireless router (Netgear N600, eth and wlan), a home server (running Ubuntu 12.04 LTS, connected over wlan) and a bunch of wireless clients (wlan). Lying around I have another cheaper wlan router and two different USB wlan NICs that are known to work with Linux. ACTA struck. I want to route ALL of my WAN traffic through a remote server through a VPN. For the sake of completeness, let's say there is a remote server running Debian Squeeze where a VPN server is to be installed. The network should then behave so that if the VPN is not up, the LAN is cut off from the outside world. I am familiar with general system/network practices, but lack the specific detailed knowledge to accomplish this. Please suggest the right approach, packages and configurations you'd use to reach this solution. I've also envisioned the following network configuration; please improve it if you see fit:
    ==LAN==
    Client: ip 10.1.1.x, nm 255.0.0.0, gw 10.1.1.1, reached via WLAN
    Wlan router 1: ip 10.1.1.1, nm 255.0.0.0, gw 10.10.10.1, reached via ETH
    Home server (the VPN is initiated here, with the other endpoint somewhere on the internet):
        eth0: ip 10.10.10.1, nm 0.0.0.0, gw 192.168.0.1, reached via WLAN
        wlan0: ip 192.168.0.2, nm 255.255.255.0, gw 192.168.0.1, reached via WLAN
    ==WAN==
    Wlan router 2: ip 192.168.0.1, nm 0.0.0.0, gw set via DHCP, uplink connector: cable modem
    Cable modem: remote DHCP; it has an on-board DHCP server for the ethernet device that connects to it, and only works this way.
    All this WLAN fussery is because my home server is located in a part of the house where a cable link isn't possible, unfortunately.
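    A hedged sketch of the usual OpenVPN approach for the "route everything" part, on the remote Debian box. 10.8.0.0/24 is OpenVPN's default tun subnet and eth0 is assumed to be that server's public interface; both are assumptions:

        # /etc/openvpn/server.conf (fragment): push a default route to connecting clients
        push "redirect-gateway def1 bypass-dhcp"
        push "dhcp-option DNS 8.8.8.8"

        # on the same server: forward and NAT the tunnelled traffic
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE

    The "cut off when the VPN is down" requirement is then mostly a matter of the home gateway only ever routing its default route through the tun interface, i.e. a kill-switch style firewall rule set rather than a fallback route through the cable modem.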

    Read the article

  • Identifying test machines in analytics logs

    - by RTigger
    We're just beginning to add analytics to our SaaS application, to begin (among other things) billing clients based on usage. The problem we're running into is that there are a few circumstances where our support team will simulate a login into production to try to reproduce reported issues with a client's configuration. When they log in, an entry is made in our analytics logs saying that that specific account has logged in, which we use to calculate billing. A few ideas we had to solve this: 1) We log IP addresses as well as machine keys for each PC that logs in; we could filter out known IP addresses and/or machine keys belonging to support. The drawback is that we have to maintain a list of keys/addresses manually. 2) If support (or anyone else internal) runs our application in debug mode (as opposed to release), it will not report analytics. This is fine, as long as support or anyone else remembers to switch to debug mode. 3) Include some sort of registry key or similar setting that must be set when configuring a production system in order to send analytics. Again, fine, as long as our infrastructure team remembers to set it. All of these approaches require some sort of human involvement, which we all know can be iffy at best. Has anyone run into a similar situation? Is there an automated approach to this problem? (PS: Of course, we shouldn't be testing in production, but there are a few one-off instances with customer setup that we can't reproduce without logging in as them in production. This is the only time we do so, and this is the case I'm talking about in this question.)

    Read the article

  • SMF restarting service whenever there's output?

    - by Phillip Oldham
    I'm trying to add a custom service to SMF's configuration, which seems successful in that the service starts and there is a log file, but therein lies the problem: the service, on start-up, prints some logging messages to stderr. It seems that SMF is seeing those messages and, believing them to be errors, restarts the service, giving up after a number of tries and leaving the service off. Here's part of the log output: [ Mar 30 14:59:54 Enabled. ] [ Mar 30 14:59:54 Executing start method ("java server.CustomServer"). ] Starting server... [ Mar 30 15:00:04 Method or service exit timed out. Killing contract 107. ] Running the server directly on the command line is fine, and AFAICS there are no errors encountered during startup other than the output. What would be the best way to manage this service with SMF? The logging is needed for diagnosing problems and would be problematic to disable. Is it possible to configure this service to only restart if the service exits?
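    A hedged observation: the quoted log line is a start-method timeout rather than an error-output problem, which is what happens when the start command stays in the foreground under SMF's default contract model. One way to express "run this command in the foreground and only restart it when it exits" in the manifest is a child-model service; an illustrative XML fragment (the timeout value is a guess):

        <!-- run the start command in the foreground under SMF supervision -->
        <property_group name="startd" type="framework">
          <propval name="duration" type="astring" value="child"/>
        </property_group>

        <exec_method type="method" name="start"
                     exec="java server.CustomServer"
                     timeout_seconds="60"/>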

    Read the article

  • How to reliably mount a shared folder /volume/folder at boot up

    - by Tanmay
    Following is my sample.sh in /usr/local/bin/:
        #!/bin/sh
        mkdir -p /Volumes/folder
        mount -t afp -o rw afp://user:password@server_name/folder_name /Volumes/folder
    Following is my com.apple.sample.plist in /Library/LaunchAgents/:
        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>com.apple.sample</string>
            <key>ProgramArguments</key>
            <array>
                <string>/usr/local/bin/sample.sh</string>
            </array>
            <key>RunAtLoad</key>
            <true/>
        </dict>
        </plist>
    However, I am able to run sample.sh on its own and it works fine. I have also tried using launchd.conf with:
        mkdir -p /Volumes/folder
        mount -t afp -o rw afp://user:[email protected]/testsuites /Volumes/folder
    It is still not working.
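    If the share simply isn't reachable yet at the moment the LaunchAgent fires, one hedged workaround is to make the script retry until the mount actually appears (paths and credentials as in the question):

        #!/bin/sh
        mkdir -p /Volumes/folder
        # keep trying until the AFP share is really mounted (give up after ~5 minutes)
        i=0
        until mount | grep -q " /Volumes/folder "; do
            mount -t afp -o rw afp://user:password@server_name/folder_name /Volumes/folder
            i=$((i+1)); [ "$i" -ge 60 ] && break
            sleep 5
        done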

    Read the article

  • Glassfish and SSL [closed]

    - by Richard
    I'm struggling to get SSL working on GlassFish 3.1.1. I've been following tutorials like http://javadude.wordpress.com/2010/04/06/getting-started-with-glassfish-v3-and-ssl/ and SO posts like this: Issues with setting up SSL on Glassfish v3. The above links are for information only; I've summarised what I've done below. As far as I can tell I'm doing everything correctly, but I'm getting this error: SSL configuration is invalid due to No available certificate or key corresponds to the SSL cipher suites which are enabled. Some background on what I have done: my cert is from GoDaddy. I generated the CSR from a new keystore (keystore.jks), then imported the resulting certs back into the same keystore and set the keystore password to the same password as the GF master password. Then I created a new SSL listener in GF and pointed it at my keystore file (which I copied into domains/domain1/config). I set the Nickname to the alias of my cert (which is something like 'mydomain.org', i.e. the name that I get when I run keytool -list). In the ciphers section of the network listeners page, I leave the defaults in place (empty, which means all ciphers are available, I think). In domain.xml I've replaced all instances of s1as with 'mydomain.org'. This is the question: what exactly is causing the error highlighted? I'm guessing it's a mismatch between my listener config and the aliases in my keystore, or something similar, but I'm not really sure what.
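    A hedged way to confirm or rule out the alias/nickname mismatch theory: list the keystore and check that the listener's cert-nickname points at an entry of type PrivateKeyEntry (not just a trustedCertEntry), since the enabled cipher suites need a private key to pair with. A sketch, with the keystore path and alias taken from the question:

        keytool -list -v -keystore domains/domain1/config/keystore.jks
        # look for:  Alias name: mydomain.org   Entry type: PrivateKeyEntry

        # show which nickname each listener is actually configured with
        asadmin get "*" | grep cert-nickname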

    Read the article

  • Server taking too long to respond error

    - by DCJones
    Hi, this is my first post on Server Fault and my first venture into web server configuration. The hardware and software: CPU: GenuineIntel, Intel(R) Core(TM)2 Duo CPU E7500 @ 2.93GHz. OS: Linux 2.6.18-128.el5. Memory: 2Gb. Background: I am running a small database (MySQL), around 1000 records, with each record containing 44 fields. At the start of each day (00:01) the tables are cleared and repopulated with fresh data. There are 10 remote PCs, all running Windows XP and the Firefox browser. All remote PCs are connected to the internet using at least a 4Gb broadband connection. Each remote PC requests a URL which displays a dynamic page of data refreshed every 20 seconds. This is a continual process 24 hours a day. The problem I am having is that at odd times throughout the day the PC browsers error with "Server taking too long to respond". What I am trying to find out is whether I have the correct settings in the httpd.conf file on the server. Any help or advice anyone can provide would be very helpful. Best regards, Dereck. Server config file (httpd.conf): ServerRoot "/etc/httpd" PidFile run/httpd.pid Timeout 120 KeepAlive On MaxKeepAliveRequests 200 KeepAliveTimeout 5 StartServers 8 MinSpareServers 5 MaxSpareServers 20 ServerLimit 256 MaxClients 254 MaxRequestsPerChild 4000 StartServers 2 MaxClients 150 MinSpareThreads 25 MaxSpareThreads 150 ThreadsPerChild 25 MaxRequestsPerChild 0

    Read the article

  • Enabling `mod_rewrite` in Apache, permissions issues

    - by rudolph9
    I am attempting to enable mod_rewrite on the Apache2 web server installed with Mac OS X 10.7.4, following these instructions, ultimately using the configuration to host CakePHP applications. I run into permission issues accessing the site via a web browser when I change the directory block associated with the CakePHP site in /etc/apache2/users/username.conf from: <Directory "/Users/username/Sites/"> Options Indexes FollowSymLinks MultiViews AllowOverride none Order allow,deny Allow from all </Directory> to: <Directory "/Users/username/Sites/"> Options Indexes MultiViews AllowOverride none Order allow,deny Allow from all </Directory> <Directory "/Users/username/Sites/cakephp_app/"> Options Indexes FollowSymLinks MultiViews AllowOverride all Order allow,deny Allow from all </Directory> The .htaccess files are the CakePHP 2.2.2 defaults, as follows: /Users/username/Sites/cakephp_app/.htaccess <IfModule mod_rewrite.c> RewriteEngine on RewriteRule ^$ app/webroot/ [L] RewriteRule (.*) app/webroot/$1 [L] </IfModule> /Users/username/Sites/cakephp_app/app/.htaccess <IfModule mod_rewrite.c> RewriteEngine on RewriteRule ^$ webroot/ [L] RewriteRule (.*) webroot/$1 [L] </IfModule> /Users/username/Sites/cakephp_app/app/webroot/.htaccess <IfModule mod_rewrite.c> RewriteEngine on RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^(.*)$ index.php [QSA,L] </IfModule> When requesting http://0.0.0.0/~username/cakephp_app/index.php in a web browser, the response is: Not Found The requested URL /Users/username/Sites/cakephp_app/app/webroot/ was not found on this server. Apache/2.2.21 (Unix) DAV/2 PHP/5.3.10 with Suhosin-Patch Server at 0.0.0.0 Port 80 Upon requests to http://0.0.0.0/~username/ and http://0.0.0.0/~username/cakephp_app/, the following are added to /var/log/apache2/error_log: [Tue Sep 04 22:53:26 2012] [error] [client 127.0.0.1] File does not exist: /Library/WebServer/Documents/Users, referer: http://0.0.0.0/~username/ [Tue Sep 04 22:53:26 2012] [error] [client 127.0.0.1] File does not exist: /Library/WebServer/Documents/favicon.ico What is causing the issue? Also, is there a server program, ideally installable via Homebrew, that would make hosting CakePHP applications for testing purposes more effective and efficient?
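    One hedged thing to try: when CakePHP is served from a per-user Sites directory rather than the document root, the stock rewrite rules often need an explicit RewriteBase so rewritten paths stay under /~username/ instead of falling back to /Library/WebServer/Documents. A sketch for app/webroot/.htaccess (the base path mirrors the URL used in the question; the other two .htaccess files would need matching RewriteBase lines for their own directories):

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteBase /~username/cakephp_app/app/webroot/
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^(.*)$ index.php [QSA,L]
        </IfModule>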

    Read the article

  • Mixing both local and nonlocal addresses on three switches

    - by klew
    I have four computers that have non-local addresses like 150.X.X.X. Now I am also getting another few computers that should only be accessible through a gateway (it will be a computing cluster), and their addresses are 10.0.0.X. I also wanted to include those four older computers in this new cluster, but I want them to remain accessible from the internet on their non-local addresses (so I would like to set them up with both 150.X.X.X and 10.0.0.X addresses; I've set this up as interface eth0:0 since I have only one NIC). The new computers have their own switch and the old computers also have their own switch. Both of these are connected to another (third) switch. The problem is that the old computers see each other (I can ping them), and the new computers also see each other, but I can't ping an old computer from a new computer and vice versa. However, pinging on the non-local addresses works as expected. I looked into the switch configuration and didn't find anything useful. I have no idea what I missed here. Can somebody help? All computers run Ubuntu Server 10.04.
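    A hedged sketch of checks that usually narrow this kind of thing down (the interface and address are placeholders matching the question's 10.0.0.X scheme; arping comes from the iputils-arping or arping package):

        ip addr show eth0           # is the 10.0.0.X alias actually up, with the expected netmask?
        ip route                    # is there a route for the 10.0.0.0 network on this interface?
        arping -I eth0 10.0.0.X     # does the host on the other switch answer ARP at all?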

    Read the article

  • iptables (DNAT)

    - by user1126425
    I have a host that acts as a gateway for other hosts. The configuration is such that eth0 (192.168.1.3) is connected to the internet via a router and eth1 (172.16.2.50) is connected to the internal network via a switch. This host also runs a service that is bound to eth1 and serves the internal network. I want to extend this service to the outside world as well, and was trying to manipulate iptables so that any request that comes to this host via eth0 and is directed to 192.168.1.3:80 is sent to 172.16.2.50, so internet users can also make use of the service. Here are my iptables rules for setting up the host as a gateway (and these work fine): sudo iptables -t nat -A POSTROUTING -s 172.16.2.0/16 -o eth0 -j MASQUERADE sudo iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth1 -j MASQUERADE sudo iptables -A FORWARD -s 172.16.2.0/16 -o eth0 -j ACCEPT sudo iptables -A FORWARD -d 172.16.2.0/16 -m state --state ESTABLISHED,RELATED -i eth0 -j ACCEPT And these are the rules that I am trying to add to achieve this: sudo iptables -A INPUT -d 192.168.1.3 -p tcp -dport 80 -i eth0 -j ACCEPT sudo iptables -t nat -A PREROUTING -d 192.168.1.3 -p tcp -dport 80 -j DNAT --to-destination 172.16.2.50:80 sudo iptables -t nat -A PREROUTING -s 172.16.2.50 -p tcp -sport 80 -j SNAT --to-source 192.168.1.3:80 sudo iptables -A FORWARD -d 192.168.1.3 -p tcp -dport 80 -m state --state ESTABLISHED,RELATED -j ACCEPT When I do so, I get an error like "multiple -d flags not allowed"... Can someone tell me how to resolve this error, and will the entries that I want to add serve my purpose? Thanks!
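    A hedged sketch of what the added rules might look like once the flag typo is fixed: "-dport" is parsed as a second "-d", hence the "multiple -d flags" error; the option is "--dport" (and "--sport"). The SNAT rule also isn't needed for this, and SNAT is only valid in the POSTROUTING chain anyway:

        # redirect port 80 arriving on eth0 to the internal service
        sudo iptables -t nat -A PREROUTING -i eth0 -d 192.168.1.3 -p tcp --dport 80 \
             -j DNAT --to-destination 172.16.2.50:80
        # allow the forwarded connections through; replies are covered by the
        # existing ESTABLISHED,RELATED and internal-to-external rules
        sudo iptables -A FORWARD -i eth0 -o eth1 -d 172.16.2.50 -p tcp --dport 80 -j ACCEPT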

    Read the article

  • Migrate active directory to Google apps for business

    - by dewnix
    I've got a problem migrating Active Directory to Google Apps. I'm stuck in Google Apps Directory Sync (GADS), where it just gives the error "java.lang.NullPointerException" after testing the connection during the LDAP configuration step. I checked the logs and I've pretty much determined that port 389 (the standard LDAP port) isn't listening on the Exchange server. I've tried telnetting to it (from another machine on the same network) with no luck, but I can telnet successfully to other ports that I know are open. I know they're open because I used portqry and netstat to see them. I suspect Active Directory isn't even installed/running on this machine, because there are no Active Directory services running on it at all. (There are also no Active Directory services listed as not running, though.) Is it possible AD is installed somewhere else? Does it have to be on a machine inside the same network? I found the domain controller and its host name, and when I telnet to it on port 389 it works; however, GADS still gives me the same exact error when I substitute that server in. Actually, no matter what ridiculous settings I put into GADS, I still get that same NullPointer error. If I could get a different error than that NullPointer, I'd call that a successful day.
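    A hedged sketch of how to confirm where the writable domain controllers actually are, independent of any single server guess (replace example.local with the real AD DNS domain):

        :: domain controllers advertise themselves through DNS SRV records
        nslookup -type=SRV _ldap._tcp.dc._msdcs.example.local

        :: or, from any domain-joined Windows machine
        nltest /dclist:example.local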

    Read the article

  • Issues with returned mail sent to web-based email domains

    - by Beeder
    My company is having issues with returned mail that we send out to external domains. A few weeks ago we replaced a firewall and changed ISP providers, and subsequently began having issues RECEIVING emails from external sources because we hadn't updated the new IPs in our DNS records. After making the necessary configuration changes and setting up SMTP forwarding over port 25 to our mail server, everything was working fine up until a few days ago, when messages we send out started being returned to us. We aren't having any trouble communicating internally (with recipients on our domain), but it seems we're having trouble with outbound messages to web-based email recipients (@hotmail, @live, @yahoo, @gmail, etc.). Currently we are running Server 2003 SP2 and Exchange 2003. I'm very unfamiliar with configuring Exchange and could really use some help in narrowing down the possibilities. I did some research and am becoming suspicious of Sender ID being the culprit, due to our recent IP address change and the likelihood that Sender ID is identifying us as a fake domain. Am I going in entirely the wrong direction? Any input or guidance would be infinitely appreciated. This is the message that is returned when an outbound message fails; this particular one was sent to my @live.com account for testing purposes: Your message did not reach some or all of the intended recipients. The following recipient(s) could not be reached: [email protected] on 5/17/2012 3:02 PM There was a SMTP communication problem with the recipient's email server. Please contact your system administrator. Unfortunately, messages from xx.x.xx.x weren't sent. Please contact your Internet service provider since part of their network is on our block list. I tried a reverse DNS lookup and found that we are set up as forward-confirmed reverse DNS. So do I just need to contact my ISP and have them correct their DNS records, or is this something I can solve on our end?
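    Two hedged checks that follow from the bounce text and the Sender ID theory; the addresses are illustrative (203.0.113.10 is a documentation address, and the reversed-octet form is how DNS blocklists are queried):

        # is the new outbound IP on a common DNS blocklist? (octets reversed)
        nslookup 10.113.0.203.zen.spamhaus.org

        # SPF / Sender ID record for the sending domain, published as a DNS TXT
        # record once the new IP is known (zone-file style, values illustrative):
        # example.com.  IN TXT  "v=spf1 ip4:203.0.113.10 mx -all"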

    Read the article

  • Running Solr on VPS problem

    - by Camran
    I have a VPS running Ubuntu. I run Solr on my local machine (a Windows XP laptop) just fine. I have configured Jetty and Solr on the server the same way as on my computer. I have also downloaded the JRE and installed it on the server. However, whenever I run the start.jar file, the PuTTY terminal shows a bunch of text and then appears to get stuck. I could paste the text here, but it is very long, so unless somebody wants to see it I won't. Also, I can't view the Solr admin page at all. Does anybody have experience with this kind of problem? Maybe Java isn't correctly installed? It is a VPS, so maybe the installation is different. Thanks. UPDATE: These are the last lines from the terminal; in other words, this is where it stops every time: INFO: [] webapp=null path=null params={event=firstSearcher&q=static+firstSearcher+warming+query+from+solrconfig.xml} hits=0 status=0 QTime=9 May 28, 2010 8:58:42 PM org.apache.solr.core.QuerySenderListener newSearcher INFO: QuerySenderListener done. May 28, 2010 8:58:42 PM org.apache.solr.handler.component.SpellCheckComponent$SpellCheckerListener newSearcher INFO: Loading spell index for spellchecker: default May 28, 2010 8:58:42 PM org.apache.solr.core.SolrCore registerSearcher INFO: [] Registered new searcher Searcher@63a721 main Also, you should know that I installed Jetty by just dragging the folders from my hard drive to the VPS server.
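    A hedged reading of that output: "Registered new searcher" is normally the last thing Solr logs on a successful start, so the terminal looking stuck may simply mean Jetty is running in the foreground. A sketch of how to run it detached and verify it is listening (the path is illustrative; 8983 is the default example port):

        cd /path/to/apache-solr/example
        nohup java -jar start.jar > solr.log 2>&1 &
        netstat -tlnp | grep 8983                  # is Jetty listening?
        curl -I http://localhost:8983/solr/admin/  # does the admin page answer locally?

    If that works locally but the admin page is still unreachable from outside, the VPS firewall, or Jetty binding only to localhost, would be the next things to check.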

    Read the article
