Search Results

Search found 33454 results on 1339 pages for 'access token'.


  • nginx rewrite or internal redirection cycle

    - by gyre
    I'm banging my head against a table trying to figure out what is causing a redirection cycle in my nginx configuration when I try to access a URL that does not exist. The configuration goes as follows:

        server {
            listen 127.0.0.1:8080;
            server_name .somedomain.com;
            root /var/www/somedomain.com;
            access_log /var/log/nginx/somedomain.com-access.nginx.log;
            error_log /var/log/nginx/somedomain.com-error.nginx.log debug;

            location ~* \.php.$ {
                # Proxy all requests with a URI ending with .php*
                # (includes PHP, PHP3, PHP4, PHP5...)
                include /etc/nginx/fastcgi.conf;
            }

            # all other files
            location / {
                root /var/www/somedomain.com;
                try_files $uri $uri/ ;
            }

            error_page 404 /errors/404.html;
            location /errors/ {
                alias /var/www/errors/;
            }

            # this loads custom logging configuration which disables favicon error logging
            include /etc/nginx/drop.conf;
        }

    This domain is a simple static HTML site, just for testing purposes. I'd expect the error_page directive to kick in when PHP-FPM cannot find the given files, since I have fastcgi_intercept_errors on; in the http block and have error_page set up, but I'm guessing the request fails even before that, somewhere in the internal redirects. Any help would be much appreciated.
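
    One thing that stands out (a guess, not a confirmed fix): try_files has no final fallback other than $uri/, so a miss ends in an internal redirect that can loop back into the same location. Giving try_files an explicit last resort such as =404 normally breaks the cycle and lets error_page take over; a sketch using the paths from the configuration above:

        location / {
            root /var/www/somedomain.com;
            # end with =404 so nginx returns a 404 instead of redirecting internally
            try_files $uri $uri/ =404;
        }

        error_page 404 /errors/404.html;
        location /errors/ {
            alias /var/www/errors/;
        }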

    Read the article

  • Why are folders disappearing in Windows XP?

    - by XenoFoxx
    I am researching a problem for a friend, and unfortunately do not have direct access to his computer. I've tried to gather as much information as possible and I have researched it on various websites. I've not found anyone having the same problem my friend is having. So here goes:

    He has a media server in his home running Microsoft Windows XP. It has 3 drives, 1 for the OS and 2 for mass storage. Not long ago he went to access one of the mass storage media drives and it was empty, except for a single folder. His first assumption was that his roommate had deleted everything on the drive (excluding the remaining folder). He then checked the properties of the drive and it was still saying that the hard drive was nearly full. I told him to check the recycle bin, thinking that whoever deleted them didn't clear them from recycling and that they were still taking up space on the drive. My friend said the recycle bin was empty.

    So we have a drive that the Windows file management system says is empty (again, except for the remaining folder), but the properties of the drive say it's mostly full. Now it gets weirder: my friend tried to create a new folder on this drive and it auto-named itself "New Folder(1)", which means that it recognizes there is already a "New Folder" in that directory. He tried to rename it to a name that he KNEW was there previously, and Windows wouldn't allow it because it was a duplicate folder name. So now it seems the folders are there, but not displaying in Windows Explorer. Both of us have no idea why this is occurring, why the folders vanished, why the one remaining folder didn't vanish, or how to make them visible again. Anyone else ever experience this? I can get more details if needed.
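
    A cheap thing to check first (an assumption on my part, not a confirmed diagnosis): several worms and misbehaving copy tools set the Hidden and System attributes on folders, which makes them disappear from Explorer while they still take up space and still block duplicate names. From a command prompt at the root of the affected drive, the attributes can be cleared recursively:

        REM run from the root of the affected drive, e.g. D:\
        REM -s clears System, -h clears Hidden, /s recurses, /d includes folders
        attrib -s -h /s /d *

    If the folders reappear after this, the data was never deleted, only hidden, and a malware scan of the machine would be the next step.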

    Read the article

  • error creating MS Exchange distribution list: Active directory response: 00000005: SecErr: DSID-031521D0

    - by BabakBani
    We've migrated a client from Google Apps to an MS Exchange 2010 SP2 on-premise setup. The setup /prepareAD went well, and the software was installed with the Administrator account. We've used the Exchange Management Console to set up mailboxes and had to google up the appropriate workarounds, such as going into each user's Advanced Security Settings and selecting "include inheritable permissions from this object's parents", and changing their logon-to from specific machines to "all computers" so that they can connect to Outlook Web Access, and in turn so their Outlook 2007-2010 clients can connect to Exchange. Sending and receiving emails are working well. Now that all this is in place, we can create Dynamic Distribution Lists with no problem, but as soon as we try to create a DISTRIBUTION LIST, either in the EMC or the Exchange PowerShell, we get an error. As the error message in the PowerShell is more verbose, I include it here in case anyone can suggest how we remedy this:

        [PS] C:\Windows\system32>new-DistributionGroup -Name 'projects' -SamAccountName 'projects' -Alias 'projects'
        Active Directory operation failed on DC.cppe.local. This error is not retriable. Additional information: Access is denied.
        Active directory response: 00000005: SecErr: DSID-031521D0, problem 4003 (INSUFF_ACCESS_RIGHTS), data 0
            + CategoryInfo          : NotSpecified: (0:Int32) [New-DistributionGroup], ADOperationException
            + FullyQualifiedErrorId : 1EA5CD3E,Microsoft.Exchange.Management.RecipientTasks.NewDistributionGroup

    Read the article

  • Archive software for big files and fast index

    - by AkiRoss
    I'm currently using tar for archiving some files. The problem is: the archives are pretty big, contain a lot of data, and tar is very slow when listing and extracting. I often need to extract single files or folders from the archive, but I don't currently have an external index of files. So, is there an alternative for Linux that allows me to build uncompressed archive files, preserving the file attributes AND having a fast access list table? I'm talking about archives of 10 to 100 GB, and it's pretty impractical to wait several minutes to access a single file. Anyway, any trick to solve this problem is welcome (but single archives are a requirement, so no rsync or similar). Thanks in advance! EDIT: I'm not compressing the archives, and using tar I think they are too slow. To be precise about "slow", I'd like that: listing the archive content should take time linear in the number of files inside the archive, but with a very small constant (e.g. if a list of all the files is included at the head of the archive, it could be very fast); extraction of a target file/directory should (filesystem permitting) take time linear in the target size (e.g. if I'm extracting a 2 MB PDF file from a 40 GB directory, I'd really like it to take less than a few minutes... if not seconds). Of course, this is just my idea and not a requirement. I guess such performance could be achievable if the archive contained an index of all the files with their respective offsets, and such an index were well organized (e.g. tree structure).
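
    Not a full answer, but a low-tech workaround sketch while looking for a better tool: tar can emit the member list while it writes the archive, and it can extract a single named member, so an external index file can live next to each archive (file names below are invented for the example):

        # build the archive and capture an index of member names as it is written
        tar -cvf /backup/data.tar /srv/data > /backup/data.tar.idx

        # later: grep the index instead of listing the archive...
        grep 'reports/2011/summary.pdf' /backup/data.tar.idx

        # ...and extract only that member
        tar -xf /backup/data.tar srv/data/reports/2011/summary.pdf

    This only fixes the listing half of the problem: extraction still scans the archive sequentially, so the time is linear in the archive size, not the target size. Tools that store a real catalogue (dar, or a read-only filesystem image such as SquashFS mounted via loopback) get closer to the stated requirements.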

    Read the article

  • Mysql can not resolve hostnames when checking privileges

    - by Fabio
    I'm going crazy trying to solve this. I have a MySQL installation (on machine db.example.org) which doesn't resolve a given hostname. I granted privileges using hostnames, i.e.

        GRANT USAGE ON *.* TO 'user'@'host1.example.org' IDENTIFIED BY PASSWORD 'secret'
        GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, INDEX ON `my_database`.* TO 'user'@'host1.example.org'

    However, when I try to connect using

        mysql -u user -p -h db.example.org

    I obtain

        ERROR 1045 (28000): Access denied for user 'user'@'192.168.11.244' (using password: YES)

    I already checked for correct name resolution in the DNS system:

        $ dig -x 192.168.11.244
        ;; ANSWER SECTION:
        244.11.168.192.in-addr.arpa. 68900 IN PTR host1.example.org.

    I've also checked for the skip-name-resolve option in the MySQL variables; in fact, I can access from another machine on the same subnet using hostname-based privileges. The only difference is that host1.example.org and db.example.org point to the same IP on the same machine, i.e. both db.example.org and host1.example.org have IP 192.168.11.244. In this way all the applications using that database can use the name db.example.org, and we can move the data to other hosts (if needed) just by changing the DNS record, leaving the application code unchanged. What should I do to solve this, or at least to understand what's happening?
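
    A quick way to narrow this down (a sketch; names and addresses are the ones from the question): compare the user/host combinations that actually exist in the grant tables with the host the server resolved for the failing connection. If a grant keyed on the client IP works where the hostname grant does not, the server side is failing (or skipping) the reverse lookup for that address.

        -- on the server: which host values are registered for this user?
        SELECT user, host FROM mysql.user WHERE user = 'user';

        -- temporary test grant keyed on the IP shown in the error message
        GRANT USAGE ON *.* TO 'user'@'192.168.11.244' IDENTIFIED BY 'secret';
        GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, INDEX
            ON `my_database`.* TO 'user'@'192.168.11.244';
        FLUSH PRIVILEGES;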

    Read the article

  • Windows XP mounting USB drive to same letter as previously mapped network drive

    - by GAThrawn
    Why does Windows always mount a USB drive as the next drive letter after the last physical drive, even when that letter is already taken by a mapped drive, and is there any way to improve this behaviour? What happens is I tend to use a few different flash drives on my PC, as well as having both a Blackberry and a personal phone that mount as USB drives when I plug them in to charge. Being on a corporate PC I also have a number of mapped network drives (some set by login script, some set as persistent mappings in my profile). When I first log in I'll have drive letters like this:

        C: - Local Drive
        D: - DVD Drive
        G: - Login script mapped drive
        J: - Login script mapped drive

    When I plug the Blackberry in it'll mount two drives (one for onboard storage, one for the SD card) as E: and F:. If I then plug in another USB drive it will mount as G:, even though that's already taken by a network mapped drive. This leaves me with the following drives:

        C: - Local Drive
        D: - DVD Drive
        E: - USB drive (Blackberry)
        F: - USB drive (Blackberry)
        G: - Login script mapped drive
        [G: - USB drive - mounted but not visible in Explorer or command prompt]
        J: - Login script mapped drive

    I then have to go into Disk Management, find the new USB drive that's mounted to G: and re-assign it to another letter, e.g. Z:. Once this is done AutoPlay detects it and throws up its normal dialog, and it's browseable in Explorer. While this is OK to do if you only use one or two USB drives and have admin access to your PC with your login account, it's a total pain in the proverbial if you regularly use a whole load of different USB devices, and corporate policy means you have one account for your normal login (that only has User access to workstations) but have to use a different account for any privileged action. I realize that one possible reason for this is the difference between hardware, which is mounted and assigned drive letters at the system level, and mapped drives, which are done at the user level. For USB devices that are already plugged in before login, obviously they're mounted before Windows knows what network drives may be mapped. However, if you plug the USB devices in after you're fully logged in and have drives mapped, then Windows must know which letters are available?
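
    For drives that get reused regularly there is at least a one-off workaround (a sketch; the volume number and letter are examples): Windows remembers a manual letter assignment per volume, so pinning each known device to a high letter once keeps it clear of the mapped-drive range on later plug-ins. From an elevated prompt:

        diskpart
        DISKPART> list volume
        DISKPART> select volume 5
        DISKPART> assign letter=Z

    This still needs admin rights the first time for each device, so it doesn't solve the corporate-policy half of the problem.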

    Read the article

  • Apache Proxy Pass and Web Sockets

    - by James
    I'm using Apache with the mod_proxy module to reverse proxy my Node.js application through to port 80, so that we can access it as an internal application. I have a file in sites-enabled which contains this:

        <VirtualHost *:80>
            DocumentRoot /var/www/internal/
            ServerName internal
            ServerAlias internal

            <Directory /var/www/internal/public/>
                Options All
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>

            ProxyRequests off

            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>

            ProxyPass / http://localhost:8080/ retry=0
            ProxyPassReverse / http://localhost:8080/
            ProxyPreserveHost on
            ProxyTimeout 1200

            LogLevel debug
            AllowEncodedSlashes on
        </VirtualHost>

    As I said, our application is written in Node.js and we're using socket.io to make use of WebSockets, as our application also contains realtime elements. The problem is, mod_proxy doesn't seem to handle WebSockets and we get errors when trying to use them:

        WebSocket connection to 'ws://bloot/socket.io/1/websocket/nHtTh6ZwQjSXlmI7UMua' failed: Unexpected response code: 502

    How can we fix this issue and keep sockets working, as the only way we can get it working currently is to access the site via ip:port, which we don't want to do. Also, as a side question, how can I get ErrorDocument to work properly? Our error files are stored in /var/www/internal/public/error/ but they seem to get put through the proxy too.
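
    For what it's worth, plain mod_proxy/mod_proxy_http cannot upgrade a connection to the WebSocket protocol. On Apache 2.4.5 or later the usual approach (sketched below; the paths are taken from the error message above) is to enable mod_proxy_wstunnel and route the websocket path through a ws:// ProxyPass placed before the catch-all rule; on older Apache versions people typically put nginx or HAProxy in front instead, or fall back to socket.io's non-websocket transports.

        # a2enmod proxy proxy_http proxy_wstunnel   (requires Apache 2.4.5+)
        ProxyPass        /socket.io/1/websocket/ ws://localhost:8080/socket.io/1/websocket/
        ProxyPassReverse /socket.io/1/websocket/ ws://localhost:8080/socket.io/1/websocket/

        # existing catch-all stays below the websocket rule
        ProxyPass        / http://localhost:8080/ retry=0
        ProxyPassReverse / http://localhost:8080/

    Excluding /error/ from the proxy (for example with "ProxyPass /error/ !" above the catch-all) is one way to let ErrorDocument serve local files again.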

    Read the article

  • What is the minimal steps to setup a client-server network using Windows Server 2008 R2 standard?

    - by Motivated Student
    Background: I have one computer server with Windows Server 2008 R2 Standard installed, but it has not been configured. This server has 2 LAN adapters. One adapter is connected to the ISP and the other one is connected to a hub/switch. Other computers working as clients are connected to the same hub/switch the server is connected to. IP printers, IP scanners, and IP cameras are also connected to the same hub/switch.

    Note: I am a newbie. I only know how to plug in RJ-45 sockets and assemble computer peripherals. I have no prior experience with Windows Server at all. Please teach me from the newbie's point of view.

    Objective: I want to establish the following: each client can access the internet, printers, and scanners after it has been successfully authenticated by the server; unauthenticated clients cannot access the internet, printers, etc. The server hosts a local site, and clients can browse it internally using a private domain www.company.com. If the same domain name has been used by others on the internet, my private domain must override the public domain.

    Read the article

  • Install mod_perl2 on Apache 2.2.14 (Ubuntu10.04)

    - by MICADO
    I have installed libapache2-mod-perl2 via Synaptic. I tried this line in httpd.conf:

        LoadModule perl_module modules/mod_perl.so

    Apache tells me when I reload the server: "[warn] module perl_module is already loaded, skipping". Well, OK, but when I try to browse to a directory I don't have access to, Apache sends me the error:

        Forbidden
        You don't have permission to access /cgi-bin/ on this server.
        Apache/2.2.14 (Ubuntu) Server at 192.168.0.10 Port 90

    But this should show mod_perl is installed, and that's not the case... I would like the virtual host that follows to run with mod_perl2:

        <VirtualHost v1:80>
            ServerAdmin webmaster@localhost
            ServerName v1
            DocumentRoot /var/www/v1

            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>

            <Directory /var/www/v1/html/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /var/www/v1/cgi-bin/
            <Directory "/var/www/v1/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    I'd like to know how to configure mod_perl2. Do I have to change something in the Apache configuration file to make my cgi-bin directory work with mod_perl2? Thanks for any help!
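
    In case it helps, a minimal sketch of what a ModPerl::Registry setup for that cgi-bin usually looks like (directive names are standard mod_perl2; the paths are the ones from the vhost above). On Ubuntu the module is normally enabled with a2enmod rather than a hand-written LoadModule line in httpd.conf, which would also explain the "already loaded" warning:

        # sudo a2enmod perl && sudo /etc/init.d/apache2 restart
        <Directory "/var/www/v1/cgi-bin">
            SetHandler perl-script
            PerlResponseHandler ModPerl::Registry
            PerlOptions +ParseHeaders
            Options +ExecCGI
            Order allow,deny
            Allow from all
        </Directory>

    The 403 on /cgi-bin/ is more likely a plain permissions/ScriptAlias issue than proof that mod_perl is missing, so it is worth checking the error log for that request separately.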

    Read the article

  • scanner no longer working windows 7

    - by Sackling
    I have a Windows 7 64 Pro machine that for some reason no longer works with a Brother all-in-one printer/scanner when I try to scan. This happened seemingly out of nowhere. Printing still works. When I try to scan from Photoshop, Photoshop crashes. When I try to scan from Device Manager (which takes a really long time to access the printer) I get an error saying: "Operation could not be completed. (error 0x00000015). The device is not ready." I tried uninstalling/reinstalling the printer and drivers. I have also run the Brother driver cleaner and then reinstalled, and get the same results every time. I had access to another HP all-in-one, and very strangely, when I set up this new printer a similar issue happens: I am able to print, but when I try to scan from the HP tool nothing pops up. So my thought is that it is something to do with the TWAIN driver? But I don't know what to do or check. Any help is appreciated.

    Read the article

  • How can I enable PHP5 for a site? Having problems with every single method.

    - by John Stephens
    I'm working on a client site that is hosted on someone's DIY Debian Linux server [Apache/1.3.33 (Debian GNU/Linux)], and I'm trying to install a script that requires PHP5. By default, the server parses .php files with PHP 4.3.10-22, which is configured at /etc/php4/apache/php.ini, according to phpinfo(). On the server I can see a config directory for PHP5 adjacent to the PHP4 directory: /etc/php5.0/apache2/php.ini. I have tried multiple methods to enable PHP5 for the document root where the site's files are hosted, including all available methods mentioned here. By far, the most common suggestion I've found is to add one or both of the following lines to the site's .htaccess file:

        AddHandler application/x-httpd-php5 .php
        AddType application/x-httpd-php5 .php

    Trouble is, when either or both of those lines are present, the site forces my browser to download any .php files requested, without parsing the PHP at all. All of the other methods mentioned in the above article cause a 500 Internal Server Error. There is no hosting control panel I can access in a browser to enable PHP5 for the site, but I do have shell access. When I asked the server administrator about this issue, he encouraged me to search for the answer on Google. Where could I begin to troubleshoot this issue? Are there ways to test or verify the server's specific PHP5 installation and configuration, using the command line or some other method? Do you have other suggestions to enable PHP5?
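
    One pattern that sometimes works on mixed PHP4/PHP5 shared setups like this (a sketch only, and only if a PHP5 CGI binary is actually installed; the /cgi-bin/php5 path and the mod_actions requirement below are assumptions to verify over that shell access) is to run PHP5 as a CGI for this site while the module keeps handling PHP4:

        # .htaccess - hand .php files to a PHP5 CGI binary instead of the PHP4 module
        AddHandler php5-cgi .php
        Action php5-cgi /cgi-bin/php5
        # requires mod_actions and a ScriptAlias that exposes the php5 binary,
        # e.g. ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ in the server config

    The browser-downloads-the-file symptom with the x-httpd-php5 handler usually just means nothing is registered for that handler name on this server, so checking which PHP5 SAPIs exist (e.g. which php5, dpkg -l 'php5*') is a sensible first command-line test.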

    Read the article

  • What could cause a huge packet loss in Ubuntu 9.10, for both wired and wireless?

    - by xzenox
    I was previously using 9.04 fine (and in fact, I am posting this from my old 9.04 live CD). I tested the following install steps in a VirtualBox VM prior to following the same ones to upgrade my laptop:

        1. Download/burn the Ubuntu minimal CD (the 12 MB one)
        2. Install Ubuntu minimal
        3. sudo apt-get update
        4. sudo apt-get upgrade
        5. sudo apt-get install ubuntu-desktop ubuntu-standard

    In the VM this worked fine and I found myself with a working 9.10 Ubuntu; the network worked fine and I was able to test my backups and Dropbox without a hitch (the host was 9.04). When I followed the same steps on my laptop, everything worked up to the point of 9.10 being installed and working. As far as I can tell, everything besides eth0/wireless works. For some reason, I am unable to access the internet. Ping reports that over 99% of packets get lost (over an hour or so of pinging). This means, for example, that if I try hard enough I can load a webpage, but only at the cost of much patience... This happens both for a wired and a wireless connection to my WRT310N (updated with the latest firmware). At first I thought that it could be related to the IPv6 issues people have been experiencing, however even after disabling IPv6 at the kernel level (through GRUB), I still get the issue. I do not think this is related to DNS issues or the like, since even when I ping my ISP's gateway IP I have the same amount of packet loss. No DNS resolving should be required there. Access to my router works peachy, with no packet loss there. I've tried different MTU values but to no avail. Note that this issue affects every web-enabled application: Firefox, ping, Synaptic, etc. The same hardware/router combo works with 9.04 but not with 9.10. In fact, when I ran "sudo apt-get install ubuntu-desktop ubuntu-standard" after 9.10 minimal was installed, it downloaded over 400 MB of packages without a hitch, so my guess is that one of the packages in either ubuntu-desktop or ubuntu-standard is causing havoc. Thoughts?

    Read the article

  • configure Squid3 proxy server on Ubuntu with caching and logging

    - by Panshul
    I have an Ubuntu 11.10 machine with Squid3 installed. When I configure squid with http_access allow all, everything works fine. My current configuration (mostly default) is as follows:

        2012/09/10 13:19:57| Processing Configuration File: /etc/squid3/squid.conf (depth 0)
        2012/09/10 13:19:57| Processing: acl manager proto cache_object
        2012/09/10 13:19:57| Processing: acl localhost src 127.0.0.1/32 ::1
        2012/09/10 13:19:57| Processing: acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
        2012/09/10 13:19:57| Processing: acl SSL_ports port 443
        2012/09/10 13:19:57| Processing: acl Safe_ports port 80 # http
        2012/09/10 13:19:57| Processing: acl Safe_ports port 21 # ftp
        2012/09/10 13:19:57| Processing: acl Safe_ports port 443 # https
        2012/09/10 13:19:57| Processing: acl Safe_ports port 70 # gopher
        2012/09/10 13:19:57| Processing: acl Safe_ports port 210 # wais
        2012/09/10 13:19:57| Processing: acl Safe_ports port 1025-65535 # unregistered ports
        2012/09/10 13:19:57| Processing: acl Safe_ports port 280 # http-mgmt
        2012/09/10 13:19:57| Processing: acl Safe_ports port 488 # gss-http
        2012/09/10 13:19:57| Processing: acl Safe_ports port 591 # filemaker
        2012/09/10 13:19:57| Processing: acl Safe_ports port 777 # multiling http
        2012/09/10 13:19:57| Processing: acl CONNECT method CONNECT
        2012/09/10 13:19:57| Processing: http_access allow manager localhost
        2012/09/10 13:19:57| Processing: http_access deny manager
        2012/09/10 13:19:57| Processing: http_access deny !Safe_ports
        2012/09/10 13:19:57| Processing: http_access deny CONNECT !SSL_ports
        2012/09/10 13:19:57| Processing: http_access allow localhost
        2012/09/10 13:19:57| Processing: http_access deny all
        2012/09/10 13:19:57| Processing: http_port 3128
        2012/09/10 13:19:57| Processing: coredump_dir /var/spool/squid3
        2012/09/10 13:19:57| Processing: refresh_pattern ^ftp: 1440 20% 10080
        2012/09/10 13:19:57| Processing: refresh_pattern ^gopher: 1440 0% 1440
        2012/09/10 13:19:57| Processing: refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
        2012/09/10 13:19:57| Processing: refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880
        2012/09/10 13:19:57| Processing: refresh_pattern . 0 20% 4320
        2012/09/10 13:19:57| Processing: http_access allow all
        2012/09/10 13:19:57| Processing: cache_mem 512 MB
        2012/09/10 13:19:57| Processing: logformat squid3 %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru
        2012/09/10 13:19:57| Processing: access_log /home/panshul/squidCache/log/access.log squid3

    The problem starts when I enable the following line:

        access_log /home/panshul/squidCache/log/access.log

    I start to get a "proxy server is refusing connections" error in the browser. On commenting out the above line in my config, things go back to normal. The second problem starts when I add the following line to my config:

        cache_dir ufs /home/panshul/squidCache/cache 100 16 256

    The squid server fails to start. Any suggestions on what I am missing in the config? Please help!
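
    A common cause of exactly this pair of symptoms (a guess, not a confirmed diagnosis): squid3 on Ubuntu runs as the proxy user, which normally cannot write under another user's home directory, so pointing access_log or cache_dir at /home/panshul/... makes the daemon exit at startup and the browser then reports the refused connection. A sketch of the usual remedy, using the paths from the question:

        sudo mkdir -p /home/panshul/squidCache/log /home/panshul/squidCache/cache
        sudo chown -R proxy:proxy /home/panshul/squidCache
        sudo squid3 -z                                # create the cache_dir structure
        sudo service squid3 restart
        sudo tail -n 50 /var/log/squid3/cache.log     # shows the real startup error if it still fails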

    Read the article

  • Network Explorer Intermittently Fails to Display all Computers in Work Group

    - by graf_ignotiev
    I run a small computer lab of 10 computers and occasionally, when using the network explorer (a.k.a Network Browser) some or all of the remote computers will fail to appear. If I try to access a remote computer by its name I get an unspecified error (code 0x80004005), but I am still able to access it with the computer's IP address. The strangest part is that the problem will inexplicably go away after waiting awhile. Each computer is running Windows 7 x64 Enterprise and has identical hardware, software and configuration. They are all on the same subnet and in the same workgroup. I've spent days researching the problem and have tried the following solutions: Updated the BIOS, chipset and network adapter drivers Changed Power Settings in Network Adapter Properties so that the computer will not turn it off Disabled the Computer Browser service Changed the DHCP node type to broadcast Reviewed the Event Viewer logs Steps 3 and 4 have seemed to help the problem a little bit, but not completely. I'm beginning to suspect that the problem might lie with our router which is a ZyXEL ZyWALL 2WG, as the packets sent by Network Discovery may not be returning in time, but I wanted to get some perspective in the issue before I went any further.

    Read the article

  • NetApp NDMP backup with BE 2010 R2 works, restore fails

    - by uuwe
    Hi, I'm having some issues with a new Backup Exec 2010 R2 installation. I configured a NetApp FAS2020 as an NDMP device and want to back up files from the NAS to a tape drive connected to my backup server. I set up ndmpd according to this document (http://www.symantec.com/business/support/index?page=content&id=TECH48957) and created a separate backup user (http://filers.blogspot.com/2006/09/setting-veritas-netbackup-with-non.html). Backup works perfectly, but restoring any file gives me an authentication failed error. The NDMP device has a "global" ndmp user configured in the device tab (tried this with the newly created ndmpd backup user and the NetApp root) and I can also configure separate resource credentials in the BE restore job. I have tried setting the same accounts for the "global" ndmp device and the restore credentials and have also tried setting different accounts for them. NDMP debug level is at 5 and this is what shows up in /etc/messages. The session is closed immediately after it has been granted:

        16:12:07 PST [Java_Thread:info]: ndmpdserver: ndmpd.access allowed for version = 4, sessionId = 51, from src ip = 192.168.11.17, dst ip = FAS2020-1/192.168.11.75, src port = 50857, dst port = 10000
        16:12:07 PST [Java_Thread:info]: Ndmpd51: ndmpd session closed successfully for version = 4, sessionId = 51, from src ip = 192.168.11.17, dst ip = FAS2020-1/192.168.11.75, src port = 50857, dst port = 10000

    Running Wireshark on the backup server doesn't produce much. It shows a SYN - SYN/ACK - NDMP CONNECT_CLOSE Request from the backup server. The Resource Credentials for the restore job behave very oddly. If I enter NDMP credentials and do "Test All" it fails. If I use my regular domain backup account, it is successful. There are no failed or succeeded logons in the NetApp ndmp log, and tracing this check shows that it doesn't even connect to the NAS. This makes me think that this is more likely flaky BE behaviour rather than misconfiguration of the NAS. Here is the options ndmp output:

        FAS2020-1> options ndmp
        ndmpd.access                 all
        ndmpd.authtype               challenge
        ndmpd.connectlog.enabled     on
        ndmpd.enable                 on
        ndmpd.ignore_ctime.enabled   off
        ndmpd.offset_map.enable      on
        ndmpd.password_length        16
        ndmpd.preferred_interface    disable
        ndmpd.tcpnodelay.enable      off
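
    One thing that bites non-root NDMP accounts on Data ONTAP, if I remember the 7-mode CLI correctly (worth ruling out; "backupuser" below is just a placeholder for the separate backup account mentioned above): with ndmpd.authtype challenge, the filer does not accept the account's normal login password over NDMP. It expects the NDMP-specific password generated on the filer, and that generated string is what belongs in the Backup Exec resource credentials for the restore job:

        FAS2020-1> ndmpd password backupuser
        password XXXXXXXXXXXXXXXX

    If backups were defined with the root account and restores with the non-root one (or vice versa), the mismatch would explain working backups alongside failing restores.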

    Read the article

  • Pitfalls to using Gluster as a home/profile directory server?

    - by Bart Silverstrim
    I was asking recently about options for divvying up access to file servers, as we have a NAS solution that gets fairly bogged down when our users (with giant profiles, especially) all log in nearly simultaneously. I ran across Gluster and it looks like it can cluster different physical storage media into a single virtual volume and share it out like a virtual NAS from the client perspective, and it supports CIFS. My question is whether something like this would be feasible to use for home and profile directories in an Active Directory environment. I was worried about ACLs, primarily, as I didn't think CIFS was fine-grained enough to support NTFS permissions, and it didn't look like Gluster exports those permission levels, just the base permissions for basic file sharing. I got the impression that using Gluster would allow data to be redundant across multiple servers and would speed up access to the files under heavy load, while allowing us to dynamically boost storage capacity by just adding another server and telling Gluster's master node to add that server. Maybe I'm wrong in my understanding of it, though. Does anyone else use it, or care to share how feasible this is?
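
    For reference, the replication/scale-out part described above is roughly this on the Gluster side (a sketch with made-up server and brick names; whether CIFS re-exporting preserves enough of the NTFS ACL model is the real open question):

        # two-way replicated volume across two storage servers
        gluster volume create homes replica 2 server1:/export/brick1 server2:/export/brick1
        gluster volume start homes

        # grow capacity later by adding another replicated brick pair
        gluster volume add-brick homes server3:/export/brick1 server4:/export/brick1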

    Read the article

  • lxc bandwidth control using tc

    - by kumar
    I am trying to restrict bandwidth inside my containers. I have tried using the following commands, but I don't think they are taking effect:

        cd /sys/fs/cgroup/net_cls/
        echo 0x1001 > A/net_cls.classid  # 10:1
        echo 0x1002 > B/net_cls.classid  # 10:2

        tc qdisc add dev eth0 root handle 10: htb
        tc class add dev eth0 parent 10: classid 10:1 htb rate 40mbit
        tc class add dev eth0 parent 10: classid 10:2 htb rate 30mbit
        tc filter add dev eth0 parent 10: protocol ip prio 10 handle 1: cgroup

    Here A and B are containers created with these commands:

        lxc-execute -n A -f configfile /bin/bash
        lxc-execute -n B -f configfile /bin/bash

    whereas configfile contains only this entry:

        lxc.utsname = test_lxc

    After starting the containers, I started vsftpd inside container A and tried to access the files using an ftp client from another machine. Then I killed vsftpd in container A, started vsftpd in container B, and tried to access the files using the ftp client from another machine. I cannot observe any difference in performance; for that matter, it is nowhere near 40mbit/30mbit. Please correct me if anything is wrong here.
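
    A quick way to see whether the shaping is being applied at all (standard tc/cgroup tooling; eth0 is the interface from the commands above): check that the per-class counters grow during the ftp transfer and that the vsftpd process really sits inside the intended cgroup, since the cgroup filter only classifies traffic generated by tasks listed in that group.

        # per-class byte/packet counters; 10:1 or 10:2 should grow during the transfer
        tc -s class show dev eth0

        # confirm the vsftpd PID appears here (container A's net_cls group)
        cat /sys/fs/cgroup/net_cls/A/tasks

    If the counters stay at zero, the traffic is falling through to the default class, which would explain seeing full line rate in both tests.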

    Read the article

  • Use shared 404 page for virtual hosts in Nginx

    - by Choy
    I'd like to have a shared 404 page to use across my virtual hosts. The following is my setup. Two sites, each with their own config file in /sites-available/ and /sites-enabled/:

        www.foo.com
        bar.foo.com

    The www directory is set up as:

        www/
        foo.com/
        foo.com/index.html
        bar.foo.com/
        bar.foo.com/index.html
        shared/
        shared/404.html

    Both config files in /sites-available are the same except for the root and server name:

        root /var/www/bar.foo.com;
        index index.html index.htm index.php;
        server_name bar.foo.com;

        location / {
            try_files $uri $uri/ /index.php;
        }

        error_page 404 /404.html;
        location = /404.html {
            root /var/www/shared;
        }

    I've tried the above code and also tried setting error_page 404 /var/www/shared/404.html (without the following location block). I've also double-checked to make sure my permissions are set to 775 for all folders and files in www. When I try to access a non-existent page, nginx serves the respective index.php of the virtual host I'm trying to access. Can anyone point out what I'm doing wrong? Thanks!
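
    One likely explanation (hedged, since it depends on how PHP requests are handled): with try_files $uri $uri/ /index.php, a request for a missing page never produces a 404 inside nginx at all, because it is internally rewritten to /index.php, so error_page never fires and the front controller's output comes back instead. Two sketches of ways around that, keeping the shared root from the question: either have index.php itself send the 404 status and page, or stop falling back to /index.php for URLs that should simply 404:

        error_page 404 /404.html;
        location = /404.html {
            root /var/www/shared;
            internal;
        }

        location / {
            # only if the site does not rely on /index.php for routing arbitrary URLs
            try_files $uri $uri/ =404;
        }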

    Read the article

  • How to find out which program a default beep is coming from in Windows 7?

    - by leeand00
    There's a "default beep" (as defined in System Sounds) that emanates from my computer every so often. It kind of goes like this (where each number is a "default beep" sound): 1, 2, 3, 4, 5. So there's a distinct pattern to it. I thought I had figured out a way to work out what was happening by going to Control Panel > Ease of Access > Ease of Access Center > Replace sounds with visual cues, but that just isn't the case: whichever window I click on is the one that displays the visual cue when this happens. It's driving me crazy, and I can't figure out which program is causing this.

    Update: it appears to only happen on one user profile on the computer... does that help?

    Update 2: Discovered that this sound was coming from a utility on my laptop called ASUS NB Probe; I'm certain the sound is emanating from this program because the error message displayed by it changes in sync with the sound playing. Apparently the S.M.A.R.T. feature of my hard drive was reporting an issue. It displays the issue for a brief second and then makes it disappear. I'll have to keep watching it to see what it says, but I believe it says something about a read. I have an external HDD connected with eSATA to a docking station of sorts (BlacX) that you can plug two HDDs into. I have one HDD attached and it's a Western Digital WD1001FALS-00E8B0. Thanks again! Now I'm off to go around the Internet and report on this... since I posted it in so many places!
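
    Since the beep traces back to a S.M.A.R.T. warning, the attributes of that WD drive can be read directly instead of waiting for ASUS NB Probe to flash the message again. A sketch using smartmontools, which also runs on Windows (the device name is an example; list the devices first):

        smartctl --scan              # list detected drives and their device names
        smartctl -H /dev/sdb         # overall health self-assessment
        smartctl -a /dev/sdb         # full attribute dump (look at reallocated/pending sectors)

    Pending or reallocated sector counts above zero would fit an intermittent read warning on an external eSATA drive.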

    Read the article

  • Windows 2008 R2 AWS CloudFormation Elastic beanstalk configuration

    - by Webmonger
    I'm looking for some configuration advice. I have a need for a load-balanced Windows environment with shared media across all instances that are hosting the app. The best explanation I can give is that there will be multiple Windows 2008 servers with IIS hosting the app, going through an ELB to load balance. Users must be able to upload content (images, video, etc.) to the site that will be hosted. When a user uploads media it needs to be kept in a shared location so all Windows IIS instances can access the files; I can't host the files on S3 because of the app architecture, so they need to be in a place where all IIS servers will have access. In addition, I need to run an update on each IIS server instance that updates a local memory cache when SQL data is updated. I was thinking of a configuration like this:

        [ELB] - [Win 2008 IIS (multiple servers)] - [Win 2008 File & SQL Server (possibly RDS?)]

    Does this configuration make sense? If not, could you provide an idea of how I should configure it? Thanks in advance.

    Read the article

  • Git push over http (using git-http-backend) and Apache is not working

    - by Ole_Brun
    I have desperately been trying to get push over HTTP working for Git through the "smart HTTP" mode using git-http-backend. However, after many hours of testing and troubleshooting, I am still left with:

        error: Cannot access URL http://localhost/git/hello.git/, return code 22
        fatal: git-http-push failed

    I am using the latest versions of Ubuntu (12.04), Apache2 (2.2.22) and Git (1.7.9.5) and have followed different tutorials found on the Internet, like this one: http://www.parallelsymmetry.com/howto/git.jsp. My VHost file currently looks like this:

        <VirtualHost *:80>
            SetEnv GIT_PROJECT_ROOT /var/www/git
            SetEnv GIT_HTTP_EXPORT_ALL
            SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER
            DocumentRoot /var/www/git

            ScriptAliasMatch \
                "(?x)^/(.*?)\.git/(HEAD | \
                info/refs | \
                objects/info/[^/]+ | \
                git-(upload|receive)-pack)$" \
                /usr/lib/git-core/git-http-backend/$1/$2

            <Directory /var/www/git>
                Options +ExecCGI +SymLinksIfOwnerMatch -MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    I have changed the ownership of the /var/www/git folder to root.www-data, and for my test repositories I have enabled anonymous push by doing git config http.receivepack true. I have also tried with authenticated users but with the same outcome. The repositories were created using:

        sudo git init --bare --shared [repo-name]

    While looking at the Apache2 access.log, it appears to me that WebDAV is being attempted, and that git-http-backend is never fired:

        127.0.0.1 - - [20/May/2012:23:04:53 +0200] "GET /git/hello.git/info/refs?service=git-receive-pack HTTP/1.1" 200 207 "-" "git/1.7.9.5"
        127.0.0.1 - - [20/May/2012:23:04:53 +0200] "GET /git/hello.git/HEAD HTTP/1.1" 200 232 "-" "git/1.7.9.5"
        127.0.0.1 - - [20/May/2012:23:04:53 +0200] "PROPFIND /git/hello.git/ HTTP/1.1" 405 563 "-" "git/1.7.9.5"

    What am I doing wrong? Is it an issue with the version of Git and/or Apache that I am using, perhaps? BTW: I have read all the git-http related questions on Server Fault and Stack Overflow, and none of them provided me with a solution, so please don't mark this as a duplicate.
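
    The PROPFIND in the log means the client has fallen back to the dumb/DAV protocol, which usually happens when the smart-HTTP CGI never actually runs. A few things worth verifying on this setup (a sketch; the module names are the stock Apache ones, and whether they are already enabled on this particular box is an assumption):

        # ScriptAliasMatch + SetEnv need mod_alias, mod_cgi and mod_env
        sudo a2enmod cgi alias env
        sudo service apache2 restart

        # sanity checks: backend exists and is executable, repo is readable by www-data
        ls -l /usr/lib/git-core/git-http-backend
        sudo -u www-data ls /var/www/git/hello.git

    It is also worth confirming that the info/refs request above really went through the ScriptAliasMatch (a response of type application/x-git-receive-pack-advertisement) rather than being served as a static file, since a static hit there is exactly what pushes Git into the dumb protocol.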

    Read the article

  • How have multiple web servers and IPs on the same physical network

    - by jsigned
    I do web development out of a small office and need to have multiple physical and virtual servers that can be accessed from the internet. I also have a number of devices (computers, laptops, tablets, printers, etc) that need connections as well. I have gotten a subnet of 8 IP's from my ISP and while that is adequate for the web servers its far too small for everything that needs access to the network. My router is an ASUS RT-N16 running DD-WRT. I'm just smart enough about this routing topic to be dangerous, think 2 year old with a magic marker. I would like to keep my internal network NAT'ed on the 192.168.x.x network and route the 68.69.x.x 255.255.255.248 traffic directly to the servers. The physical network consists of the 4 port DD-WRT router and an unmanaged gig switch. I have a fiber connection to the office that works as an Ethernet port. In other words I can plug my laptop directly into it and have access to the internet. There is no login or password and the router is setup to get DHCP from the ISP, and to provide DHCP addresses for the internal network. What I've done so far is google and try different configurations with little success. In the end I decided I didn't even know how to ask the questions needed. My questions are: Is this the best way to configure the network? How do you do it? VLANs? Multiple routers? I've never had to configure a router using anything more than the GUI so if this is command line stuff be gentle.

    Read the article

  • Java Swing over Remote Desktop - Strange, weird GUI squashing

    - by ADTC
    I thought this question fits Super User more than Stack Overflow because it's not about actual Java programming, though programmers might be more likely to encounter the problem. Anyway, let me start off with some stats before I ask the actual question:

        Laptop: Windows 7 x32; screen resolution 1024 x 768; Nvidia GeForce Go 6200; connected to desktop via ad-hoc wireless network; accesses the internet via the desktop.
        Desktop: Windows 7 x64; screen resolution 1920 x 1080; connected to laptop via ad-hoc wireless network; accesses the internet via cable modem.

    I'm connecting to my laptop via Remote Desktop from my desktop to take advantage of the large screen. I'm doing programming on my laptop (for portability reasons). Everything else runs smooth and fast over Remote Desktop as both computers are connected directly over the ad-hoc wireless. The only problem is this: Java Swing apps don't display the GUI properly. I acquired a Java Swing application and I'm debugging it in Eclipse. Here's what I got when I ran the app: [screenshot of the squashed GUI]. Apparently there doesn't seem to be anything wrong with the GUI application I'm debugging, because the Java Control Panel exhibits the same problem. I've searched high and low on Google about this; the closest I came to a solution is this, but sadly, the use of -Dsun.java2d.nodraw=true has no effect at all. This only happens over Remote Desktop. I have tried locally and the GUI apps display properly. This isn't a dealbreaker for me as I can stop using Remote Desktop when developing Java Swing apps. However, I would like to know if anyone has encountered this and found any solution. PS: All software involved (Eclipse, Java JRE, etc.) are the latest versions.
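
    For anyone hitting the same artifacts: the flag usually cited for DirectDraw-related rendering problems over Remote Desktop is spelled -Dsun.java2d.noddraw=true (two d's, "no DirectDraw"), and -Dsun.java2d.d3d=false disables the Direct3D pipeline as well. Whether either one fixes this particular squashing is untested here, but they go into the VM arguments of the Eclipse run configuration (or into eclipse.ini for Eclipse's own UI), e.g.:

        -Dsun.java2d.noddraw=true
        -Dsun.java2d.d3d=false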

    Read the article

  • IIS6 intranet site using integrated authentication fails to load when accessed externally

    - by maik
    I've developed a couple of internal sites for my organization that use integrated authentication. Ultimately we want these sites to be accessible externally to users with domain-joined computers. The sites work as expected on domain computers while on the internal network. The problem comes when I take my laptop home and try to access those sites. IIS only has integrated authentication enabled for the two sites. When I browse to the site using IE8 I get a username/password prompt asking for domain credentials. I can put those in and it will work, but the goal is to use the cached token for integrated authentication. Next I reasoned that IE wouldn't respond to an integrated auth request (is NTLM the right term for this?) unless the site was trusted. I tried adding the site to Trusted Sites but I get the same behavior as before. I then added the site to Local Intranet sites, and that is where things get weird: I get a generic error page from IE, no error code or anything. Just for funsies I loaded up Firefox (which I had previously set up to use integrated authentication) and added this new site to network.automatic-ntlm-auth.trusted-uris. Much to my surprise, I was able to load the pages with no problem at all and saw exactly what I was expecting (including verification that the integrated authentication worked). My mind is a bit boggled at the moment as I'm not really sure where to go from here. I was hoping some of you may be able to provide some insight.

    Read the article

  • LSI MegaRAID LINUX got Optimal after degradation but strange POST message

    - by kesrut
    A Linux server box with an LSI MegaRAID controller got degraded, but after some time the RAID status changed back to Optimal:

        Adapter 0 -- Virtual Drive Information:
        Virtual Drive: 0 (Target Id: 0)
        Name                :
        RAID Level          : Primary-1, Secondary-0, RAID Level Qualifier-0
        Size                : 2.727 TB
        Mirror Data         : 2.727 TB
        State               : Optimal
        Strip Size          : 256 KB
        Number Of Drives per span : 2
        Span Depth          : 3
        Default Cache Policy: WriteBack, ReadAdaptive, Cached, No Write Cache if Bad BBU
        Current Cache Policy: WriteThrough, ReadAdaptive, Cached, No Write Cache if Bad BBU
        Default Access Policy: Read/Write
        Current Access Policy: Read/Write
        Disk Cache Policy   : Disk's Default
        Encryption Type     : None
        Is VD Cached: No

    But now I'm getting this RAID BIOS POST message:

        Your battery is either charging, bad or missing, and you have VDs configured for write-back mode.
        Because the battery is not currently usable, these VDs will actually run in write-through mode
        until the battery is fully charged or replaced if it is bad or missing.

    (Image: http://cl.ly/image/1h1O093b1i2d) So could a battery issue have caused the problem? Here is the battery information:

        BatteryType: iBBU
        Voltage: 4001 mV
        Current: 0 mA
        Temperature: 22 C
        Battery State : Operational
        BBU Firmware Status:
          Charging Status              : None
          Voltage                      : OK
          Temperature                  : OK
          Learn Cycle Requested        : No
          Learn Cycle Active           : No
          Learn Cycle Status           : OK
          Learn Cycle Timeout          : No
          I2c Errors Detected          : No
          Battery Pack Missing         : No
          Battery Replacement required : No
          Remaining Capacity Low       : No
          Periodic Learn Required      : No
          Transparent Learn            : No
          No space to cache offload    : No
          Pack is about to fail & should be replaced : No
          Cache Offload premium feature required     : No
          Module microcode update required           : No

    Where could the problem be? I have alarms disabled, but I get them if I enable them, and I don't know how to find the root of the problem.
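
    If the MegaCli utility (often installed as MegaCli64) is available on the box, the battery and relearn state can be checked and a learn cycle started from the OS; a sketch, since the exact binary name and path vary by distribution package:

        # full BBU status, including charge level and learn-cycle state
        MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL
        MegaCli64 -AdpBbuCmd -GetBbuCapacityInfo -aALL

        # manually start a learn cycle; the controller switches back to write-back
        # on its own once the battery reports a usable charge
        MegaCli64 -AdpBbuCmd -BbuLearn -aALL

    The POST message and the Current Cache Policy of WriteThrough both point at the BBU: the virtual drive itself recovered to Optimal, but write-back caching stays off until the battery is trusted again, which matches the "No Write Cache if Bad BBU" policy shown above.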

    Read the article
