Search Results

Search found 14546 results on 582 pages for 'mod authz host'.


  • VirtualHost on WAMPSERVER not working

    - by Martin C
    I currently have WAMPSERVER 2.2 set up on my PC. I'm trying to set up a new host called pplocal.local. I made the change in httpd.conf to uncomment this:

        Include conf/extra/httpd-vhosts.conf

    Then I edited httpd-vhosts.conf and added the following:

        NameVirtualHost 127.0.0.1
        <VirtualHost 127.0.0.1>
            DocumentRoot "E:/wamp2/www/"
            ServerName localhost
        </VirtualHost>
        <VirtualHost 127.0.0.1>
            DocumentRoot "E:/wamp2/www/pp/"
            ServerName pplocal.local
            <Directory "E:/wamp2/www/pp/">
                Options Indexes FollowSymLinks MultiViews
                AllowOverride all
                Order Deny,Allow
                Deny from all
                Allow from 127.0.0.1
            </Directory>
            CustomLog "E:\wamp2\logs\pplocal-access.log" common
            ErrorLog "E:\wamp2\logs\pplocal-error.log"
        </VirtualHost>

    In my Windows 'hosts' file I added:

        127.0.0.1 localhost
        127.0.0.1 pplocal.local

    Then I restarted Apache. If I type localhost in my browser, I get the files at E:/wamp2/www/. If I type pplocal.local in my browser, I get the files at E:/wamp2/www/ instead of those at E:/wamp2/www/pp/. I have followed several tutorials and can't see what I'm doing wrong. I'm new to editing the files associated with Apache, so any advice is appreciated. Thanks
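
    Every request falling through to the first (localhost) vhost is the classic symptom of name-based matching not taking effect, often because the NameVirtualHost argument doesn't carry the port the Listen directive uses. A minimal sketch of the usual fix, assuming Apache listens on port 80:

        NameVirtualHost *:80
        <VirtualHost *:80>
            DocumentRoot "E:/wamp2/www/"
            ServerName localhost
        </VirtualHost>
        <VirtualHost *:80>
            DocumentRoot "E:/wamp2/www/pp/"
            ServerName pplocal.local
        </VirtualHost>

    When the NameVirtualHost address/port doesn't exactly match the <VirtualHost> blocks, Apache tends to serve everything from the first vhost, which matches the behaviour described above.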

    Read the article

  • Msg 10054, Level 20, State 0, Line 0 error when altering a stored procedure to add a couple of cursors

    - by doug_w
    We have a home-rolled backup stored procedure that uses xp_cmdshell to create and clean up database backups. We are trying to deploy this script to an instance running SQL Server 2005 SP3, and I am at a bit of a loss as to why it is not working. When I execute the CREATE, it runs for about 30 seconds and yields the following error:

        Msg 10054, Level 20, State 0, Line 0
        A transport-level error has occurred when sending the request to the server.
        (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)

    In my tinkering I discovered that removing the cursors that actually do the work lets me create the stored procedure (not very helpful for me, though). If I add the cursors back in using an ALTER, the error returns. I would be curious whether someone has experienced this problem and knows of a solution or workaround. I am not opposed to posting the source; it is just lengthy. Things I have checked: the error logs; no dump files in the log directory. Thanks in advance for the help.

    Read the article

  • Requesting better explanation for expires headers

    - by syn4k
    I have successfully implemented expires headers; however, for several days I have been stumped by one thing. This article: http://www.tipsandtricks-hq.com/how-to-add-far-future-expires-headers-to-your-wordpress-site-1533 states:

        Keep in mind that when you use expires header the files are cached in the browser until it expires so do not use this on files that changes frequently.

    Other sites I have read say the same. But this doesn't seem to be true. I have updated an image, using the same name, several times. Each time I update and refresh my browser, the new image (with the same name) displays. My understanding from the article is that the old image should display unless I use a new name. Do you happen to know where the misunderstanding is? I have verified that the image in question has expires headers set on it.

    Request headers:

        Host                domain.com
        User-Agent          Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.28) Gecko/20120306 Firefox/3.6.28 FirePHP/0.5
        Accept              image/png,image/*;q=0.8,*/*;q=0.5
        Accept-Language     en-us,en;q=0.5
        Accept-Encoding     gzip,deflate
        Accept-Charset      ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive          115
        Connection          keep-alive
        Referer             http://domain.com/index.php
        Cookie              __utma=1.61479883.1332439113.1332783348.1332796726.4; __utmz=1.1332439113.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); PHPSESSID=lv2hun9klt2nhrdkdbqt8abug7; __utmb=1.33.10.1332796726; __utmc=1; ck_authorized=true
        x-insight           activate
        If-Modified-Since   Mon, 26 Mar 2012 21:55:33 GMT
        Cache-Control       max-age=0

    Response headers:

        Date            Mon, 26 Mar 2012 22:06:50 GMT
        Server          Apache/2.2.3 (CentOS)
        Connection      close
        Expires         Wed, 25 Apr 2012 22:06:50 GMT
        Cache-Control   max-age=2592000
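
    The If-Modified-Since and Cache-Control: max-age=0 lines in the request above are the likely explanation: pressing refresh makes the browser revalidate with the server regardless of the freshness lifetime, so a refresh always picks up the new image. Expires/max-age only suppresses requests during normal navigation (following links, loading subresources). For completeness, a minimal mod_expires sketch of the kind the article describes:

        # httpd.conf - far-future expiry for images (a sketch)
        <IfModule mod_expires.c>
            ExpiresActive On
            ExpiresByType image/png "access plus 1 month"
            ExpiresByType image/jpeg "access plus 1 month"
        </IfModule>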

    Read the article

  • "Server Unavailable" and removed permissions on .NET sites after Windows Update

    - by tags2k
    Our company has five almost identical Windows 2003 servers with the same host, and all but one performed an automatic Windows Update last night without issue. The one that had problems, of course, was the one which hosts the majority of our sites. What the update appeared to do was cause the NETWORK user to stop having access to the .NET Framework 2.0 files, as the event log was complaining about not being able to open System.Web. This resulted in every .NET site on the server returning "Server Unavailable" as the App Domains failed to initialise. I ran aspnet_regiis, which didn't appear to fix the problem, so I ran FileMon, which revealed that nobody but the Administrators group had access to any files in any of the website folders! After resetting the permissions, things appear to be fine. I was wondering if anyone had an idea of what could have caused this to go wrong? As I say, the four other servers updated without a problem. Are there any known issues involved with any of the following updates? My main suspect at the moment is the 3.5 update, as all of the sites on the server are running in 3.5.

        Windows Server 2003 Update Rollup for ActiveX Killbits for Windows Server 2003 (KB960715)
        Windows Server 2003 Security Update for Internet Explorer 7 for Windows Server 2003 (KB960714)
        Windows Server 2003 Microsoft .NET Framework 3.5 Family Update (KB959209) x86
        Windows Server 2003 Security Update for Windows Server 2003 (KB958687)

    Thanks for any light you can shed on this.
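
    If this recurs, a hedged sketch of the usual repair for broken .NET file ACLs, re-granting the worker-process account's rights with aspnet_regiis rather than fixing folders by hand (this assumes the sites run as NETWORK SERVICE, the IIS 6 default):

        REM re-register ASP.NET and reset its standard ACLs
        %windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -i
        REM grant the worker-process account the permissions ASP.NET needs
        %windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -ga "NT AUTHORITY\NETWORK SERVICE"

    Note that -i alone re-registers the handler mappings but does not touch per-site folder ACLs, which may be why it appeared to do nothing here.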

    Read the article

  • How can I get VirtualBox to run at 1366x768?

    - by Joe White
    I'm trying to run Windows 8 in VirtualBox. My laptop's display is exactly 1366x768. Windows 8 disables some of its features if the resolution is less than 1366x768, so I need to run the guest OS fullscreen. The problem is, VirtualBox refuses to run the guest at 1366x768. When VirtualBox is "fullscreen", the guest is only 1360x768 -- six pixels too narrow. So there's a three-pixel black bar at the left and right sides of the display. This user had the same problem, but the accepted answer is "install the Guest Additions", which I've already done; that got me to 1360, but not to 1366. According to the VirtualBox ticket tracker, there used to be a bug where the guest's screen width would be rounded down to the nearest multiple of 8, but they claim to have fixed the bug in version 3.2.12. I'm using version 4.1.18 and seeing the same problem they claim to have fixed, so either they broke it again, they were wrong about ever having fixed it, or my problem is something else entirely. This answer suggested giving the VM 128MB of video memory, and claimed no problems getting 1366x768 afterward. When I created the VM, its display memory was already defaulted to 128 MB. I tried increasing it to 256MB, but with no effect: the guest is still six pixels too narrow. My host OS is Windows 7 64-bit, and I'm running VirtualBox 4.1.18. How can I get VirtualBox to run my guest OS fullscreen at my display's native resolution of 1366x768?
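
    One known workaround is to force the extra resolution onto the guest with VBoxManage; a sketch, assuming the VM is named "Win8" (substitute your VM's actual name):

        # register 1366x768 as a custom video mode for the guest
        VBoxManage setextradata "Win8" CustomVideoMode1 1366x768x32
        # or, with the VM already running, hint the desired mode directly
        VBoxManage controlvm "Win8" setvideomodehint 1366 768 32

    After setting CustomVideoMode1, the mode should appear in the guest's display settings on the next boot; this sidesteps the width-rounding behaviour rather than fixing it.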

    Read the article

  • Configure one IIS site to handle two separate SSL certificates using external Load Balancing or SSL Acceleration Servers

    - by bmccleary
    I have one web application on our server that needs to be referenced by two different domain names, both of which have their own SSL certificates. The application is exactly the same for both domains, but we have to keep the two domain names for legal reasons. The problem is that, since both domains need their own SSL certificate, our IIS 7.5 configuration has to contain two separate IIS applications (both pointing to the same physical location), each with its own unique IP address and SSL certificate installed. Now, I know that, due to the nature of SSL communications, this is by design, and that you can't assign more than one SSL certificate per IP address and domain name. My question is: is there any way around this limitation that keeps one web application in IIS and has it service two SSL certificates based on host name? I know this is not possible with the basic IIS configuration, but I was thinking that with some combination of external load balancing and/or SSL acceleration servers/services we could have those servers process the SSL requests and leave IIS clean, with one single application. I am not familiar at all with these technologies, hence the reason I am asking if it is theoretically possible. If not, does anyone else know how to achieve this?
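
    SSL termination in front of IIS does make this possible: the proxy holds both certificates (on separate IPs, or on one IP with SNI) and forwards plain HTTP to a single IIS site. A minimal sketch using nginx as the terminator; the host names, certificate paths, and backend address are placeholders, not anything from the original setup:

        server {
            listen 443 ssl;
            server_name first-domain.example;
            ssl_certificate     /etc/ssl/first-domain.crt;
            ssl_certificate_key /etc/ssl/first-domain.key;
            location / {
                proxy_pass http://10.0.0.5:80;   # the single IIS application
                proxy_set_header Host $host;
            }
        }
        server {
            listen 443 ssl;
            server_name second-domain.example;
            ssl_certificate     /etc/ssl/second-domain.crt;
            ssl_certificate_key /etc/ssl/second-domain.key;
            location / {
                proxy_pass http://10.0.0.5:80;
                proxy_set_header Host $host;
            }
        }

    IIS then needs only one HTTP binding, and the forwarded Host header tells the application which domain was requested.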

    Read the article

  • How to grant read/write to specific user in any existent or future subdirectory of a given directory? [migrated]

    - by Samuel Rossille
    I'm a complete newbie at system administration and I'm doing this as a hobby. I host my own git repository on a VPS. Let's say my user is john. I'm using the ssh protocol to access my git repository, so my URL is something like ssh://[email protected]/path/to/git/myrepo/. Root is the owner of everything under /path/to/git. I'm attempting to give john read/write access to everything under /path/to/git/myrepo. I've tried both chmod and setfacl to control access, but both fail the same way: they apply rights recursively (with the right options) to all the currently existing subdirectories of /path/to/git/myrepo, but as soon as a new directory is created, my user cannot write in it. I know that there are hooks in git that would allow me to reapply the rights after each commit, but I'm starting to think that I'm going the wrong way, because this seems too complicated for a very basic purpose.

    Q: How should I set up my rights to give john rw access to anything under /path/to/git/myrepo and make it resilient to tree structure changes?
    Q2: If I should take a step back and change the general approach, please tell me.
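
    Default (inherited) ACLs are the piece that a plain recursive setfacl misses: a -d entry on a directory is copied onto anything created inside it later. A sketch, assuming the filesystem is mounted with ACL support:

        # grant on everything that exists now
        setfacl -R -m u:john:rwX /path/to/git/myrepo
        # and make future files and directories inherit the same grant
        setfacl -R -d -m u:john:rwX /path/to/git/myrepo

    The capital X grants execute (directory traversal) only where something already has execute. An alternative that avoids ACLs entirely is git's own sharing support: setting core.sharedRepository = group on the repository and putting john in the owning group makes git itself create new objects group-writable.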

    Read the article

  • Why do lighttpd and fastcgi keep sending me the *.fcgi file instead of the website content?

    - by e-satis
    I have the following config:

        server.modules = (
            "mod_compress",
            "mod_access",
            "mod_alias",
            "mod_rewrite",
            "mod_redirect",
            "mod_secdownload",
            "mod_h264_streaming",
            "mod_flv_streaming",
            "mod_accesslog",
            "mod_auth",
            "mod_status",
            "mod_expire",
            "mod_fastcgi"
        )

        [...]

        fastcgi.server = ( ".php" => ((
            "bin-path" => "/usr/bin/php-cgi",
            "socket" => "/var/tmp/lighttpd/php-fastcgi.socket" + var.PID,
            "max-procs" => 1,
            "kill-signal" => 9,
            "idle-timeout" => 10,
            "bin-environment" => (
                "PHP_FCGI_CHILDREN" => "200",
                "PHP_FCGI_MAX_REQUESTS" => "1000"
            ),
            "/pyapps/essai/blondes.fcgi" => (
                "main" => (
                    "socket" => "/var/tmp/lighttpd/django-fastcgi.socket",
                ),
            ),
            "bin-copy-environment" => ( "PATH", "SHELL", "USER" ),
            "broken-scriptfilename" => "enable"
        )))

        [...]

        $HTTP["host"] =~ "(^|www\.)cam\.com(\:[0-9]*)?$" {
            server.document-root = "/home/cam/web/"
            accesslog.filename = "/home/cam/log/access.log"
            server.errorlog = "/home/cam/log/error.log"
            server.follow-symlink = "enable"
            # files to check for if .../ is requested
            server.indexfiles = ( "index.php", "index.html", "index.htm", "index.rb")
            url.rewrite = ( "^(/blondes/.*)$" => "/pyapps/essai/blondes.fcgi$1" )
        }

    I have the following dir tree:

        /home/tv/web/
        `-- pyapps
            `-- essai
                |-- __init__.py
                |-- blondes.fcgi
                |-- blondes.pid
                |-- django-fcgi.py
                |-- manage.py
                |-- manage.pyo
                |-- plop
                |-- settings.py
                `-- urls.py

    There is no error when restarting lighttpd. Then I run:

        ./manage.py runfcgi method=prefork socket=/var/tmp/lighttpd/django-fastcgi.socket daemonize=false pidfile=blondes.pid

    No errors either. I then go to http://cam.com/blondes/. It offers me an empty file to download. I checked permissions, but everything is set to the same user and group, and they work for the PHP site. The file /var/tmp/lighttpd/django-fastcgi.socket exists. When I reload the page, I get no output in the error logs, nor from the manage.py runfcgi command. I probably missed something obvious, but what?
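
    One thing that stands out in the config above: the "/pyapps/essai/blondes.fcgi" entry is nested inside the ".php" handler's option list, so lighttpd never registers it as a fastcgi handler of its own and serves the rewritten path as a plain static file, which would explain the download. A sketch of the usual shape, with the Django handler as a separate top-level entry:

        fastcgi.server = (
            ".php" => ((
                "bin-path" => "/usr/bin/php-cgi",
                "socket" => "/var/tmp/lighttpd/php-fastcgi.socket" + var.PID,
                # ... the rest of the PHP options as before ...
            )),
            "/pyapps/essai/blondes.fcgi" => ((
                "socket" => "/var/tmp/lighttpd/django-fastcgi.socket",
                "check-local" => "disable",  # the .fcgi path is virtual, not a file under the docroot
            ))
        )

    check-local => "disable" matters for an external Django process: without it, lighttpd insists the script path exist under the document root before proxying the request.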

    Read the article

  • Is this a solution for having multiple SSL certificates on the same IP?

    - by Saif Bechan
    I am running CentOS on a VPS. I read some guides on having multiple SSL certificates on the same system, but I cannot get the basics to work. The guide that makes the most sense to me suggests the following. In CentOS I can make virtual NICs, so I made two virtual NICs to start with: 192.168.10.1 and 192.168.10.2. Now, I work in ISPmanager Pro, so this is listening on my primary IP 1.1.1.1. For each website I have them listening on 192.168.10.1:80 and 192.168.10.1:443. In the hosts file I made the following two entries:

        192.168.10.1 1st.com
        192.168.10.2 2nd.com

    Now the strange thing is that when I browse to 1st.com I do not get the website located at 192.168.10.1; I get the website located at my primary IP 1.1.1.1. Should I do something like forwarding or routing for this setup to work? And the basic question: will this setup even work? Are SSL certificates based on the IP address, or are they based on the host name, 1st.com and 2nd.com?
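
    For what it's worth: certificates are issued for host names, but without SNI the server must pick a certificate per IP:port before it ever sees the requested host name, which is where the classic one-certificate-per-public-IP rule comes from. Private virtual-NIC addresses and hosts-file entries only affect your own machine; visitors' DNS must resolve each domain to a public IP on the server. If only one public IP is available, SNI is the way out; a hedged Apache sketch (this assumes Apache 2.2.12 or later built against OpenSSL 0.9.8j or later, and SNI-capable clients):

        NameVirtualHost *:443
        <VirtualHost *:443>
            ServerName 1st.com
            SSLEngine on
            SSLCertificateFile    /etc/pki/tls/certs/1st.com.crt
            SSLCertificateKeyFile /etc/pki/tls/private/1st.com.key
        </VirtualHost>
        <VirtualHost *:443>
            ServerName 2nd.com
            SSLEngine on
            SSLCertificateFile    /etc/pki/tls/certs/2nd.com.crt
            SSLCertificateKeyFile /etc/pki/tls/private/2nd.com.key
        </VirtualHost>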

    Read the article

  • IP-dependent local port-forwarding on Linux

    - by chronos
    I have configured my server's sshd to listen on a non-standard port 42. However, at work I am behind a firewall/proxy which only allows outgoing connections to ports 21, 22, 80 and 443. Consequently, I cannot ssh to my server from work, which is bad. I do not want to return sshd to port 22. The idea is this: on my server, locally forward port 22 to port 42 if the source IP matches the external IP of my work's network. For clarity, let us assume that my server's IP is 169.1.1.1 (on eth1), and my work's external IP is 169.250.250.250. For all IPs different from 169.250.250.250, my server should respond with the expected 'connection refused', as it does for a non-listening port. I'm very new to iptables. I have briefly looked through the long iptables manual and these related/relevant questions:

        http://serverfault.com/questions/57872/iptables-question-forwarding-port-x-to-an-ssh-port-of-different-machine-on-the-n
        http://serverfault.com/questions/140622/how-can-i-port-forward-with-iptables

    However, those questions deal with more complicated several-host scenarios, and it is not clear to me which tables and chains I should use for local port-forwarding, and whether I should have two rules (for "question" and "answer" packets) or only one rule for "question" packets. So far I have only enabled forwarding via sysctl. I will start testing solutions tomorrow, and will appreciate pointers or case-specific examples for implementing my simple scenario. Is the draft solution below correct?

        iptables -A INPUT [-m state] [-i eth1] --source 169.250.250.250 -p tcp \
            --destination 169.1.1.1:42 --dport 22 --state NEW,ESTABLISHED,RELATED -j ACCEPT

    Should I use the mangle table instead of filter? And/or the FORWARD chain instead of INPUT?
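
    A sketch of the usual answer to this shape of problem: redirection of locally-delivered traffic lives in the nat table's PREROUTING chain (not INPUT or FORWARD, which only filter), and only the first packet of each connection traverses nat, so a single rule suffices and no separate "answer" rule is needed:

        # redirect port 22 to 42, but only for connections from work
        iptables -t nat -A PREROUTING -i eth1 -p tcp -s 169.250.250.250 \
            -d 169.1.1.1 --dport 22 -j REDIRECT --to-ports 42

    Connections to port 22 from any other source never match the rule and hit the real (closed) port 22, giving the desired 'connection refused'. The draft rule above also mixes a few things up: the destination port goes in --dport rather than appended to the --destination address, and --state belongs to the state/conntrack match, which filters packets but cannot redirect them.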

    Read the article

  • Postfix misconfigured? 550 Sender rejected from receiving server

    - by wnstnsmth
    We use Postfix on our CentOS 6 machine, with the following configuration. We use PHP's mail() function to send rudimentary password-reset emails, but there is a problem. As you will see, mydomain and myhostname are set correctly, afaik.

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        html_directory = no
        inet_interfaces = localhost
        inet_protocols = all
        mail_owner = postfix
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost
        mydomain = ***.ch
        myhostname = test.***.ch
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        unknown_local_recipient_reject_code = 550

    This is what appears in Postfix's /var/log/maillog upon sending an email to ***.***@***.ch, with ***.ch being the same domain our sending server test.***.ch is on:

        Dec 13 16:55:06 R12X0210 postfix/pickup[6831]: E6D6311406AB: uid=48 from=<apache>
        Dec 13 16:55:06 R12X0210 postfix/cleanup[6839]: E6D6311406AB: message-id=<20121213155506.E6D6311406AB@test.***.ch>
        Dec 13 16:55:07 R12X0210 postfix/qmgr[6832]: E6D6311406AB: from=<apache@test.***.ch>, size=1276, nrcpt=1 (queue active)
        Dec 13 16:55:52 R12X0210 postfix/smtp[6841]: E6D6311406AB: to=<***.***@***.ch>, relay=mail.***.ch[**.**.249.3]:25, delay=46, delays=0.18/0/21/24, dsn=5.0.0, status=bounced (host mail.***.ch[**.**.249.3] said: 550 Sender Rejected (in reply to RCPT TO command))
        Dec 13 16:55:52 R12X0210 postfix/cleanup[6839]: 8562C11406AC: message-id=<20121213155552.8562C11406AC@test.***.ch>
        Dec 13 16:55:52 R12X0210 postfix/bounce[6848]: E6D6311406AB: sender non-delivery notification: 8562C11406AC
        Dec 13 16:55:52 R12X0210 postfix/qmgr[6832]: 8562C11406AC: from=<>, size=3065, nrcpt=1 (queue active)
        Dec 13 16:55:52 R12X0210 postfix/qmgr[6832]: E6D6311406AB: removed
        Dec 13 16:55:52 R12X0210 postfix/local[6850]: 8562C11406AC: to=<root@test.***.ch>, orig_to=<apache@test.***.ch>, relay=local, delay=0.13, delays=0.07/0/0/0.05, dsn=2.0.0, status=sent (delivered to mailbox)
        Dec 13 16:55:52 R12X0210 postfix/qmgr[6832]: 8562C11406AC: removed

    So the receiving server rejects the sender (line 4 of the log output). We have tested with one other recipient and it worked, so this problem might be completely unrelated to our settings and instead related to the recipient. Still, with this question I want to make sure we're not making an obvious misconfiguration on our side.
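
    Nothing in the main.cf above looks wrong for this symptom; it is the envelope sender apache@test.***.ch that the remote policy is judging. If the receiving side turns out to be rejecting the bare web-server sender, one hedged option is to rewrite the envelope sender to a routable mailbox with sender_canonical_maps (the replacement address here is a placeholder):

        # main.cf
        sender_canonical_maps = hash:/etc/postfix/sender_canonical

        # /etc/postfix/sender_canonical
        apache@test.***.ch    webmaster@***.ch

        # then rebuild the map and reload
        postmap /etc/postfix/sender_canonical && postfix reload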

    Read the article

  • iptables rules to allow HTTP traffic to one domain only

    - by Emily
    Hi everyone, I need to configure my machine to allow HTTP traffic to/from serverfault.com only. All other websites, services and ports are not accessible. I came up with these iptables rules:

        # drop everything
        iptables -P INPUT DROP
        iptables -P OUTPUT DROP

        # now, allow connection to website serverfault.com on port 80
        iptables -A OUTPUT -p tcp -d serverfault.com --dport 80 -j ACCEPT
        iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

        # allow loopback
        iptables -I INPUT 1 -i lo -j ACCEPT

    It doesn't work quite well. After I drop everything and move on to rule 3:

        iptables -A OUTPUT -p tcp -d serverfault.com --dport 80 -j ACCEPT

    I get this error:

        iptables v1.4.4: host/network `serverfault.com' not found
        Try `iptables -h' or 'iptables --help' for more information.

    Do you think it is related to DNS? Should I allow it as well? Or should I just put IP addresses in the rules? Do you think what I'm trying to do could be achieved with simpler rules? How? I would appreciate any help or hints on this. Thanks a lot!
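
    It is DNS: iptables resolves a host name once, at the moment the rule is inserted, and with the OUTPUT policy already set to DROP the lookup itself is blocked. A sketch of one ordering that works: allow DNS first, then add the hostname rule (which expands to whatever IPs the name resolves to at insertion time, so it goes stale if the site moves):

        # allow DNS queries and replies first
        iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
        iptables -A INPUT  -p udp --sport 53 -j ACCEPT

        # now the name can be resolved when the rule is added
        iptables -A OUTPUT -p tcp -d serverfault.com --dport 80 -j ACCEPT
        iptables -A INPUT  -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

    Pinning the site's IP addresses in the rules instead avoids the DNS dependency entirely, at the cost of breaking silently if the site changes address.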

    Read the article

  • Get-EventLog issue

    - by Jim B
    I wanted to get a quick report of some log entries I saw on a server, so I ran:

        Get-Eventlog -logname system -newest 10 -computer fs1 | fl

    I got events back, but the descriptions were all wrong. Here's an example:

        Index              : 1260055
        EntryType          : Warning
        InstanceId         : 2186936367
        Message            : The description for Event ID '-2108030929' in Source 'W32Time' cannot be found. The
                             local computer may not have the necessary registry information or message DLL files
                             to display the message, or you may not have permission to access them. The following
                             information is part of the event:'time.windows.com,0x1'
        Category           : (0)
        CategoryNumber     : 0
        ReplacementStrings : {time.windows.com,0x1}
        Source             : W32Time
        TimeGenerated      : 1/25/2010 10:43:31 AM
        TimeWritten        : 1/25/2010 10:43:31 AM
        UserName           :

    Note that if I pull the EventID property it's correct (in this case 38). Is this a known issue, or is something wrong? The messages resolve fine via Event Viewer, both locally and remotely. Here is the PowerShell version info:

        Name             : ConsoleHost
        Version          : 2.0
        InstanceId       : bc58fcf8-bba3-4ca8-8972-17dbd5d9ff08
        UI               : System.Management.Automation.Internal.Host.InternalHostUserInterface
        CurrentCulture   : en-US
        CurrentUICulture : en-US
        PrivateData      : Microsoft.PowerShell.ConsoleHost+ConsoleColorProxy
        IsRunspacePushed : False
        Runspace         : System.Management.Automation.Runspaces.LocalRunspace

    Here is the revised version info:

        Name                      Value
        ----                      -----
        CLRVersion                2.0.50727.3603
        BuildVersion              6.0.6002.18111
        PSVersion                 2.0
        WSManStackVersion         2.0
        PSCompatibleVersions      {1.0, 2.0}
        SerializationVersion      1.1.0.1
        PSRemotingProtocolVersion 2.1
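
    One possible explanation, hedged since it depends on the exact environment: Get-EventLog formats descriptions using message DLLs looked up where the cmdlet runs, so when the querying machine can't resolve the source's message resources, the text falls back to the error shown above even though EventID is fine. A workaround sketch is to let the remote machine render its own messages via PowerShell remoting (this assumes WinRM is enabled on fs1):

        Invoke-Command -ComputerName fs1 { Get-EventLog -LogName System -Newest 10 } | fl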

    Read the article

  • Exchange SBS 2003 server stopped receiving mail over the weekend, senders getting "Relay access denied"

    - by Charlie W.
    Firstly, I should say that I know my way around Windows very well, but I don't really know the first thing about Exchange. I am trying to support a user that is running an SBS 2003 server with Exchange. Over the weekend, everyone sending something to any of his addresses gets an error message like the following:

        Delivery to the following recipient failed permanently:
            [email protected]

        Technical details of permanent failure:
        Google tried to deliver your message, but it was rejected by the recipient domain. We recommend
        contacting the other email provider for further information about the cause of this error. The
        error that the other server returned was: 554 554 5.7.1 <[email protected]>: Relay access denied (state 14).

        ----- Original message -----
        Received: by 10.114.18.7 with SMTP id 7mr5572745war.127.1275423472120;
                Tue, 01 Jun 2010 13:17:52 -0700 (PDT)
        MIME-Version: 1.0
        Sender: [email protected]
        Received: by 10.143.10.15 with HTTP; Tue, 1 Jun 2010 13:17:32 -0700 (PDT)
        From: My Name <[email protected]>
        Date: Tue, 1 Jun 2010 15:17:32 -0500
        X-Google-Sender-Auth: XiPrP8Em_6Eb94EH9m84nJVGvCY
        Message-ID: <[email protected]>
        Subject: TEST
        To: Client <[email protected]>
        Content-Type: multipart/alternative; boundary=001636b1484ffe72470487fdaa5b

    There are a host of errors in the Application log, but nothing that leaps out at me as being obvious. But then again, I don't really know what I'm looking for. Any help would be greatly appreciated.
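
    "Relay access denied" from the machine answering on port 25 usually means that machine no longer believes it is the final destination for the domain: either the domain's MX record now points at a host that won't accept the mail, or the Exchange recipient policy lost the domain. Two hedged first checks from any workstation (the domain name is a placeholder):

        nslookup -type=MX client-domain.example

        telnet <the MX host from above> 25
        HELO test.local
        MAIL FROM:<[email protected]>
        RCPT TO:<[email protected]>

    If the MX doesn't point at the SBS server's public IP, the fix is on the DNS side; if it does, compare the accepted domains in Exchange's recipient policy against the rejected addresses.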

    Read the article

  • How to prevent remote hosts from delivering mail to Postfix with spoofed From header?

    - by Hongli Lai
    I have a host, let's call it foo.com, on which I'm running Postfix on Debian. Postfix is currently configured to do these things:

        - All mail with @foo.com as recipient is handled by this Postfix server. It forwards all such mail to my Gmail account. The firewall thus allows port 25.
        - All mail with another domain as recipient is rejected.
        - SPF records have been set up for the foo.com domain, saying that foo.com is the sole origin of all mail from @foo.com.
        - Applications running on foo.com can connect to localhost:25 to deliver mail, with [email protected] as sender.

    However, I recently noticed that some spammers are able to send spam to me while passing the SPF checks. Upon further inspection, it looks like they connect to my Postfix server and then say:

        HELO bar.com
        MAIL FROM:<[email protected]>    <---- this!
        RCPT TO:<[email protected]>
        DATA
        From: "Buy Viagra" <[email protected]>    <--- and this!
        ...

    How do I prevent this? I only want applications running on localhost to be able to say MAIL FROM:<[email protected]>. Here's my current config (main.cf): https://gist.github.com/1283647
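
    The standard Postfix answer here is smtpd_sender_restrictions with a sender-access map: trusted local clients are permitted first, and any other client claiming an @foo.com envelope sender is rejected. A sketch, assuming mynetworks should cover only the machine itself:

        # main.cf
        mynetworks = 127.0.0.0/8
        smtpd_sender_restrictions =
            permit_mynetworks,
            check_sender_access hash:/etc/postfix/sender_access

        # /etc/postfix/sender_access
        foo.com    REJECT spoofed sender address

        # then rebuild the map and reload
        postmap /etc/postfix/sender_access && postfix reload

    This blocks the spoofed MAIL FROM. The forged From: header inside DATA is a separate matter (header_checks can police that), and note that SPF only helps when the receiving side actually evaluates it, which stock Postfix does not do by default.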

    Read the article

  • Oracle logical standby fails with ORA-01919

    - by DCookie
    I have an Oracle logical standby database being managed via Data Guard. Just this morning the redo apply process began failing with an ORA-01919 error, indicating one of our application roles did not exist. However, I can see the role on both the primary and standby databases. We also have a physical standby that has long since applied the redo where this is happening on the logical, without issue. I have opened an SR with Oracle. I was wondering if anyone out there has seen this before. I guess I should mention: Oracle 10.2.0.4, Win2003 Server SP2.

    UPDATE: So far, Oracle Support has not provided an answer. I thought I'd post here what I have learned so far. It appears that a grant of DBA to a role works fine on the primary for users granted the role, but does not work on the logical standby. In other words:

        create role TEST;
        grant dba to TEST;
        grant TEST to auser;

        connect auser
        set role TEST;
        grant <existing role> to <existing user>;

    This works on the primary instance but fails on the logical standby. A workaround appears to be to grant each role on the primary to the role TEST with admin option, on the logical standby:

        grant <existing role> to TEST with admin option;  -- do this on the logical standby

    Then the command works on the logical standby.

    Read the article

  • EFS Remote Encryption

    - by Apoulet
    We have been trying to set up EFS across our domain. Unfortunately, reading/writing files over a network share does not work; we get an "Access Denied" error. Another worrying fact is that I managed to get it working for one machine, but no other would work. The machines are all Windows 2008 R2, running as VMs under an ESXi host. According to http://technet.microsoft.com/en-us/library/bb457116.aspx#EHAA we set up the involved machines to be trusted for delegation. The users are not restricted and can be trusted for delegation. The users have logged in on both sides and can read/write encrypted files without issue locally. I enabled Kerberos logging in the registry, and these are the relevant logs that I get on the machine that has the encrypted files, for every certificate that the user possesses (only the Key Name changes):

    Event ID 5058: Audit Success, "Other System Events"

        Key file operation.

        Subject:
            Security ID:    {MyDOMAIN}\{MyID}
            Account Name:   {MyID}
            Account Domain: {MyDOMAIN}
            Logon ID:       0xbXXXXXXX

        Cryptographic Parameters:
            Provider Name:  Microsoft Software Key Storage Provider
            Algorithm Name: Not Available.
            Key Name:       {CE885431-9B4F-47C2-8415-2D766B999999}
            Key Type:       User key.

        Key File Operation Information:
            File Path: C:\Users\{MyID}\AppData\Roaming\Microsoft\Crypto\RSA\S-1-5-21-4585646465656-260371901-2912106767-1207\66099999999991e891f187e791277da03d_dfe9ecd8-31c4-4b0f-9b57-6fd3cab90760
            Operation: Read persisted key from file.
            Return Code: 0x0

    Event ID 5061: Audit Failure, "System Integrity"

        Cryptographic operation.

        Subject:
            Security ID:    {MyDOMAIN}\{MyID}
            Account Name:   {MyID}
            Account Domain: {MyDOMAIN}
            Logon ID:       0xbXXXXXXX

        Cryptographic Parameters:
            Provider Name:  Microsoft Software Key Storage Provider
            Algorithm Name: RSA
            Key Name:       {CE885431-9B4F-47C2-8415-2D766B999999}
            Key Type:       User key.

        Cryptographic Operation:
            Operation:   Open Key.
            Return Code: 0x8009000b

    Could this be related to this error from the CryptAcquireContext function?

        NTE_BAD_KEY_STATE 0x8009000BL
        The user password has changed since the private keys were encrypted.

    The problem is that the users I am using at the moment cannot change their password.

    Read the article

  • Enabling NAT forwarding using a second WAN interface and a second gateway on Ubuntu

    - by nixnotwin
    I have 3 interfaces:

        eth0 192.168.0.50/24
        eth1 10.0.0.200/24
        eth2 225.228.123.211

    The default gateway is 192.168.0.1, which I want to keep as it is in the changes I want to make. I want to masquerade eth1 10.0.0.200/24 and enable NAT forwarding to eth2. So I have done this:

        ip route add 225.228.123.208/29 dev eth2 src 225.228.123.211 table t1
        ip route add default via 225.228.123.209 dev eth2 table t1
        ip rule add from 225.228.123.211 table t1
        ip rule add to 225.228.123.211 table t1

    Now I can receive ping replies from any internet host if I do:

        ping -I eth2 8.8.8.8

    To enable NAT forwarding I did this:

        sudo iptables -A FORWARD -o eth2 -i eth1 -s 10.0.0.0/24 -m conntrack --ctstate NEW -j ACCEPT
        sudo iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
        sudo iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE

    But it isn't working. To test, I used a client PC on the 10.0.0.0/24 network with its gateway set to 10.0.0.200. I want to keep 192.168.0.1 as the default gateway, and the traffic that comes in via eth1 10.0.0.200/24 should be forwarded to eth2 225.228.123.211. I have also enabled forwarding on Ubuntu.
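
    The MASQUERADE rule above is attached to the wrong egress interface: traffic from the 10.0.0.0/24 clients is supposed to leave via eth2, so that is where it must be rewritten. A sketch of the correction, plus a routing rule the setup still needs (routing happens before POSTROUTING NAT, so without it the client traffic follows the default route out eth0 instead of eth2):

        # masquerade on the WAN side (eth2), not the LAN side (eth1)
        sudo iptables -t nat -D POSTROUTING -o eth1 -j MASQUERADE
        sudo iptables -t nat -A POSTROUTING -o eth2 -s 10.0.0.0/24 -j MASQUERADE

        # steer forwarded client traffic into table t1, whose default
        # route exits via eth2's gateway
        sudo ip rule add from 10.0.0.0/24 table t1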

    Read the article

  • Which events specifically cause Windows 2008 to mark a SAN volume offline?

    - by Jeremy
    I am searching for the specific criteria/events that will cause Windows 2008 to mark a SAN volume as offline in Disk Management, even though it is connected to that SAN volume via FC or iSCSI. Microsoft states that "A dynamic disk may become Offline if it is corrupted or intermittently unavailable. A dynamic disk may also become Offline if you attempt to import a foreign (dynamic) disk and the import fails. An error icon appears on the Offline disk. Only dynamic disks display the Missing or Offline status." I am specifically wondering if, on the SAN, changing the path to the disk (such as the disk being presented to the host via a different iSCSI target IQN or a different LUN #) would cause a volume to be offlined in Disk Management. Thanks!

    Edit: I have already found two reasons why a disk might be set offline: disk signature collisions and the SAN disk policy. The bounty would be awarded to someone who can find further documented reasons related to changes in the volume's path.

        Disk signature collisions: http://blogs.technet.com/b/markrussinovich/archive/2011/11/08/3463572.aspx
        SAN disk policy: http://jeffwouters.nl/index.php/2011/06/disk-offline-with-error-the-disk-is-offline-because-of-a-policy-set-by-an-administrator/
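
    For reference, the SAN disk policy mentioned in the edit can be inspected and changed with diskpart; a short sketch:

        DISKPART> san
        SAN Policy  : Offline Shared

        DISKPART> san policy=OnlineAll

    With the common Server 2008 default of Offline Shared, any newly arriving non-boot disk on a shared bus, which is exactly what a LUN re-presented under a new target IQN or LUN number looks like to the host, comes up offline until brought online manually.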

    Read the article

  • EC2 instance is blocking all outbound connections, how to diagnose/fix?

    - by Fraggle
    My EC2 instance is blocking all outbound connections:

        wget http://www.google.com   ==> hangs
        ping google.com              ==> hangs
        ssh user@anyserver           ==> hangs

    I ran sudo iptables -F to eliminate all rules, to no avail. The AWS Management Console shows that the Security Group for that instance has inbound rules allowing ssh and port 80; I can't find anything about outbound rules there. Rebooting the instance changed nothing. If anyone knows how to diagnose or fix this, please help. Adding info:

        [ec2-user@ip-10-112-62-73 ~]$ ifconfig
        eth0      Link encap:Ethernet  HWaddr 12:31:3D:06:31:BB
                  inet addr:10.112.62.73  Bcast:10.112.63.255  Mask:255.255.254.0
                  inet6 addr: fe80::1031:3dff:fe06:31bb/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:1933 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1764 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:164075 (160.2 KiB)  TX bytes:343256 (335.2 KiB)
                  Interrupt:9

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:8 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:672 (672.0 b)  TX bytes:672 (672.0 b)

        [ec2-user@ip-10-112-62-73 ~]$ ip route show
        10.112.62.0/23 dev eth0  proto kernel  scope link  src 10.112.62.73
        default via 10.112.62.1 dev eth0

    Read the article

  • PhpMyAdmin import/export - strange character encoding issues.

    - by John Hunt
    Hello, I'm migrating a site to a new host, and there are a couple of databases on there. There's no SSH access, so I'm stuck with phpMyAdmin. The issue is that certain characters (namely just whitespace) seem to be corrupt on the new site (the HTML is the same, and Apache doesn't seem to be messing with any encodings; you can see that the strange characters have changed when I use less on my Linux machine after downloading a table dump from both servers). The issue isn't as bad if I import into the new database as utf-8: whitespace characters get only one funny A-type symbol instead of two. I've been trying various combinations of character encodings, etc., to no avail.

    Exporting from:

        phpMyAdmin 2.6.2
        MySQL 4.1.20
        MySQL connection collation: utf8_general_ci
        MySQL charset: UTF-8 Unicode (utf8)
        Collation on tables and their fields: latin1_swedish_ci

    Importing to:

        phpMyAdmin 2.11.9.2
        MySQL client version: 5.0.45
        MySQL charset: UTF-8 Unicode (utf8)
        MySQL connection collation: utf8_general_ci

    The import SQL has this kind of thing in it:

        ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=192 ;

    I get the impression this is actually a bug or something with mysqldump, as nothing seems to work. Does anyone have any insight into this? Cheers, John.
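
    The symptom (each special whitespace character turning into one or two stray A-type symbols) is the classic sign of latin1 table data being pushed through a utf8 connection. A hedged sketch of the usual fix: force the import connection's charset to match the dump's latin1 tables by prepending a directive to the dump file before loading it:

        -- prepend to the top of the dump before importing (a sketch)
        SET NAMES latin1;

    Alternatively, re-export choosing latin1 as the connection character set, so the bytes in the dump match the declared CHARSET=latin1 of the tables; mixing a utf8 connection with latin1 table definitions is what mangles the whitespace characters on the round trip.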

    Read the article

  • Apache22 on FreeBSD - Starts, does not respond to requests

    - by NuclearDog
    Hey folks! I'm running Apache 2.2.17 with the peruser MPM on FreeBSD 8.2-RC1 on Amazon's EC2 (so it's Xen). It was installed from ports. My problem is that, although Apache is running, listening for, and accepting connections, it doesn't actually respond to any of them or show them in the log at all. If I telnet to the port it's listening on and type out an HTTP request:

        GET / HTTP/1.1
        Host: asdfasdf

    ...and hit enter a couple of times, it just sits there. Nothing. No response when requesting with a browser either. There doesn't appear to be anything helpful in the error log:

        [Sun Jan 09 16:56:24 2011] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
        [Sun Jan 09 16:56:25 2011] [notice] Digest: generating secret for digest authentication ...
        [Sun Jan 09 16:56:25 2011] [notice] Digest: done
        [Sun Jan 09 16:56:25 2011] [notice] Apache/2.2.17 (FreeBSD) mod_ssl/2.2.17

    The access log stays empty:

        root:/var/log# wc httpd-access.log
               0       0       0 httpd-access.log

    I've tried with accf_http and accf_data both enabled and disabled, and with both the stock configuration and my customized config. I also tried uninstalling apache22-peruser-mpm and installing straight apache22... still no luck. I tried removing all of the LoadModule lines from httpd.conf and re-enabling only the ones necessary to parse the config. I ended up with only the following loaded:

        root:/usr/local/etc/apache22# /usr/local/sbin/apachectl -M
        Loaded Modules:
         core_module (static)
         mpm_peruser_module (static)
         http_module (static)
         so_module (static)
         authz_host_module (shared)
         log_config_module (shared)
         alias_module (shared)
        Syntax OK

    Same results. Apache is definitely what's listening on port 80:

        root:/usr/local/etc/apache22# sockstat -4 | grep httpd
        root     httpd      43789 3  tcp4 6  *:80   *:*
        root     httpd      43789 4  tcp4    *:*    *:*

    And I know it's not a firewall issue, as there is nothing running locally, and connecting from the local box to 127.0.0.1:80 results in the same issue. Does anyone have any idea what's going on? Why would it be doing this? I've exhausted all of my debugging expertise. :/ Thanks for any suggestions!

    Read the article

  • Why can't I bind to 127.0.0.1 on Mac OS X?

    - by Noah Lavine
    Hello, I'm trying to set up a simple web server on Mac OS X, and I keep getting an error when I run bind. Here's what I'm running (this transcript uses GNU Guile, but just as a convenient interface to POSIX):

        (define addr (inet-aton "127.0.0.1"))                      ; get internal representation of 127.0.0.1
        (define sockaddr (make-socket-address AF_INET addr 8080))  ; make a struct sockaddr
        (define sock (socket PF_INET SOCK_STREAM 0))               ; make a socket
        (bind sock sockaddr)                                       ; bind the socket to the address

    That gives me the error:

        In procedure bind: can't assign requested address

    So I tried it again, allowing any address:

        (define anyaddr (make-socket-address AF_INET INADDR_ANY 8080))  ; allow any address
        (bind sock anyaddr)

    And that works fine. But it's weird, because ifconfig lo0 says:

        lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
                inet6 ::1 prefixlen 128
                inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
                inet 127.0.0.1 netmask 0xff000000

    So the loopback device is assigned to 127.0.0.1. My question is: why can't I bind to that address? Thanks.

    Update: the output of route get 127.0.0.1 is:

        route to: localhost
        destination: localhost
        interface: lo0
        flags: <UP,HOST,DONE,LOCAL>
        recvpipe  sendpipe  ssthresh  rtt,msec    rttvar  hopcount      mtu     expire
           49152     49152         0         0         0         0    16384         0
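
    A plausible culprit, hedged since I can't test this exact setup: inet-aton returns the address in network byte order, while Guile's make-socket-address expects an integer in host byte order, so the code above actually asks to bind 1.0.0.127 on a little-endian Mac, and that address really isn't assigned anywhere. Using the host-order constant sidesteps the conversion entirely:

        ;; INADDR_LOOPBACK is already in host byte order, like INADDR_ANY
        (define sockaddr (make-socket-address AF_INET INADDR_LOOPBACK 8080))
        (bind sock sockaddr)

    If the address has to come from a string, (inet-pton AF_INET "127.0.0.1") is the replacement for the deprecated inet-aton in newer Guile and yields a host-order integer consistent with make-socket-address.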

    Read the article

  • File uploads and client_max_body_size in nginx + gunicorn + django

    - by carlosescri
    I need to configure nginx + gunicorn to be able to upload files greater than the default max size on both servers. My nginx .conf file looks like this:

        server {
            # ...
            location / {
                proxy_pass_header Server;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Scheme $scheme;
                proxy_connect_timeout 60;
                proxy_pass http://localhost:8000/;
            }
        }

    The idea is to allow requests of 20M for two locations:

        /admin/path/to/upload?param=value
        /installer/other/path/to/upload?param=value

    I've tried adding location directives at the same level as the one pasted here (getting 404 errors) and also inside the location / directive (getting 413 Entity Too Large errors). My location directives look like these in their simplest form:

        location /admin/path/to/upload/ {
            client_max_body_size 20M;
        }
        location /installer/other/path/to/upload/ {
            client_max_body_size 20M;
        }

    But they don't work (actually I've tested lots of combinations and I'm getting desperate thinking about this). Please help if you can: what settings do I need to make this work? Thank you so much!
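
    The 404s are the clue: a sibling location containing only client_max_body_size has no content handler, because proxy_pass is not inherited across sibling locations, so nginx tries to serve the URI as a static file. Each upload location needs its own proxy_pass; a sketch:

        location /admin/path/to/upload {
            client_max_body_size 20M;
            proxy_pass http://localhost:8000;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Scheme $scheme;
        }

    (and the same for /installer/other/path/to/upload). The proxy_set_header lines are repeated because settings made inside location / do not apply to its siblings; moving the common proxy settings up to the server block would avoid the duplication. As far as I know, gunicorn imposes no request-body size limit of its own, so the nginx side should be the whole fix.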

    Read the article

  • Prevent nginx from redirecting traffic from https to http when used as a reverse proxy

    - by Chris Pratt
    Here's my abbreviated nginx vhost conf:

        upstream gunicorn {
            server 127.0.0.1:8080 fail_timeout=0;
        }

        server {
            listen 80;
            listen 443 ssl;
            server_name domain.com ~^.+\.domain\.com$;

            location / {
                try_files $uri @proxy;
            }

            location @proxy {
                proxy_pass_header Server;
                proxy_redirect off;
                proxy_set_header Host $http_host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto https;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Scheme $scheme;
                proxy_connect_timeout 10;
                proxy_read_timeout 120;
                proxy_pass http://gunicorn;
            }
        }

    The same server needs to serve both HTTP and HTTPS; however, when the upstream issues a redirect (for instance, after a form is processed), all HTTPS requests are redirected to HTTP. The only thing I have found that corrects this issue is changing proxy_redirect to the following:

        proxy_redirect http:// https://;

    That works wonderfully for requests coming from HTTPS, but if a redirect is issued over HTTP it also redirects that to HTTPS, which is a problem. Out of desperation, I tried:

        if ($scheme = 'https') {
            proxy_redirect http:// https://;
        }

    But nginx complains that proxy_redirect isn't allowed there. The only other option I can think of is to define the two servers separately and set proxy_redirect only on the SSL one, but then I would have to duplicate the rest of the conf (there's a lot in the server directive that I omitted for simplicity's sake). I know I could also use an include directive to factor out the redundancy, but I really want to keep just one conf file without any dependencies. So, first, is there something I'm missing that will negate the problem entirely? Or, second, if not, is there any other way (besides including an external file) to factor out the redundant config information so that I can separate out the HTTP and HTTPS versions of the server config?
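
    proxy_redirect accepts variables in its replacement string, which collapses the two cases into one directive; a sketch:

        location @proxy {
            # rewrite upstream Location headers to whatever scheme the
            # client actually used (http stays http, https stays https)
            proxy_redirect http:// $scheme://;
            # ... the other proxy_* directives unchanged ...
        }

    Relatedly, the hard-coded proxy_set_header X-Forwarded-Proto https above misinforms the upstream on plain-HTTP requests; passing $scheme there instead lets the application generate scheme-correct absolute URLs itself, which can remove the need for any proxy_redirect rewriting at all.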

    Read the article
