Search Results

Search found 53354 results on 2135 pages for 'compilation error'.


  • rsync to EC2 using ssh -i

    - by isomorphismes
    I'm able to ssh -i mykey.pem to EC2. I'm able to scp -i mykey.pem to EC2. But when I try rsync -avz -e "ssh -i mykey.pem" I get this error:

        Warning: Identity file mykey.pem not accessible: No such file or directory.
        Permission denied (publickey).
        rsync: connection unexpectedly closed (0 bytes received so far) [sender]
        rsync error: unexplained error (code 255) at io.c(605) [sender=3.0.9]

    Any suggestions as to what I've done wrong?
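
    A likely cause (my reading, not confirmed in the question) is that the relative path mykey.pem is resolved against a different working directory when rsync spawns ssh. A minimal sketch of the usual fix, with a hypothetical absolute key path and host:

        rsync -avz -e "ssh -i /home/me/.ssh/mykey.pem" ./localdir/ ubuntu@ec2-host.amazonaws.com:/remote/dir/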

  • Active Desktop cannot be restored on Win XP

    - by Phil.Wheeler
    This is more of a major annoyance than anything that's stopping me from doing my work, but Active Desktop on my work XP machine has somehow become corrupted and I can't get it working again. I've tried browsing to C:\Documents and Settings\%my-user-name%\Application Data\Microsoft\Internet Explorer and changing, deleting or replacing the Desktop.htt file, but it's not achieving anything. The error I was previously getting when clicking the "Restore my active desktop" button was:

        Internet Explorer Script Error
        An error has occurred in the script on this page.
        Line:  65
        Char:  1
        Error: Object doesn't support this action
        Code:  0
        URL:   file://C:/Documents%20and%20Settings//Application%20Data/Microsoft/Internet%20Explorer/Desktop.htt

    Any ideas?
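
    A widely circulated workaround for exactly this script error (not something from the poster, and worth a registry backup first) is to reset the DeskHtmlVersion flag so XP regenerates Desktop.htt, then toggle Active Desktop off and on:

        reg add "HKCU\Software\Microsoft\Internet Explorer\Desktop\SafeMode\Components" /v DeskHtmlVersion /t REG_DWORD /d 0 /f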

  • Visual Studio 2005 won't install on Windows 7

    - by Peanut
    Hi, my question relates very closely to this question: http://superuser.com/questions/34190/visual-studio-2005-sp1-refuses-to-install-in-windows-7 However, that question hasn't provided the answer I'm looking for. I'm trying to install Visual Studio 2005 onto a clean Windows 7 (64 bit) box, but I keep getting the following error when the 'Microsoft Visual Studio 2005' component finishes installing:

        Error 1935. An error occurred during the installation of assembly
        'policy.8.0.Microsoft.VC80.OpenMP,type="win32-policy",version="8.0.50727.42",
        publicKeyToken="1fc8b3b9a1e18e3b",processorArchitecture="x86"'.
        Please refer to Help and Support for more information. HRESULT: 0x80073712.

    On my first attempt to install VS 2005 I got a warning about compatibility issues. I stopped at that point, downloaded the necessary service packs and restarted the installation from the beginning. Ever since then I just get the error message above. I keep rolling back the installation and trying again, but it's always the same error. Any help would be very much appreciated. Thanks.
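
    HRESULT 0x80073712 is ERROR_SXS_COMPONENT_STORE_CORRUPT, i.e. damage to the Windows component store rather than to the VS installer itself. A reasonable first diagnostic (a suggestion, not a guaranteed fix) before retrying the installation:

        sfc /scannow

    On Windows 7 the System Update Readiness Tool (KB947821) targets the same class of component-store corruption.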

  • Globe SSL with NGINX SSL certificate problem, please help

    - by PartySoft
    Hello, I have a big problem with installing a certificate for nginx (the same happens with Apache, though). I have 3 files: __domain_com.crt, __domain_com.ca-bundle and ssl.key. I tried to append them:

        cat __domain_com.crt __leechpack_com.ca-bundle > bundle.crt

    but if I do it like this I get an error:

        [emerg]: SSL_CTX_use_certificate_chain_file("/etc/nginx/__leechpack_com.crt") failed
        (SSL: error:0906D066:PEM routines:PEM_read_bio:bad end line
        error:140DC009:SSL routines:SSL_CTX_use_certificate_chain_file:PEM lib)

    And that's because the delimiters of the certificates aren't separated:

        ZqTjb+WBJQ==
        -----END CERTIFICATE----------BEGIN CERTIFICATE-----
        MIIE6DCCA9CgAwIBAgIQdIYhlpUQySkmKUvMi/gpLDANBgkqhkiG9w0BAQUFADBv

    If I separate them with a newline between the certificates it will at least start, but I still get the same warning from Firefox:

        This Connection is Untrusted
        You have asked Firefox to connect securely to domain.com, but we can't
        confirm that your connection is secure.

    The concatenation solution is the one given by Globe SSL and the NGINX site, but it doesn't work; I think the bundle is being ignored:

        http://customer.globessl.com/knowledgebase/55/Certificate-Installation--Nginx.html
        http://nginx.org/en/docs/http/configuring_https_servers.html#chains
        http://wiki.nginx.org/NginxHttpSslModule

    If I do openssl s_client -connect down.leechpack.com:443 I get:

        CONNECTED(00000003)
        depth=0 /OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com
        verify error:num=20:unable to get local issuer certificate
        verify return:1
        depth=0 /OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com
        verify error:num=27:certificate not trusted
        verify return:1
        depth=0 /OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com
        verify error:num=21:unable to verify the first certificate
        verify return:1
        ---
        Certificate chain
         0 s:/OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com
           i:/C=RO/O=GLOBE HOSTING CERTIFICATION AUTHORITY/CN=GLOBE SSL Domain Validated CA
         1 s:/C=US/O=Globe Hosting, Inc./OU=GlobeSSL DV Certification Authority/CN=GlobeSSL CA
           i:/C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root
        ---
        Server certificate
        -----BEGIN CERTIFICATE-----
        MIIFQzCCBCugAwIBAgIQRnpCmtwX7z7GTla0QktE6DANBgkqhkiG9w0BAQUFADBl
        MQswCQYDVQQGEwJSTzEuMCwGA1UEChMlR0xPQkUgSE9TVElORyBDRVJUSUZJQ0FU
        SU9OIEFVVEhPUklUWTEmMCQGA1UEAxMdR0xPQkUgU1NMIERvbWFpbiBWYWxpZGF0
        ZWQgQ0EwHhcNMTAwMjExMDAwMDAwWhcNMTEwMjExMjM1OTU5WjCBjTEhMB8GA1UE
        CxMYRG9tYWluIENvbnRyb2wgVmFsaWRhdGVkMSgwJgYDVQQLEx9Qcm92aWRlZCBi
        eSBHbG9iZSBIb3N0aW5nLCBJbmMuMSQwIgYDVQQLExtHbG9iZSBTdGFuZGFyZCBX
        aWxkY2FyZCBTU0wxGDAWBgNVBAMUDyoubGVlY2hwYWNrLmNvbTCCASIwDQYJKoZI
        hvcNAQEBBQADggEPADCCAQoCggEBAKX7jECMlYEtcvqVWQVUpXNxO/VaHELghqy/
        Ml8dOfOXG29ZMZsKUMqS0jXEwd+Bdpm31lBxOALkj8o79hX0tspLMjgtCnreaker
        49y62BcjfguXRFAaiseXTNbMer5lDWiHlf1E7uCoTTiczGqBNfl6qSJlpe4rYBtq
        XxBAiygaNba6Owghuh19+Uj8EICb2pxbJNFfNzU1D9InFdZSVqKHYBem4Cdrtxua
        W4+YONsfLnnfkRQ6LOLeYExHziTQhSavSv9XaCl9Zqzm5/eWbQqLGRpSJoEPY/0T
        GqnmeMIq5M35SWZgOVV10j3pOCS8o0zpp7hMJd2R/HwVaPCLjukCAwEAAaOCAcQw
        ggHAMB8GA1UdIwQYMBaAFB9UlnKtPUDnlln3STFTCWb5DWtyMB0GA1UdDgQWBBT0
        8rPIMr7JDa2Xs5he5VXAvMWArjAOBgNVHQ8BAf8EBAMCBaAwDAYDVR0TAQH/BAIw
        ADAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwVQYDVR0gBE4wTDBKBgsr
        BgEEAbIxAQICGzA7MDkGCCsGAQUFBwIBFi1odHRwOi8vd3d3Lmdsb2Jlc3NsLmNv
        bS9kb2NzL0dsb2JlU1NMX0NQUy5wZGYwRgYDVR0fBD8wPTA7oDmgN4Y1aHR0cDov
        L2NybC5nbG9iZXNzbC5jb20vR0xPQkVTU0xEb21haW5WYWxpZGF0ZWRDQS5jcmww
        dwYIKwYBBQUHAQEEazBpMEEGCCsGAQUFBzAChjVodHRwOi8vY3J0Lmdsb2Jlc3Ns
        LmNvbS9HTE9CRVNTTERvbWFpblZhbGlkYXRlZENBLmNydDAkBggrBgEFBQcwAYYY
        aHR0cDovL29jc3AuZ2xvYmVzc2wuY29tMCkGA1UdEQQiMCCCDyoubGVlY2hwYWNr
        LmNvbYINbGVlY2hwYWNrLmNvbTANBgkqhkiG9w0BAQUFAAOCAQEAB2Y7vQsq065K
        s+/n6nJ8ZjOKbRSPEiSuFO+P7ovlfq9OLaWRHUtJX0sLntnWY1T9hVPvS5xz/Ffl
        w9B8g/EVvvfMyOw/5vIyvHq722fAAC1lWU1rV3ww0ng5bgvD20AgOlIaYBvRq8EI
        5Dxo2og2T1UjDN44GOSWsw5jetvVQ+SPeNPQLWZJS9pNCzFQ/3QDWNPOvHqEeRcz
        WkOTCqbOSZYvoSPvZ3APh+1W6nqiyoku/FCv9otSCtXPKtyVa23hBQ+iuxqIM4/R
        gncnUKASi6KQrWMQiAI5UDCtq1c09uzjw+JaEzAznxEgqftTOmXAJSQGqZGd6HpD
        ZqTjb+WBJQ==
        -----END CERTIFICATE-----
        subject=/OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com
        issuer=/C=RO/O=GLOBE HOSTING CERTIFICATION AUTHORITY/CN=GLOBE SSL Domain Validated CA
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 3313 bytes and written 343 bytes
        ---
        New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
        Server public key is 2048 bit
        Secure Renegotiation IS supported
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol   : TLSv1
            Cipher     : DHE-RSA-AES256-SHA
            Session-ID : 5F9C8DC277A372E28A4684BAE5B311533AD30E251369D144A13DECA3078E067F
            Session-ID-ctx:
            Master-Key : 9B531A75347E6E7D19D95365C1208F2ED37E4004AA8F71FC614A18937BEE2ED9F82D58925E0B3931492AD3D2AA6EFD3B
            Key-Arg    : None
            Start Time : 1288618211
            Timeout    : 300 (sec)
            Verify return code: 21 (unable to verify the first certificate)
        ---
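
    A hedged sketch of the usual repair; the essential point is forcing a newline between each END and BEGIN marker, server certificate first, then the intermediates:

        (cat __domain_com.crt; echo; cat __domain_com.ca-bundle) > bundle.crt

        # nginx then needs (paths assumed):
        #   ssl_certificate     /etc/nginx/bundle.crt;
        #   ssl_certificate_key /etc/nginx/ssl.key;

    The s_client output above is consistent with that: the served chain never includes the "GLOBE SSL Domain Validated CA" intermediate that actually issued the server certificate, which is exactly the "unable to verify the first certificate" condition.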

  • PFSense CSR Generation

    - by ErnieTheGeek
    I'm trying to figure out how to generate a CSR so I can generate and install an SSL cert. Here's a LINK to what I've tried. Granted, that post was for m0n0wall, but I figured openssl is openssl. Here's where I get stuck. When I run this:

        /usr/bin/openssl req -new -key mykey.key -out mycsr.csr -config /usr/local/ssl/openssl.cnf

    I get this:

        error on line -1 of /usr/local/ssl/openssl.cnf
        54934:error:02001002:system library:fopen:No such file or directory:/usr/src/secure/lib/libcrypto/../../../crypto/openssl/crypto/bio/bss_file.c:122:fopen('/usr/local/ssl/openssl.cnf','rb')
        54934:error:2006D080:BIO routines:BIO_new_file:no such file:/usr/src/secure/lib/libcrypto/../../../crypto/openssl/crypto/bio/bss_file.c:125:
        54934:error:0E078072:configuration file routines:DEF_LOAD:no such file:/usr/src/secure/lib/libcrypto/../../../crypto/openssl/crypto/conf/conf_def.c:197:
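
    Every line of that output is the same complaint: /usr/local/ssl/openssl.cnf does not exist. pfSense is FreeBSD-based, where the stock OpenSSL config normally lives at /etc/ssl/openssl.cnf (an assumption worth verifying first); a hedged sketch:

        find / -name openssl.cnf 2>/dev/null
        /usr/bin/openssl req -new -key mykey.key -out mycsr.csr -config /etc/ssl/openssl.cnf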

  • Weird Apache Crash (with Dump) zend_hash_find (), libphp5.so

    - by Jacob84
    To be honest I don't have experience working with Apache. I'm just putting my best intentions into solving this and don't know if I'm going about it right, so any help will be greatly appreciated. We have a PHP page which is throwing the following message in the browser:

        Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data.

    The logs in /var/log/httpd don't seem to help, because it seems Apache is unable to write any information (maybe the error occurs at some stage of the process that makes logging impossible?). I've read about the procedure for getting core dumps out of Apache, and here is the content:

        Reading symbols from /lib64/libgpg-error.so.0...(no debugging symbols found)...done.
        Loaded symbols for /lib64/libgpg-error.so.0
        Reading symbols from /usr/lib64/php/modules/zip.so...(no debugging symbols found)...done.
        Loaded symbols for /usr/lib64/php/modules/zip.so
        Core was generated by `/usr/sbin/httpd'.
        Program terminated with signal 11, Segmentation fault.
        #0  0x00007fb828fff712 in zend_hash_find () from /etc/httpd/modules/libphp5.so
        Missing separate debuginfos, use: debuginfo-install httpd-2.2.15-15.el6.centos.1.x86_64

    I've been looking in the PHP files and haven't found any direct call to zend_hash_find (which seems to be causing the error). I've searched Google but found nothing related. Can somebody please help? Is there any step I can take to learn more? Thanks a lot, as always!
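
    zend_hash_find is internal to the Zend engine, so it won't appear in PHP source files; the crash is inside the PHP runtime itself, often triggered by an extension or opcode cache. A hedged next step is to install the debug symbols the dump asks for and pull a full backtrace from the core (the PHP debuginfo package name is my assumption):

        debuginfo-install httpd-2.2.15-15.el6.centos.1.x86_64 php
        gdb /usr/sbin/httpd /path/to/core
        (gdb) bt full

    The frames above zend_hash_find should name the module worth disabling first.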

  • Kunagi LDAP configuration problems

    - by Willem de Vries
    We recently started with Scrum at our company, and we wanted to use Kunagi to test and see how it works. So I installed the kunagi_0.23.2.deb package that I downloaded from their website on my Ubuntu 11.04 box, running in tomcat6 using openjdk-6-jre. Everything works fine except that I can't get LDAP to work. I have one AD server and one LDAP server at my disposal for testing. For the LDAP server I use the following settings:

        uri:         ldap://192.168.1.11:389
        user:        some_tested_user
        passwd:      the_pass
        DN:          dc=colosa,dc=net
        LDAP Filter: (&(objectClass=user))

    I tested various LDAP filters; I don't know if I have the right one. In any case, I get an error when clicking "test LDAP". The error refers to the DN:

        Server service call error
        Calling service TestLdap failed.
        java.lang.RuntimeException: InvalidNameException: [LDAP: error code 34 - invalid DN]

    With the AD server I get no error while testing, yet I am not able to log in: I get "Login faild" every time. I don't know if this is because of the LDAP filter I entered, but I can't get it to work. I have read http://kunagi.org/iss652.html, which states that I need to create my accounts inside Kunagi before I can log in. I did this, with no effect. So basically my question is: what causes this DN string error (I am sure mine is right), and what LDAP filter should I use? Any help would be highly appreciated.
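
    Error code 34 usually means the bind DN string itself is malformed, and a bare username such as some_tested_user is not a valid DN, which may be the whole problem. It can help to reproduce the bind outside Kunagi with ldapsearch (the user DN below is hypothetical):

        ldapsearch -x -H ldap://192.168.1.11:389 \
            -D "cn=some_tested_user,dc=colosa,dc=net" -w the_pass \
            -b "dc=colosa,dc=net" "(objectClass=user)"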

  • Create App_Data and register Excel application on ASP.NET deployment? (IIS7.5)

    - by Francesco
    I am deploying an ASP.NET MVC3 application in IIS7. I have already deployed other applications, but they never made use of the App_Data folder or any additional component such as the Interop library. I used one-click deployment and the default application pool. When I launch the application I immediately get an error stating:

        [web access]       Sorry, an error occurred while processing your request.
        [browse from IIS7] Could not find a part of the path 'D:\Data\Apps\OppUpdate\App_Data\Test.xlsx'.

    Then I manually added the App_Data folder inside the deployment directory and the application starts regularly. Then, when it comes to the task that uses the Interop library, I get the following error:

        [web access]       Sorry, an error occurred while processing your request.
        [browse from IIS7] Retrieving the COM class factory for component with CLSID
                           {00024500-0000-0000-C000-000000000046} failed due to the following error:
                           80040154 Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)).

    Is there any way to automatically add the App_Data folder when using one-click deploy? How can I register the Interop services? Thank you, Francesco
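
    One-click (Web Deploy) publishing skips folders that contain no project items, so an empty App_Data never reaches the server; a common workaround (a sketch, and the placeholder name is hypothetical) is to keep a dummy file in the project marked as content:

        <!-- in the .csproj -->
        <ItemGroup>
          <Content Include="App_Data\placeholder.txt" />
        </ItemGroup>

    As for the COM error: CLSID {00024500-0000-0000-C000-000000000046} is Excel.Application, and 80040154 normally means Excel itself is not installed (or is the wrong bitness) on the server. Office automation inside IIS is unsupported by Microsoft, so a server-side spreadsheet library is the usual alternative to registering Interop at all.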

  • Issue with IIS...SERVER APPLICATION UNAVAILABLE

    - by SVI
    My web application was running absolutely fine. Just 2 days back, I got an error saying:

        SERVER APPLICATION UNAVAILABLE

    I am pretty certain that nothing was changed on IIS, unless my automatic Windows updates screwed it up completely. My event viewer had zillions of the following errors in the Application category:

        aspnet_wp.exe could not be started. The error code for the failure is C0000005. This error
        can be caused when the worker process account has insufficient rights to read the .NET
        Framework files. Please ensure that the .NET Framework is correctly installed and that the
        ACLs on the installation directory allow access to the configured account.

    I reinstalled IIS. After installing, I ran aspnet_regiis -i for framework v2, and now it throws an error saying:

        The application could not be initialized properly.

    Any ideas what's going on?
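
    Since the event log message explicitly suspects ACLs on the framework files, re-granting the worker-process account and re-registering ASP.NET is a low-risk check (paths are for .NET 2.0; the ASPNET account applies to the IIS 5.x process model that aspnet_wp.exe implies):

        cd %windir%\Microsoft.NET\Framework\v2.0.50727
        aspnet_regiis -i
        aspnet_regiis -ga ASPNET

    aspnet_regiis -ga grants the named account the ACLs ASP.NET needs on its directories and metabase.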

  • Bind9 wont start on ubuntu 9.10

    - by Logikdev
    Ever since I've upgraded to Ubuntu 9.10, bind9 won't start:

        Setting up bind9 (1:9.6.1.dfsg.P1-3ubuntu0.2) ...
         * Starting domain name service... bind9    [fail]
        invoke-rc.d: initscript bind9, action "start" failed.
        dpkg: error processing bind9 (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         bind9
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Any ideas?
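
    The init script hides the real error; running named in the foreground usually surfaces it (a diagnostic sketch, with the Ubuntu default config paths assumed):

        sudo named-checkconf /etc/bind/named.conf
        sudo named -g -u bind

    The -g flag keeps named attached to the terminal and logs to stderr, so a bad zone file or a permissions problem prints directly.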

  • Cannot install passenger with Nginx

    - by Luc
    Hello, I have a Rack application that I want to migrate from Ruby 1.8.7 + Apache + Passenger to Ruby 1.9.1 + Nginx + Passenger. I have made the following all-in-one install script, and it raises an error. Here is the installation script (the basic steps to install everything on a fresh Ubuntu 10.04 Lucid Lynx box):

        # Nginx sources
        cd /tmp
        wget http://nginx.org/download/nginx-0.7.66.tar.gz
        tar xzf nginx-0.7.66.tar.gz
        cd nginx-0.7.66

        # openssl required for SSL/TLS
        sudo apt-get install openssl
        sudo apt-get install libssl-dev

        # Compilation stuff
        sudo apt-get install zlib1g-dev

        # Ruby interpreter 1.9.1
        sudo apt-get install ruby1.9.1 ruby1.9.1-dev rubygems1.9.1 irb1.9.1 ri1.9.1 rdoc1.9.1 \
            build-essential nginx libopenssl-ruby1.9.1

        # Make sure default ruby uses version 1.9.1
        sudo update-alternatives --install /usr/bin/ruby ruby /usr/bin/ruby1.9.1 400 \
            --slave /usr/share/man/man1/ruby.1.gz ruby.1.gz /usr/share/man/man1/ruby1.9.1.1.gz \
            --slave /usr/bin/ri ri /usr/bin/ri1.9.1 \
            --slave /usr/bin/irb irb /usr/bin/irb1.9.1 \
            --slave /usr/bin/rdoc rdoc /usr/bin/rdoc1.9.1
        sudo update-alternatives --config ruby

        # Passenger (rake-0.8.7, fastthread-1.0.7, rack-1.1.0, passenger-2.2.14)
        sudo gem install passenger

        # Activate Passenger in nginx, select option 2 to use the nginx sources downloaded above
        cd /var/lib/gems/1.9.1/gems/passenger-2.2.14/bin
        sudo ./passenger-install-nginx-module

    And this is the error message I got:

        /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/ContentHandler.c
        gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Wunused-function -Wunused-variable -Wunused-value -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I /tmp/pcre-8.00 -I objs -I src/http -I src/http/modules -I src/mail \
            -o objs/addon/nginx/StaticContentHandler.o \
            /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/StaticContentHandler.c
        /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/StaticContentHandler.c: In function ‘passenger_static_content_handler’:
        /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/StaticContentHandler.c:71: error: ‘ngx_http_request_t’ has no member named ‘zero_in_uri’
        make[1]: *** [objs/addon/nginx/StaticContentHandler.o] Error 1
        make[1]: Leaving directory `/tmp/nginx-0.7.66'
        make: *** [build] Error 2
        --------------------------------------------
        It looks like something went wrong
        Please read our Users guide for troubleshooting tips:
        /var/lib/gems/1.9.1/gems/passenger-2.2.14/doc/Users guide Nginx.html

    I do not understand the reason for this error. Is this a compatibility problem? Hope you have some clues :) Thanks a lot, Luc
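
    The zero_in_uri member was removed from ngx_http_request_t in later nginx releases, so Passenger 2.2.14's nginx glue simply predates nginx 0.7.66; it is a compatibility problem, not a mistake in the script. A hedged sketch of the least-effort fix:

        sudo gem update passenger
        cd /var/lib/gems/1.9.1/gems/passenger-*/bin
        sudo ./passenger-install-nginx-module    # or choose option 1 and let it fetch a matching nginx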

  • Disable .htaccess from apache allowoverride none, still reads .htaccess files

    - by John Magnolia
    I have moved all of our .htaccess config into <Directory> blocks and set AllowOverride None in the default and default-ssl. Although after restarting apache it is still reading the .htaccess files. How can I completely turn off reading these files? Update of all files with "AllowOverride" /etc/apache2/mods-available/userdir.conf <IfModule mod_userdir.c> UserDir public_html UserDir disabled root <Directory /home/*/public_html> AllowOverride FileInfo AuthConfig Limit Indexes Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec <Limit GET POST OPTIONS> Order allow,deny Allow from all </Limit> <LimitExcept GET POST OPTIONS> Order deny,allow Deny from all </LimitExcept> </Directory> </IfModule> /etc/apache2/mods-available/alias.conf <IfModule alias_module> # # Aliases: Add here as many aliases as you need (with no limit). The format is # Alias fakename realname # # Note that if you include a trailing / on fakename then the server will # require it to be present in the URL. So "/icons" isn't aliased in this # example, only "/icons/". If the fakename is slash-terminated, then the # realname must also be slash terminated, and if the fakename omits the # trailing slash, the realname must also omit it. # # We include the /icons/ alias for FancyIndexed directory listings. If # you do not use FancyIndexing, you may comment this out. # Alias /icons/ "/usr/share/apache2/icons/" <Directory "/usr/share/apache2/icons"> Options Indexes MultiViews AllowOverride None Order allow,deny Allow from all </Directory> </IfModule> /etc/apache2/httpd.conf # # Directives to allow use of AWStats as a CGI # Alias /awstatsclasses "/usr/share/doc/awstats/examples/wwwroot/classes/" Alias /awstatscss "/usr/share/doc/awstats/examples/wwwroot/css/" Alias /awstatsicons "/usr/share/doc/awstats/examples/wwwroot/icon/" ScriptAlias /awstats/ "/usr/share/doc/awstats/examples/wwwroot/cgi-bin/" # # This is to permit URL access to scripts/files in AWStats directory. # <Directory "/usr/share/doc/awstats/examples/wwwroot"> Options None AllowOverride None Order allow,deny Allow from all </Directory> Alias /awstats-icon/ /usr/share/awstats/icon/ <Directory /usr/share/awstats/icon> Options None AllowOverride None Order allow,deny Allow from all </Directory> /etc/apache2/sites-available/default-ssl <IfModule mod_ssl.c> <VirtualHost _default_:443> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined # SSL Engine Switch: # Enable/Disable SSL for this virtual host. SSLEngine on # A self-signed (snakeoil) certificate can be created by installing # the ssl-cert package. See # /usr/share/doc/apache2.2-common/README.Debian.gz for more info. # If both key and certificate are stored in the same file, only the # SSLCertificateFile directive is needed. 
SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt # Certificate Authority (CA): # Set the CA certificate verification path where to find CA # certificates for client authentication or alternatively one # huge file containing all of them (file must be PEM encoded) # Note: Inside SSLCACertificatePath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCACertificatePath /etc/ssl/certs/ #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt # Certificate Revocation Lists (CRL): # Set the CA revocation path where to find CA CRLs for client # authentication or alternatively one huge file containing all # of them (file must be PEM encoded) # Note: Inside SSLCARevocationPath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCARevocationPath /etc/apache2/ssl.crl/ #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl # Client Authentication (Type): # Client certificate verification type and depth. Types are # none, optional, require and optional_no_ca. Depth is a # number which specifies how deeply to verify the certificate # issuer chain before deciding the certificate is not valid. #SSLVerifyClient require #SSLVerifyDepth 10 # Access Control: # With SSLRequire you can do per-directory access control based # on arbitrary complex boolean expressions containing server # variable checks and other lookup directives. The syntax is a # mixture between C and Perl. See the mod_ssl documentation # for more details. #<Location /> #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \ # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \ # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \ # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \ # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \ # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/ #</Location> # SSL Engine Options: # Set various options for the SSL engine. # o FakeBasicAuth: # Translate the client X.509 into a Basic Authorisation. This means that # the standard Auth/DBMAuth methods can be used for access control. The # user name is the `one line' version of the client's X.509 certificate. # Note that no password is obtained from the user. Every entry in the user # file needs this password: `xxj31ZMTZzkVA'. # o ExportCertData: # This exports two additional environment variables: SSL_CLIENT_CERT and # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the # server (always existing) and the client (only existing when client # authentication is used). This can be used to import the certificates # into CGI scripts. # o StdEnvVars: # This exports the standard SSL/TLS related `SSL_*' environment variables. # Per default this exportation is switched off for performance reasons, # because the extraction step is an expensive operation and is usually # useless for serving static content. So one usually enables the # exportation for CGI and SSI requests only. 
# o StrictRequire: # This denies access when "SSLRequireSSL" or "SSLRequire" applied even # under a "Satisfy any" situation, i.e. when it applies access is denied # and no other module can change it. # o OptRenegotiate: # This enables optimized SSL connection renegotiation handling when SSL # directives are used in per-directory context. #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory /usr/lib/cgi-bin> SSLOptions +StdEnvVars </Directory> # SSL Protocol Adjustments: # The safe and default but still SSL/TLS standard compliant shutdown # approach is that mod_ssl sends the close notify alert but doesn't wait for # the close notify alert from client. When you need a different shutdown # approach you can use one of the following variables: # o ssl-unclean-shutdown: # This forces an unclean shutdown when the connection is closed, i.e. no # SSL close notify alert is send or allowed to received. This violates # the SSL/TLS standard but is needed for some brain-dead browsers. Use # this when you receive I/O errors because of the standard approach where # mod_ssl sends the close notify alert. # o ssl-accurate-shutdown: # This forces an accurate shutdown when the connection is closed, i.e. a # SSL close notify alert is send and mod_ssl waits for the close notify # alert of the client. This is 100% SSL/TLS standard compliant, but in # practice often causes hanging connections with brain-dead browsers. Use # this only for browsers where you know that their SSL implementation # works correctly. # Notice: Most problems of broken clients are also related to the HTTP # keep-alive facility, so you usually additionally want to disable # keep-alive for those clients, too. Use variable "nokeepalive" for this. # Similarly, one has to force some clients to use HTTP/1.0 to workaround # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and # "force-response-1.0" for this. BrowserMatch "MSIE [2-6]" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 # MSIE 7 and newer should be able to use keepalive BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown </VirtualHost> </IfModule> /etc/apache2/sites-available/default <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options -Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> Alias /delboy /usr/share/phpmyadmin <Directory /usr/share/phpmyadmin> # Restrict phpmyadmin access Order Deny,Allow Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> /etc/apache2/conf.d/security # # Disable access to the entire file system except for the directories that # are explicitly allowed later. # # This currently breaks the configurations that come with some web application # Debian packages. 
# #<Directory /> # AllowOverride None # Order Deny,Allow # Deny from all #</Directory> # Changing the following options will not really affect the security of the # server, but might make attacks slightly more difficult in some cases. # # ServerTokens # This directive configures what you return as the Server HTTP response # Header. The default is 'Full' which sends information about the OS-Type # and compiled in modules. # Set to one of: Full | OS | Minimal | Minor | Major | Prod # where Full conveys the most information, and Prod the least. # #ServerTokens Minimal ServerTokens OS #ServerTokens Full # # Optionally add a line containing the server version and virtual host # name to server-generated pages (internal error documents, FTP directory # listings, mod_status and mod_info output etc., but not CGI generated # documents or custom error documents). # Set to "EMail" to also include a mailto: link to the ServerAdmin. # Set to one of: On | Off | EMail # #ServerSignature Off ServerSignature On # # Allow TRACE method # # Set to "extended" to also reflect the request body (only for testing and # diagnostic purposes). # # Set to one of: On | Off | extended # TraceEnable Off #TraceEnable On /etc/apache2/apache2.conf # # Based upon the NCSA server configuration files originally by Rob McCool. # # This is the main Apache server configuration file. It contains the # configuration directives that give the server its instructions. # See http://httpd.apache.org/docs/2.2/ for detailed information about # the directives. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # The configuration directives are grouped into three basic sections: # 1. Directives that control the operation of the Apache server process as a # whole (the 'global environment'). # 2. Directives that define the parameters of the 'main' or 'default' server, # which responds to requests that aren't handled by a virtual host. # These directives also provide default values for the settings # of all virtual hosts. # 3. Settings for virtual hosts, which allow Web requests to be sent to # different IP addresses or hostnames and have them handled by the # same Apache server process. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so "foo.log" # with ServerRoot set to "/etc/apache2" will be interpreted by the # server as "/etc/apache2/foo.log". # ### Section 1: Global Environment # # The directives in this section affect the overall operation of Apache, # such as the number of concurrent requests it can handle or where it # can find its configuration files. # # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # NOTE! If you intend to place this on an NFS (or otherwise network) # mounted filesystem then please read the LockFile documentation (available # at <URL:http://httpd.apache.org/docs/2.2/mod/mpm_common.html#lockfile>); # you will save yourself a lot of trouble. # # Do NOT add a slash at the end of the directory path. # #ServerRoot "/etc/apache2" # # The accept serialization lock file MUST BE STORED ON A LOCAL DISK. 
# LockFile ${APACHE_LOCK_DIR}/accept.lock # # PidFile: The file in which the server should record its process # identification number when it starts. # This needs to be set in /etc/apache2/envvars # PidFile ${APACHE_PID_FILE} # # Timeout: The number of seconds before receives and sends time out. # Timeout 300 # # KeepAlive: Whether or not to allow persistent connections (more than # one request per connection). Set to "Off" to deactivate. # KeepAlive On # # MaxKeepAliveRequests: The maximum number of requests to allow # during a persistent connection. Set to 0 to allow an unlimited amount. # We recommend you leave this number high, for maximum performance. # MaxKeepAliveRequests 100 # # KeepAliveTimeout: Number of seconds to wait for the next request from the # same client on the same connection. # KeepAliveTimeout 4 ## ## Server-Pool Size Regulation (MPM specific) ## # prefork MPM # StartServers: number of server processes to start # MinSpareServers: minimum number of server processes which are kept spare # MaxSpareServers: maximum number of server processes which are kept spare # MaxClients: maximum number of server processes allowed to start # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 150 MaxRequestsPerChild 500 </IfModule> # worker MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadLimit: ThreadsPerChild can be changed to this maximum value during a # graceful restart. ThreadLimit can only be changed by stopping # and starting Apache. # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_worker_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # event MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_event_module> StartServers 2 MaxClients 150 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> # These need to be set in /etc/apache2/envvars User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} # # AccessFileName: The name of the file to look for in each directory # for additional configuration directives. See also the AllowOverride # directive. # AccessFileName .htaccess # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> # # DefaultType is the default MIME type the server will use for a document # if it cannot otherwise determine one, such as from filename extensions. # If your server contains mostly text or HTML documents, "text/plain" is # a good value. 
If most of your content is binary, such as applications # or images, you may want to use "application/octet-stream" instead to # keep browsers from trying to display binary files as though they are # text. # DefaultType text/plain # # HostnameLookups: Log the names of clients or just their IP addresses # e.g., www.apache.org (on) or 204.62.129.132 (off). # The default is off because it'd be overall better for the net if people # had to knowingly turn this feature on, since enabling it means that # each client request will result in AT LEAST one lookup request to the # nameserver. # HostnameLookups Off # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog ${APACHE_LOG_DIR}/error.log # # LogLevel: Control the number of messages logged to the error_log. # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel warn # Include module configuration: Include mods-enabled/*.load Include mods-enabled/*.conf # Include all the user configurations: Include httpd.conf # Include ports listing Include ports.conf # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i # LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent # Include of directories ignores editors' and dpkg's backup files, # see README.Debian for details. # Include generic snippets of statements Include conf.d/ # Include the virtual host configurations: Include sites-enabled/
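
    For reference on the question itself: .htaccess processing is governed by AccessFileName (left at its default of .htaccess in the apache2.conf above) plus the merged AllowOverride of the longest-matching <Directory>. AllowOverride None at <Directory /> does not help a path whose own <Directory> block re-enables overrides; the userdir.conf above does exactly that for /home/*/public_html with AllowOverride FileInfo AuthConfig Limit Indexes. A hedged minimal change:

        # in mods-available/userdir.conf (or wherever a block re-enables overrides)
        <Directory /home/*/public_html>
            AllowOverride None
        </Directory>

        # belt-and-braces in apache2.conf: point lookups at a name that never exists
        AccessFileName .htaccess.disabled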

  • How to remove all Couchdb versions in Ubuntu 10.04 (server)? ( after multiple installs )

    - by DjangoRocks
    Hi all, I have done multiple installs of CouchDB using:

        sudo aptitude install couchdb
        sudo apt-get install couchdb

    and more recently based on the instructions found at http://wiki.apache.org/couchdb/Installing_on_Ubuntu May I know how to uninstall or remove all the above installations? Best Regards.

    UPDATE: I've tried running the following commands:

        apt-get remove couchdb
        apt-get purge couchdb

    but received the following errors:

        (Reading database ... 39814 files and directories currently installed.)
        Removing couchdb ...
        invoke-rc.d: initscript couchdb, action "stop" failed.
        dpkg: error processing couchdb (--remove):
         subprocess installed pre-removal script returned error exit status 1
        invoke-rc.d: initscript couchdb, action "start" failed.
        dpkg: error while cleaning up:
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         couchdb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    May I know how to fix this? On issuing the command dpkg -l | grep couchdb, I received the following response:

        rF  couchdb      0.10.0-1ubuntu2   RESTful document oriented database, system D
        iF  couchdb-bin  0.10.0-1ubuntu2   RESTful document oriented database, programs

    How do I uninstall CouchDB? I think there's some file corruption?
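
    The removal fails because the package's prerm script tries to stop a service that is already broken. A common recovery (hedged: it edits dpkg's maintainer scripts directly, so keep copies first) is to neuter the failing script and force the removal:

        sudo sh -c 'printf "#!/bin/sh\nexit 0\n" > /var/lib/dpkg/info/couchdb.prerm'
        sudo dpkg --remove --force-remove-reinstreq couchdb
        sudo apt-get purge couchdb couchdb-bin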

  • A faulty Caviar Blue hard drive?

    - by Glister
    We have a small "homemade" server running fully updated Debian Wheezy (amd64), with one hard drive installed: a WDC WD6400AAKS. The motherboard is an ASUS M4N68T V2. The usual load: CPU at an average of 20%; each week about 50GB of additional space is occupied, about 47GB of uploaded files and 3GB of MySQL data. I'm afraid that the hard drive may be about to fail. I saw Pre-fail in a few places when I ran:

        root@SERVER:/tmp# smartctl -a /dev/sda
        smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-4-amd64] (local build)
        Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

        === START OF INFORMATION SECTION ===
        Model Family:     Western Digital Caviar Blue Serial ATA
        Device Model:     WDC WD6400AAKS-XXXXXXX
        Serial Number:    WD-XXXXXXXXXXXXXXXXXXX
        LU WWN Device Id: 5 0014ee XXXXXXXXXXXXX
        Firmware Version: 01.03B01
        User Capacity:    640,135,028,736 bytes [640 GB]
        Sector Size:      512 bytes logical/physical
        Device is:        In smartctl database [for details use: -P show]
        ATA Version is:   8
        ATA Standard is:  Exact ATA specification draft version not indicated
        Local Time is:    Mon Oct 28 18:55:27 2013 UTC
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

        === START OF READ SMART DATA SECTION ===
        SMART overall-health self-assessment test result: PASSED

        General SMART Values:
        Offline data collection status:  (0x85) Offline data collection activity
                                                was aborted by an interrupting command from host.
                                                Auto Offline Data Collection: Enabled.
        Self-test execution status:      ( 247) Self-test routine in progress...
                                                70% of test remaining.
        Total time to complete Offline
        data collection:                (11580) seconds.
        Offline data collection
        capabilities:                    (0x7b) SMART execute Offline immediate.
                                                Auto Offline data collection on/off support.
                                                Suspend Offline collection upon new command.
                                                Offline surface scan supported.
                                                Self-test supported.
                                                Conveyance Self-test supported.
                                                Selective Self-test supported.
        SMART capabilities:            (0x0003) Saves SMART data before entering
                                                power-saving mode.
                                                Supports SMART auto save timer.
        Error logging capability:        (0x01) Error logging supported.
                                                General Purpose Logging supported.
        Short self-test routine
        recommended polling time:        (   2) minutes.
        Extended self-test routine
        recommended polling time:        ( 136) minutes.
        Conveyance self-test routine
        recommended polling time:        (   5) minutes.
        SCT capabilities:              (0x303f) SCT Status supported.
                                                SCT Error Recovery Control supported.
                                                SCT Feature Control supported.
                                                SCT Data Table supported.

        SMART Attributes Data Structure revision number: 16
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME           FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate      0x002f 200   200   051    Pre-fail Always  -           0
          3 Spin_Up_Time             0x0027 157   146   021    Pre-fail Always  -           5108
          4 Start_Stop_Count         0x0032 098   098   000    Old_age  Always  -           2968
          5 Reallocated_Sector_Ct    0x0033 200   200   140    Pre-fail Always  -           0
          7 Seek_Error_Rate          0x002e 200   200   051    Old_age  Always  -           0
          9 Power_On_Hours           0x0032 079   079   000    Old_age  Always  -           15445
         10 Spin_Retry_Count         0x0032 100   100   051    Old_age  Always  -           0
         11 Calibration_Retry_Count  0x0032 100   100   051    Old_age  Always  -           0
         12 Power_Cycle_Count        0x0032 098   098   000    Old_age  Always  -           2950
        192 Power-Off_Retract_Count  0x0032 200   200   000    Old_age  Always  -           426
        193 Load_Cycle_Count         0x0032 200   200   000    Old_age  Always  -           2968
        194 Temperature_Celsius      0x0022 111   095   000    Old_age  Always  -           36
        196 Reallocated_Event_Count  0x0032 200   200   000    Old_age  Always  -           0
        197 Current_Pending_Sector   0x0032 200   200   000    Old_age  Always  -           0
        198 Offline_Uncorrectable    0x0030 200   200   000    Old_age  Offline -           0
        199 UDMA_CRC_Error_Count     0x0032 200   160   000    Old_age  Always  -           21716
        200 Multi_Zone_Error_Rate    0x0008 200   200   051    Old_age  Offline -           0

        SMART Error Log Version: 1
        No Errors Logged

        SMART Self-test log structure revision number 1
        Num  Test_Description  Status                    Remaining  LifeTime(hours)  LBA_of_first_error
        # 1  Short offline     Completed without error   00%        15444            -

        Error SMART Read Selective Self-Test Log failed: scsi error aborted command
        Smartctl: SMART Selective Self Test Log Read Failed
        root@SERVER:/tmp#

    In one tutorial I read that Pre-fail is an indication of coming failure; in another tutorial I read that it is not true. Can you guys help me decode the output of smartctl? It would also be nice to hear suggestions on what I should do to ensure data integrity (about 50GB of new data each week, up to 2TB for the whole period I'm interested in). Maybe I will go with 2x2TB Caviar Black in RAID4?
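
    Two observations that may help with reading this output, offered as general smartctl lore rather than a verdict on this drive: TYPE=Pre-fail labels an attribute's class, not its state; an attribute only signals imminent failure when its normalized VALUE falls to THRESH or WHEN_FAILED is set, and none do here. The figure that stands out is the raw UDMA_CRC_Error_Count of 21716, which usually points at the SATA cable or controller rather than the platters. A quick follow-up sequence:

        smartctl -t long /dev/sda        # full surface self-test (~136 min per the report above)
        smartctl -l selftest /dev/sda    # read the result once it finishes
        smartctl -A /dev/sda             # check whether attribute 199's raw value still climbs after reseating the cable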

  • How Do I Restrict Repository Access via WebSVN?

    - by kaybenleroll
    I have multiple Subversion repositories which are served up through Apache 2.2 and WebDAV. They are all located in a central place, and I used this debian-administration.org article as the basis (I dropped the use of the database authentication for a simple htpasswd file, though). Since then, I have also started using WebSVN. My issue is that not all users on the system should be able to access the different repositories, and the default setup of WebSVN is to allow anyone who can authenticate. According to the WebSVN documentation, the best way around this is to use Subversion's path access system, so I looked to create this, using the AuthzSVNAccessFile directive. When I do this though, I keep getting "403 Forbidden" messages. My files look like the following. I have default policy settings in a file:

        <Location /svn/>
            DAV svn
            SVNParentPath /var/lib/svn/repository
            Order deny,allow
            Deny from all
        </Location>

    Each repository gets a policy file like below:

        <Location /svn/sysadmin/>
            Include /var/lib/svn/conf/default_auth.conf
            AuthName "Repository for sysadmin"
            require user joebloggs jimsmith mickmurphy
        </Location>

    The default_auth.conf file contains this:

        SVNParentPath /var/lib/svn/repository
        AuthType basic
        AuthUserFile /var/lib/svn/conf/.dav_svn.passwd
        AuthzSVNAccessFile /var/lib/svn/conf/svnaccess.conf

    I am not fully sure why I need the second SVNParentPath in default_auth.conf, but I just added that today as I was getting error messages as a result of adding the AuthzSVNAccessFile directive. With a totally permissive access file:

        [/]
        joebloggs = rw

    the system worked fine (and was essentially unchanged), but as soon as I start trying to add any kind of restriction such as:

        [sysadmin:/]
        joebloggs = rw

    instead, I get the 'Permission denied' errors again. The log file entries are:

        [Thu May 28 10:40:17 2009] [error] [client 89.100.219.180] Access denied: 'joebloggs' GET websvn:/
        [Thu May 28 10:40:20 2009] [error] [client 89.100.219.180] Access denied: 'joebloggs' GET svn:/sysadmin

    What do I need to do to get this to work? Have I configured Apache wrong, or is my understanding of the svnaccess.conf file incorrect? If I am going about this the wrong way, I have no particular attachment to my overall approach, so feel free to offer alternatives as well.

    UPDATE (20090528-1600): I attempted to implement this answer, but I still cannot get it to work properly. I know most of the configuration is correct, as I have added:

        [/]
        joebloggs = rw

    at the start, and 'joebloggs' then has all the correct access. When I try to go repository-specific though, doing something like:

        [/]
        joebloggs = rw

        [sysadmin:/]
        mickmurphy = rw

    then I get a permission denied error for mickmurphy (joebloggs still works), with an error similar to what I already had previously:

        [Thu May 28 10:40:20 2009] [error] [client 89.100.219.180] Access denied: 'mickmurphy' GET svn:/sysadmin

    Also, I forgot to explain previously that all my repositories are underneath /var/lib/svn/repository.

    UPDATE (20090529-1245): Still no luck getting this to work, but all the signs seem to point to the issue being with path-access control in Subversion not working properly. My assumption is that I have not conf
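
    When chasing this kind of 403 it can help to take WebSVN out of the loop and exercise mod_dav_svn/mod_authz_svn directly (the URL shape is assumed from the <Location> blocks above):

        svn ls http://yourserver/svn/sysadmin/ --username mickmurphy

    If that fails too, the problem lives in the Apache/mod_authz_svn layer rather than in WebSVN's integration with the path-based access file.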

  • mysql is not connecting after data directory change

    - by user123827
    I've changed the data directory in /etc/my.cnf:

        datadir=/data/mysql
        socket=/data/mysql/mysql.sock

    I also moved the mysql folder from /var/lib/mysql/ to /data/mysql. Now when I connect to mysql I get the following error:

        [root@youradstats-copy mysql]# mysql
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)

    Also, when I look at /var/log/mysqld.log I see the following messages:

        InnoDB: Setting log file /data/mysql/ib_logfile0 size to 512 MB
        InnoDB: Database physically writes the file full: wait...
        InnoDB: Progress in MB: 100 200 300 400 500
        120704  7:43:31  InnoDB: Log file /data/mysql/ib_logfile1 did not exist: new to be created
        InnoDB: Setting log file /data/mysql/ib_logfile1 size to 512 MB
        InnoDB: Database physically writes the file full: wait...
        InnoDB: Progress in MB: 100 200 300 400 500
        InnoDB: Cannot initialize created log files because
        InnoDB: data files are corrupt, or new data files were
        InnoDB: created when the database was started previous
        InnoDB: time but the database was not shut down
        InnoDB: normally after that.
        120704  7:43:36 [ERROR] Plugin 'InnoDB' init function returned error.
        120704  7:43:36 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.

    I shut down mysql properly before making these changes and then started it properly, but I don't know why I'm getting these messages. Please help me solve this issue: I have changed the socket path in my.cnf, but the client is still pointing to the old path.
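
    Two separate things seem to be going on, so a hedged sketch of both fixes. First, the mysql command-line client reads its socket from the [client] section, not [mysqld], and that still defaults to the old path:

        # /etc/my.cnf - point the client at the relocated socket
        [client]
        socket=/data/mysql/mysql.sock

    Second, the InnoDB messages say the new datadir was brought up without the original ib_logfile0/ib_logfile1 beside ibdata1, so the server created fresh ones that no longer match the data files. With mysqld stopped, copying the original log files over from /var/lib/mysql (only after verifying a backup) is the usual remedy.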

  • Problems starting MySQL on Mac OS X

    - by Jon
    I am not able to start the MySQL server on Mac OS X 10.4.11. MySQL was installed using MacPorts and was running fine until it suddenly died without any obvious reason. When running mysql, I get the error message:

        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/opt/local/var/run/mysql5/mysqld.sock' (2)

    If I try to start MySQL manually, I get the following error message:

        sudo /opt/local/share/mysql5/mysql/mysql.server start
        Starting MySQL/opt/local/share/mysql5/mysql/mysql.server: line 159: kill: (636) - No such process
        ERROR!

    In /etc/mysql/my.cnf I have:

        socket = __PREFIX/var/run/mysqld/mysqld.sock

    But the path /opt/local/var/run/mysqld/ does not exist on my system. I tried to change the socket path to __PREFIX/var/run/mysql5/mysqld.sock (which is where the socket is located). Unfortunately, this did not help either. Owner and permissions for /opt/local/var/run/mysql5/ are correctly set. Any suggestions on how to start MySQL again? Thanks for your advice.
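
    The "kill: (636) - No such process" from mysql.server usually means a stale PID file left over from the crash; checking for it and reading the host-named .err log in the data directory generally reveals the real startup error (the paths below are MacPorts defaults, so treat them as assumptions):

        ls /opt/local/var/db/mysql5/*.pid
        tail -n 50 /opt/local/var/db/mysql5/*.err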

  • VPN not connecting after resuming from standby on Vista home premium

    - by joe
    Hello, I have Windows Vista Home Premium on my laptop, and a Windows VPN set up as well. All of a sudden, my VPN doesn't work when my PC resumes from standby mode. I tried all options, but I still get this error:

        CoId={5B82082D-9236-4AF9-900B-4E8072341C76}: The user workgroup\user dialed a connection
        named VPNName which has failed. The error code returned on failure is 0.

    I am also getting the following error when I check the event logs:

        The user workgroup\user dialed a connection named VpnName which has failed.
        The error code returned on failure is 800.

    Please help.
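
    Error 800 is the generic "VPN server unreachable" code for PPTP connections, and after a resume it frequently means the network stack or an intermediate device hasn't fully come back yet. A quick reachability check before redialing (the server name is hypothetical):

        ping vpn.example.com
        telnet vpn.example.com 1723

    PPTP also needs GRE (IP protocol 47) passed end to end, so a firewall or home router that lost state across standby is a usual suspect.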

  • MySQL 5.6 won't start on OS X - ambiguous option

    - by MaticPetek
    I would like to try MySQL 5.6 on my machine, but I cannot start it. I always get this error:

        [ERROR] /usr/local/mysql-5.6.5-m8-osx10.6-x86/bin/mysqld: ambiguous option '--log=/var/log/mysqld.log' (log-bin, log_slave_updates)

    my.cnf:

        [mysqld]
        pid-file=/usr/local/mysql-5.6.5-m8-osx10.6-x86/mysql.pid
        log-error=/usr/local/mysql-5.6.5-m8-osx10.6-x86/data/mysql-error.log
        log-slow-queries=/usr/local/mysql-5.6.5-m8-osx10.6-x86/data/mysql-slowquery.log
        log-bin=/usr/local/mysql-5.6.5-m8-osx10.6-x86/data/mysql-bin.log
        general_log_file=/usr/local/mysql-5.6.5-m8-osx10.6-x86/data/mysql-general_log_file.log
        log=/usr/local/mysql-5.6.5-m8-osx10.6-x86/data/mysql.log

    I tried setting the log and log-bin parameters in my.cnf and also as start parameters for mysqld, but with no luck. Any idea what I can do? Thank you.

    My environment: OS X 10.6.8, mysql-5.6.5-m8-osx10.6-x86 (not the _x64 version). Note: I'm also running MySQL 5.5 on this machine (different port and socket). I also tried to stop that instance, but I get the same error.
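
    The old --log option (the general query log) was removed in MySQL 5.6, so the server now prefix-matches "log" against the surviving log-bin and log_slave_updates options, hence "ambiguous". log-slow-queries was removed as well. A sketch of the 5.6 spellings for the same my.cnf:

        [mysqld]
        general_log=1
        general_log_file=/usr/local/mysql-5.6.5-m8-osx10.6-x86/data/mysql.log
        slow_query_log=1
        slow_query_log_file=/usr/local/mysql-5.6.5-m8-osx10.6-x86/data/mysql-slowquery.log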

  • How to repair a damage transaction log file for Exchange 2003

    - by Markus Larsson
    Hi! Yesterday we had a power failure and the UPS did not work (it had worked perfectly before). Everything seemed to be OK when I started all the servers again, except for the mail: when I try to mount the store I get the following message: "The database files in this store are corrupted."

        Server:             Exchange 2003 running on a Small Business Server
        Latest full backup: one week old
        Backup program:     Backup Exec 9.0

    This is what I have done:

        1. Copied every file in the MDBDATA folder (edb, stm, log)
        2. Ran eseutil /d for priv1.edb
        3. Ran eseutil /p for priv1.edb (took seven hours)
        4. Ran isinteg -fix -test alltests; now it breaks down. isinteg fails with the following error:

               Isinteg cannot initiate verification process.
               Please review the log file for more information.

           The problem is that there is no log file created.

        5. Giving up on this route, I decided to do a restore from the backup. It fails with the following error:

               Unable to read the header of logfile E00.log. Error -501

           and the error:

               Information Store (5976) Callback function call ErrESECBRestoreComplete ended
               with error 0xC80001F5 The log file is damaged.

    My conclusion is that E00.log is damaged, so how can I repair it so that I can restore the database? Or should I give up and try some other route?
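
    A hedged note on why it breaks where it does: after an eseutil /p hard repair, the database no longer matches the existing log stream, so both isinteg and a restore that tries to replay E00.log can fail. The usual last-resort sequence (Microsoft treats /p-repaired stores as temporary, so plan to move mailboxes to a new store afterwards) is to stop repairing the log and let ESE start a fresh generation:

        eseutil /mh priv1.edb        # confirm the repaired store reads "State: Clean Shutdown"
        eseutil /d priv1.edb         # defragment after the hard repair
        isinteg -s SERVERNAME -fix -test alltests

    with the old E00*.log files moved out of MDBDATA first (keep copies elsewhere).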

  • How can I connect a Windows 8 PC to a Samba domain

    - by Paul
    I am using Samba 3, and want to join my Windows 8 PC to the Samba domain. Windows 8 cannot join out of the box, so I added the following registry entries:

        HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters
            DWORD DomainCompatibilityMode   = 1
            DWORD DNSNameResolutionRequired = 0

    Now it talks to the Samba server OK; however, I get the following error (screenshot not included in this excerpt), and I notice that the machine name created on the Samba server does not match my PC's name:

        win-8jq3fg1n74e$:x:30003:30003:Machine:/var/lib/nobody:/bin/false

    It is like it is using an internal name. The following is the error in the smb.log:

        [2012/10/21 14:26:16.099520, 0] passdb/pdb_interface.c:348(pdb_default_create_user)
          _samr_create_user: Running the command `/usr/sbin/useradd -c Machine -d /var/lib/nobody -s /bin/false win-8jq3fg1n74e$' gave 9
        [2012/10/21 14:26:28.143224, 0] lib/util_sock.c:474(read_fd_with_timeout)
        [2012/10/21 14:26:28.143420, 0] lib/util_sock.c:1441(get_peer_addr_internal)
          getpeername failed. Error was Transport endpoint is not connected
          read_fd_with_timeout: client 0.0.0.0 read error = Connection reset by peer
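
    useradd exit code 9 means the account name is already taken, and WIN-8JQ3FG1N74E looks like the machine name Windows auto-generated at install; Samba records whatever NetBIOS name the client presents. A hedged check of both sides:

        hostname                              # on the PC; rename it in System Properties if it is still WIN-...
        sudo pdbedit -L | grep -i '\$'        # machine accounts Samba already knows about
        sudo smbpasswd -x 'win-8jq3fg1n74e$'  # hedged: remove the stale machine account before re-joining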

  • Fix Corrupted Ruby in Mac OS X Lion

    - by luckyb56
    I screwed up my Ruby by executing the command:

        sudo easy_install pip > /usr/bin/ruby -e "$(/usr/bin/curl -fksSL https://raw.github.com/mxcl/homebrew/master/Library/Contributions/install_homebrew.rb)"

    It showed:

        error: Couldn't find index page for '-e' (maybe misspelled?)
        No local packages or download links found for -e
        error: Could not find suitable distribution for Requirement.parse('-e')

    After that, when I tried to install Brew with:

        /usr/bin/ruby -e "$(/usr/bin/curl -fksSL https://raw.github.com/mxcl/homebrew/master/Library/Contributions/install_homebrew.rb)"

    it showed errors which I can't make sense of:

        /usr/bin/ruby: line 1: Searching: command not found
        /usr/bin/ruby: line 2: Best: command not found
        /usr/bin/ruby: line 3: Processing: command not found
        Usage: pip COMMAND [OPTIONS]
        pip: error: No command by the name pip 1.1 (maybe you meant "pip install 1.1")
        /usr/bin/ruby: line 5: Installing: command not found
        /usr/bin/ruby: line 6: Installing: command not found
        /usr/bin/ruby: line 8: Using: command not found
        /usr/bin/ruby: line 9: Processing: command not found
        /usr/bin/ruby: line 10: Finished: command not found
        /usr/bin/ruby: line 11: Searching: command not found
        /usr/bin/ruby: line 12: Reading: command not found
        /usr/bin/ruby: line 13: syntax error near unexpected token `('
        /usr/bin/ruby: line 13: `Scanning index of all packages (this may take a while)'

    Can this be fixed?
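
    What happened: the > in the first command redirected easy_install's log output into /usr/bin/ruby, replacing the Ruby binary with plain text; the "Searching: command not found" lines are the shell trying to execute that log. A hedged recovery sketch for OS X Lion, assuming the system framework copy is intact (verify before copying anything over):

        file /usr/bin/ruby    # will report text, not a Mach-O executable
        sudo cp /System/Library/Frameworks/Ruby.framework/Versions/Current/usr/bin/ruby /usr/bin/ruby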

  • bind9 dlz/mysql at ubuntu segfault libmysqlclient.so

    - by Theos
    I have a big problem. I installed the BIND9 nameserver on three different computers: two running Ubuntu 10.04.4 LTS, and one running Ubuntu 11.10. I compiled versions 9.7.0, 9.7.3 and 9.9.0 with this method:

        ./configure --prefix=/usr --sysconfdir=/etc/bind --localstatedir=/var \
            --mandir=/usr/share/man --infodir=/usr/share/info \
            --enable-threads --enable-largefile --with-libtool --enable-shared --enable-static \
            --with-openssl=/usr --with-gssapi=/usr --with-gnu-ld \
            --with-dlz-mysql=yes --with-dlz-bdb=no \
            --with-dlz-filesystem=yes --with-geoip=/usr
        make
        make install

    After setting up DLZ/MySQL, the BIND server works perfectly for 5-30 minutes; after that I get a segfault. I work around the problem temporarily with a simple process watchdog that restarts named if it has stopped, but this is not a good idea in the long term. My log output is:

        Apr 13 19:33:51 dnsvm kernel: [    8.088696] eth0: link up
        Apr 13 19:33:58 WATCHDOG: named not running. Restarting
        Apr 13 19:35:08 dnsvm kernel: [   87.082572] named[1027]: segfault at 88 ip b71c4291 sp b5adfe30 error 4 in libmysqlclient.so.16.0.0[b714e000+1aa000]
        Apr 13 19:35:08 WATCHDOG: named not running. Restarting
        Apr 13 19:35:08 dnsvm kernel: [   87.457510] named[1423]: segfault at 68 ip b71d6122 sp b52f0a40 error 4 in libmysqlclient.so.16.0.0[b7160000+1aa000]
        Apr 13 19:35:09 WATCHDOG: named not running. Restarting
        Apr 13 19:41:56 dnsvm kernel: [  494.838206] named[1448]: segfault at 88 ip b731c291 sp b5436e30 error 4 in libmysqlclient.so.16.0.0[b72a6000+1aa000]
        Apr 13 19:41:57 WATCHDOG: named not running. Restarting
        Apr 13 19:57:26 dnsvm kernel: [ 1424.023409] named[2976]: segfault at 88 ip b72d1291 sp b6beee30 error 4 in libmysqlclient.so.16.0.0[b725b000+1aa000]
        Apr 13 19:57:26 WATCHDOG: named not running. Restarting
        Apr 13 20:11:56 dnsvm kernel: [ 2294.324663] named[6441]: segfault at 88 ip b7357291 sp b6473e30 error 4 in libmysqlclient.so.16.0.0[b72e1000+1aa000]

        syslog: http://pastebin.com/hjUyt8gN

    The first server is a native (non-virtualised) x64 server (Ubuntu 10.04 LTS); the second is virtualised (Ubuntu 11.10); the third is also virtualised (Ubuntu 10.04 LTS). These servers are only for DNS serving backed by a MySQL database, but the problem occurs on all servers and with all BIND versions.

        named.conf: http://pastebin.com/zwm1yP7V

    Can anybody help me, or offer any good ideas?
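
    Segfaults inside libmysqlclient under named are the classic symptom of the BIND DLZ MySQL driver's documented limitation: the MySQL client library cannot have its connections shared across threads, so the driver requires a non-threaded BIND. A hedged configure change to test (other options as in your build):

        ./configure --prefix=/usr --sysconfdir=/etc/bind --localstatedir=/var \
            --disable-threads \
            --with-dlz-mysql=yes --with-dlz-filesystem=yes --with-geoip=/usr
        make && make install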

  • Can't upgrade my Ubuntu server, it gets stuck on openjdk-6-jre-headless

    - by Jean-Nicolas Boulay Desjardins
    I am using Ubuntu Server. When I do apt-get upgrade, it gets stuck on:

        Setting up openjdk-6-jre-headless (6b20-1.9.7-0ubuntu1) ...

    Why? And what can I do to stop it? I tried removing it with apt-get and got this error:

        E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem.

    So then I tried:

        dpkg --purge openjdk-6-jre-headless

    and got:

        dpkg: dependency problems prevent removal of openjdk-6-jre-headless:
         openjdk-6-jre-lib depends on openjdk-6-jre-headless (>= 6b17).
         ca-certificates-java depends on openjdk-6-jre-headless (>= 6b16-1.6.1-2) | java6-runtime-headless; however:
          Package openjdk-6-jre-headless is to be removed.
          Package java6-runtime-headless is not installed.
          Package openjdk-6-jre-headless which provides java6-runtime-headless is to be removed.
         ca-certificates-java depends on openjdk-6-jre-headless (>= 6b16-1.6.1-2) | java6-runtime-headless; however:
          Package openjdk-6-jre-headless is to be removed.
          Package java6-runtime-headless is not installed.
          Package openjdk-6-jre-headless which provides java6-runtime-headless is to be removed.
        dpkg: error processing openjdk-6-jre-headless (--purge):
         dependency problems - not removing
        Errors were encountered while processing:
         openjdk-6-jre-headless

    The thing is, I think my DB is using it... I'm not sure. I am using Cassandra with Thrift, so yes, it's getting a bit more complex. Running dpkg --configure -a I get:

        dpkg: dependency problems prevent configuration of openjdk-6-jre:
         openjdk-6-jre depends on openjdk-6-jre-headless (>= 6b20-1.9.7-0ubuntu1); however:
          Package openjdk-6-jre-headless is not configured yet.
        dpkg: error processing openjdk-6-jre (--configure):
         dependency problems - leaving unconfigured
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place
        dpkg: dependency problems prevent configuration of libaccess-bridge-java:
         libaccess-bridge-java depends on default-jre | openjdk-6-jre | sun-java6-jre; however:
          Package default-jre is not installed.
          Package openjdk-6-jre is not configured yet.
          Package sun-java6-jre is not installed.
        dpkg: error processing libaccess-bridge-java (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of icedtea-6-jre-cacao:
         icedtea-6-jre-cacao depends on openjdk-6-jre-headless (= 6b20-1.9.7-0ubuntu1); however:
          Package openjdk-6-jre-headless is not configured yet.
        dpkg: error processing icedtea-6-jre-cacao (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libaccess-bridge-java-jni:
         libaccess-bridge-java-jni depends on libaccess-bridge-java (>= 1.26.2-5); however:
          Package libaccess-bridge-java is not configured yet.
        dpkg: error processing libaccess-bridge-java-jni (--configure):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         openjdk-6-jre
         libaccess-bridge-java
         icedtea-6-jre-cacao
         libaccess-bridge-java-jni

    Thanks again for any help.
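
    When setting up openjdk-6-jre-headless hangs rather than errors, the package's postinst is usually stuck doing work, and a frequently reported cause on headless servers is blocking on /dev/random entropy while it generates certificate and class data. A hedged way to see what it is waiting on:

        ps axf | grep -B2 -A2 openjdk    # locate the running maintainer script and its child
        sudo strace -p 12345             # hypothetical PID of the stuck child

    If it sits in a read of /dev/random, feeding entropy (disk and network activity, or the haveged daemon) lets it finish; afterwards sudo dpkg --configure -a should complete the remaining packages.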

  • What breaks in a Windows domain if a member has a high time skew?

    - by Ryan Ries
    It's taken for granted by most IT people that in a Windows domain, if a member server's clock is off by more than 5 minutes (or however many minutes you've configured it for) from that of its domain controller, logons and authentications will fail. But that is not necessarily true; at least not for all authentication processes on all versions of Windows. For instance, I can set my time on my Windows 7 client to be skewed all to heck and logoff/logon still works fine. What happens is that my client sends an AS_REQ (with its time stamp) to the domain controller, and the DC responds with KRB_AP_ERR_SKEW. But the magic is that when the DC responds with the aforementioned Kerberos error, it also includes its own time stamp, which the client in turn uses to adjust its time and resubmit the AS_REQ, which is then approved. This behavior is not considered a security threat because encryption and secrets are still being used in the communication. This is also not just a Microsoft thing; RFC 4430 describes this behavior. So my question is: does anyone know when this changed? And why is it that other things fail? For instance, Office Communicator kicks me off if my clock starts drifting too far out. I really wish to have more detail on this.

    edit: Here's the bit from RFC 4430 that I'm talking about:

        If the server clock and the client clock are off by more than the
        policy-determined clock skew limit (usually 5 minutes), the server
        MUST return a KRB_AP_ERR_SKEW. The optional client's time in the
        KRB-ERROR SHOULD be filled out. If the server protects the error by
        adding the Cksum field and returning the correct client's time, the
        client SHOULD compute the difference (in seconds) between the two
        clocks based upon the client and server time contained in the
        KRB-ERROR message. The client SHOULD store this clock difference and
        use it to adjust its clock in subsequent messages. If the error is
        not protected, the client MUST NOT use the difference to adjust
        subsequent messages, because doing so would allow an attacker to
        construct authenticators that can be used to mount replay attacks.
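
    For anyone measuring this in practice, Windows' own time service can show the live offset between a member and its DC (the hostname below is hypothetical):

        w32tm /stripchart /computer:dc01.example.com /samples:5
        w32tm /resync

    /stripchart prints the signed offset per sample, which makes it easy to correlate an application's cutoff (like the Communicator disconnects) with a specific skew.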
