Search Results

Search found 19087 results on 764 pages for 'zend log'.


  • Ubuntu reboots suddenly

    - by Gladiator
    It's the second day I have this issue, and Ubuntu still reboots suddenly. Nothing significant in syslog:

        salim@SalimPC:~$ tail -f /var/log/syslog
        Nov 7 12:34:53 SalimPC dbus[873]: [system] Successfully activated service 'com.ubuntu.SystemService'
        SalimPC dbus[873]: [system] Activating service name='org.freedesktop.PackageKit' (using servicehelper)
        SalimPC AptDaemon: INFO: Initializing daemon
        SalimPC AptDaemon.PackageKit: INFO: Initializing PackageKit compat layer
        SalimPC dbus[873]: [system] Successfully activated service 'org.freedesktop.PackageKit'
        SalimPC AptDaemon.PackageKit: INFO: Initializing PackageKit transaction
        SalimPC AptDaemon.Worker: INFO: Simulating trans:/org/debian/apt/transaction/6933b4b977d944fa8714898c01bfeae4
        SalimPC AptDaemon.Worker: INFO: Processing transaction /org/debian/apt/transaction/6933b4b977d944fa8714898c01bfeae4
        SalimPC AptDaemon.PackageKit: INFO: Get updates()
        Nov 7 12:34:58 SalimPC AptDaemon.Worker: INFO: Finished transaction /org/debian/apt/transaction/6933b4b977d944fa8714898c01bfeae4

    --- Previous post --- My Ubuntu has rebooted suddenly (twice so far, within one hour). After login, a crash was indicated in /usr/sbin/ntop. Below are the syslog entries and a screenshot of the crash.

        salim@SalimPC:~$ tail /var/log/syslog
        Nov 6 18:25:38 SalimPC ntop[1630]: **WARNING** packet truncated (9642->8232)
        Nov 6 18:25:38 SalimPC ntop[1630]: **WARNING** packet truncated (8274->8232)
        Nov 6 18:25:38 SalimPC ntop[1630]: **WARNING** packet truncated (11010->8232)
        Nov 6 18:25:38 SalimPC ntop[1630]: **WARNING** packet truncated (17850->8232)
        Nov 6 18:25:38 SalimPC ntop[1630]: **WARNING** packet truncated (8274->8232)
        Nov 6 18:25:39 ntop[1630]: last message repeated 2 times
        Nov 6 18:25:39 SalimPC ntop[1630]: **WARNING** packet truncated (16482->8232)
        Nov 6 18:25:40 SalimPC ntop[1630]: **WARNING** packet truncated (11010->8232)
        Nov 6 18:25:43 SalimPC ntop[3075]: THREADMGMT[t3063068672]: ntop RUNSTATE: PREINIT(1)
        Nov 6 18:25:43
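
    Since nothing in syslog precedes the reboot, the usual next step is to work out whether the box is resetting cleanly or being hard-reset. A minimal diagnostic sketch (generic commands, not specific to this machine):

        # Did the kernel log a clean shutdown, or does the log just stop?
        last -x reboot shutdown | head
        # Look for panics, machine-check errors, or thermal events:
        grep -iE 'panic|oops|mce|thermal|critical' /var/log/kern.log /var/log/syslog
        # Hard resets with no log trail often point at hardware; memtest helps rule out RAM:
        sudo apt-get install memtest86+   # then select it from the GRUB menu at boot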

    Read the article

  • Large invoice database structure and rendering

    - by user132624
    Our client has an MS SQL database with 1 million customer invoice records in it. Using the database, our client wants its customers to be able to log into a front-end web site and then view, modify and download their company's invoices. Given the size of the database and the large number of customers who may log into the web site at any time, we are concerned about database engine performance and web-page invoice rendering performance. The 1 million invoice database covers just 90 days of sales, so we will remove invoices over 90 days old from the database. Most of the invoices have multiple line items. We can easily convert our invoices into various data formats, so for example it is easy for us to convert between SQL and XML with related schema and XSLT. Any data conversion would be done on another server so as not to burden the web interface server. We have tentatively decided to run the web site on a .NET Framework IIS web server using MS SQL on MS Azure. How would you suggest we structure our database for best performance? For example, should we put all the invoices of all customers located within the same 5- or 6-digit zip code into the same table? Or could we set up a separate home directory for each customer on IIS and place each customer's invoices in that directory in XML format? And secondly, what would you suggest is the best method to render customer invoices on a web page and allow customers to modify them? The ADO.NET XML DataSet looks intriguing to us as a method, but we have never used it.
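
    For what it's worth, the conventional starting point is a single invoice table clustered by customer, rather than sharding by zip code or exporting per-customer XML files. A minimal sketch (table and column names are illustrative, not from the post):

        -- Clustering on (CustomerId, InvoiceId) keeps each customer's invoices
        -- physically contiguous, so the per-customer list page is one range scan.
        CREATE TABLE dbo.Invoice (
            CustomerId  INT           NOT NULL,
            InvoiceId   BIGINT        NOT NULL,
            InvoiceDate DATE          NOT NULL,
            TotalAmount DECIMAL(18,2) NOT NULL,
            CONSTRAINT PK_Invoice PRIMARY KEY CLUSTERED (CustomerId, InvoiceId)
        );
        -- A nonclustered index on the date supports the rolling 90-day purge.
        CREATE INDEX IX_Invoice_InvoiceDate ON dbo.Invoice (InvoiceDate);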

    Read the article

  • Sporadic '.Xauthority not writable, changes will be ignored' going from OSX -> Linux

    - by Kamil Kisiel
    Every now and then when users SSH from their OS X (Snow Leopard) workstation to one of our Linux hosts they receive the message: /usr/bin/xauth: ~/.Xauthority not writable, changes will be ignored Of course, their X forwarded applications will not work at this point. However, if they log out and log right back in again they do not get the message and everything works as expected. On their Mac they get their home directory via AFP. The Linux machines get it via NFS. Any ideas on what could be going on here?
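
    One pattern worth checking when the failure is sporadic and home directories live on NFS: xauth leaves ~/.Xauthority-c and ~/.Xauthority-l lock files behind if a session dies mid-update, and a later login then finds the file locked. A quick check (a guess at the cause, not a confirmed diagnosis):

        # Stale xauth lock files from an interrupted session?
        ls -l ~/.Xauthority ~/.Xauthority-c ~/.Xauthority-l 2>/dev/null
        # If the -c/-l files exist and no session is mid-login, remove them:
        rm -f ~/.Xauthority-c ~/.Xauthority-l
        # Also confirm the NFS server isn't squashing the user's write access:
        touch ~/.Xauthority && echo writable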

    Read the article

  • SSH closing by itself - root works fine

    - by Antti
    I'm trying to connect to a server, but if I use any user other than root the connection closes itself after a successful login:

        XXXXXXX:~ user$ ssh -v [email protected]
        OpenSSH_5.2p1, OpenSSL 0.9.8l 5 Nov 2009
        debug1: Reading configuration data /etc/ssh_config
        debug1: Connecting to XXXXXXX.XXXXXX.XXX [xxx.xxx.xxx.xxx] port 22.
        debug1: Connection established.
        debug1: identity file /Users/user/.ssh/identity type -1
        debug1: identity file /Users/user/.ssh/id_rsa type -1
        debug1: identity file /Users/user/.ssh/id_dsa type 2
        debug1: Remote protocol version 2.0, remote software version OpenSSH_4.3
        debug1: match: OpenSSH_4.3 pat OpenSSH_4*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.2
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'XXXXXXX.XXXXXX.XXX' is known and matches the RSA host key.
        debug1: Found key in /Users/user/.ssh/known_hosts:12
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug1: Next authentication method: publickey
        debug1: Offering public key: /Users/user/.ssh/woo_openssh
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug1: Offering public key: /Users/user/.ssh/sidlee.dsa
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug1: Trying private key: /Users/user/.ssh/identity
        debug1: Trying private key: /Users/user/.ssh/id_rsa
        debug1: Offering public key: /Users/user/.ssh/id_dsa
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug1: Next authentication method: password
        [email protected]'s password:
        debug1: Authentication succeeded (password).
        debug1: channel 0: new [client-session]
        debug1: Entering interactive session.
        Last login: Mon Mar 29 01:41:51 2010 from 193.67.179.2
        debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
        debug1: channel 0: free: client-session, nchannels 1
        Connection to XXXXXXX.XXXXXX.XXX closed.
        Transferred: sent 2976, received 2136 bytes, in 0.5 seconds
        Bytes per second: sent 5892.2, received 4229.1
        debug1: Exit status 1

    If I log in as root the exact same way, it works as expected. I've added the users I want to log in with to a group (sshusers) and added that group to /etc/sshd_config:

        AllowGroups sshusers

    I'm not sure what to try next, as I don't get a clear error anywhere. I would like to enable specific accounts to log in so that I can disable root. This is a GridServer/Media Temple box (CentOS).
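
    Exit status 1 right after a successful password authentication usually means the server side ran something that failed: the account's login shell, a PAM session module, or a ForceCommand. A few server-side checks (a sketch; paths assume CentOS):

        # Does the account have a valid, executable login shell?
        grep '^user:' /etc/passwd
        # Run a trivial command over ssh; if this works but a login shell doesn't,
        # suspect the shell itself or its startup files:
        ssh user@host /bin/true && echo "exec ok"
        # Watch the server's auth log while reproducing the failure:
        tail -f /var/log/secure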

    Read the article

  • Error building main Guest Additions Module while installing VirtualBox guest additions

    - by Praveen Sripati
    I have installed an Ubuntu 12.10 guest on an Ubuntu 12.04 host using VirtualBox. Everything is from the repository, no direct installs. When I install the guest additions, the error below is shown in the console. Before running the command I attached VBoxGuestAdditions.iso in the guest. The closest I could get is this article, which says to install the latest version of VirtualBox (not the one from the repository). Is there any alternate solution?

        sudo ./VBoxLinuxAdditions.run
        Verifying archive integrity... All good.
        Uncompressing VirtualBox 4.1.12 Guest Additions for Linux.........
        VirtualBox Guest Additions installer
        Removing installed version 4.1.12 of VirtualBox Guest Additions...
        Removing existing VirtualBox DKMS kernel modules ...done.
        Removing existing VirtualBox non-DKMS kernel modules ...done.
        Building the VirtualBox Guest Additions kernel modules
        The headers for the current running kernel were not found. If the following
        module compilation fails then this could be the reason.
        Building the main Guest Additions module ...fail!
        (Look at /var/log/vboxadd-install.log to find out what went wrong)
        Doing non-kernel setup of the Guest Additions ...done.
        Installing the Window System drivers
        Warning: unknown version of the X Window System installed.
        Not installing X Window System drivers.
        Installing modules ...done.
        Installing graphics libraries and desktop services components ...done.
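
    The installer itself names the likely cause: the headers for the running kernel are missing, so the kernel module cannot be built. A sketch of the usual fix inside the guest:

        # Install the toolchain and headers matching the running kernel, then re-run:
        sudo apt-get install build-essential dkms linux-headers-$(uname -r)
        sudo ./VBoxLinuxAdditions.run
        # If it still fails, the reason is logged here:
        less /var/log/vboxadd-install.log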

    Read the article

  • Why "object reference not set to an instance of an object" doesn't tell us which object?

    - by Saeed Neamati
    We're launching a system, and we sometimes get the famous NullReferenceException with the message Object reference not set to an instance of an object. However, in a method where we have almost 20 objects, a log which says only that some object is null is of no use at all. It's like telling the security agent of a seminar that one man among the 100 attendees is a terrorist: useless until you know which man. Likewise, if we want to remove the bug, we need to know which object is null. Now, something has obsessed my mind for several months: why doesn't .NET give us the name, or at least the type, of the object reference that is null? Can't it determine the type from reflection or any other source? Also, what are the best practices for working out which object is null? Should we always test objects for null in these contexts manually and log the result? Is there a better way?
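
    Until the runtime gives richer diagnostics, the usual practice is to fail fast at method boundaries, so the exception names the offending argument instead of surfacing as an anonymous NullReferenceException twenty lines later. A minimal C# sketch (types and names are illustrative):

        using System;

        public class InvoiceProcessor
        {
            // Guard clauses turn "which of 20 objects was null?" into an
            // ArgumentNullException that carries the parameter name.
            public void Process(object order, object customer)
            {
                if (order == null) throw new ArgumentNullException("order");
                if (customer == null) throw new ArgumentNullException("customer");
                // ...safe to dereference both from here on.
            }
        }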

    Read the article

  • Severe errors in solr configuration: Error loading class

    - by James Lawruk
    Installed Solr on my Windows XP PC. Tomcat seems to be working fine, but I cannot get Solr to work. I noticed the TrieDateField is declared in a file called schema.xml in the SolrHome directory. Any thoughts? The URL http://localhost:8080/solr/ returns:

        HTTP Status 500 - Severe errors in solr configuration.
        Check your log files for more detailed information on what may be wrong.
        If you want solr to continue after configuration errors, change:
        <abortOnConfigurationError>false</abortOnConfigurationError> in null
        -------------------------------------------------------------
        org.apache.solr.common.SolrException: Unknown fieldtype 'date' specified on field updated

    Here is an excerpt from the catalina log file:

        SEVERE: org.apache.solr.common.SolrException: Error loading class 'solr.TrieDateField'
        at org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:273)
        Caused by: java.lang.ClassNotFoundException: solr.TrieDateField
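
    The two errors together suggest a version mismatch (an inference, not confirmed): schema.xml references solr.TrieDateField, which only exists in Solr 1.4+, so the deployed solr.war is probably older; the 'date' fieldType then fails to load, and every field typed 'date' breaks with it. Roughly, schema.xml needs a fieldType definition the deployed jars can actually satisfy:

        <!-- Requires Solr 1.4+; on older wars use class="solr.DateField" instead. -->
        <fieldType name="date" class="solr.TrieDateField" precisionStep="0"
                   omitNorms="true" positionIncrementGap="0"/>
        <field name="updated" type="date" indexed="true" stored="true"/>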

    Read the article

  • Server Application Unavailable?

    - by suryasasidhar
    Hi, I am a developer and I built a web application (ASP.NET). It works fine on my local server, but after I took a new domain and uploaded the site to it, the default page loads and then clicking any link button produces the error below. The URL of my site is m3connect.in. Can you help me?

        Server Application Unavailable
        The web application you are attempting to access on this web server is
        currently unavailable. Please hit the "Refresh" button in your web browser
        to retry your request.
        Administrator Note: An error message detailing the cause of this specific
        request failure can be found in the application event log of the web server.
        Please review this log entry to discover what caused this error to occur.
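
    On shared hosts this error most often means the worker process is crashing or the application is mapped to the wrong ASP.NET version (an assumption; the event log named in the message is authoritative). With server access, the checks would look like:

        REM Which ASP.NET versions are registered, and what is the site mapped to?
        %windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -lv
        REM Re-register ASP.NET 2.0 against IIS if the mapping is broken:
        %windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -i
        REM Then review the Application event log:
        eventvwr.msc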

    Read the article

  • Mapping tomcat apache worker

    - by metamorpheus
    I am running an Apache2 server connected to Tomcat 5.5.

        # workers.properties
        workers.tomcat_home=/usr/share/tomcat5.5
        workers.java_home=/usr/lib/jvm/java-6-sun
        ps=/
        worker.list=worker1
        worker.worker1.port=8009
        worker.worker1.host=127.0.0.1
        worker.worker1.type=ajp13
        worker.worker1.lbfactor=1

    The JkMount is defined as follows:

        LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so
        JkWorkersFile /etc/apache2/workers.properties
        JkLogFile /var/log/apache2/mod_jk.log
        JkLogLevel debug
        JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
        JkMount /jsp-examples worker1
        JkMount /jsp-examples/* worker1
        JkMount /servlets-examples worker1
        JkMount /servlets-examples/* worker1
        JkMount /tcontainer worker1
        JkMount /tcontainer/* worker1

    If I call 127.0.0.1/servlets-examples, the examples are displayed and executed correctly. If I call [same server as above]/tcontainer, I get the following error (provided by Tomcat 5.5):

        The requested resource (/tcontainer) is not available.

    How can I define where to get the sources? I have a configuration file in /usr/share/tomcat-5.5-webapps/tcontainer.xml:

        <Context path="/tcontainer" docBase="/var/www/web96/html/tcontainer"
                 debug="0" privileged="true" allowLinking="true">
        </Context>

    What did I forget to configure, or what is wrong with my definitions? Thanks
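
    Since mod_jk is clearly forwarding (the 404 comes from Tomcat), the missing piece is probably on the Tomcat side: context descriptors are normally picked up from conf/Catalina/localhost/, not from a webapps directory. A sketch of the likely fix (paths are a guess for the Debian/Ubuntu tomcat5.5 packages):

        # Move the context descriptor to where Tomcat looks for per-app contexts:
        sudo mv /usr/share/tomcat-5.5-webapps/tcontainer.xml \
                /etc/tomcat5.5/Catalina/localhost/tcontainer.xml
        sudo /etc/init.d/tomcat5.5 restart
        # Verify Tomcat deployed the context:
        tail /var/log/tomcat5.5/catalina.out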

    Read the article

  • Apache no longer starts at Windows boot up

    - by w3d
    I have Apache installed as part of XAMPP, a local test server. It is configured as a Windows (XP) Service with Startup type "Automatic". For a long time it has always started when Windows boots, but recently this has stopped happening. I now need to start it manually via the XAMPP Control Panel, at which point it appears to start up perfectly OK. The only recent updates to the machine (that I recall) are Windows Updates, none of which appear to have "known issues" relating to this, and updates to Google Chrome. Any ideas what could prevent Apache from starting automatically at Windows (XP) boot up?

    EDIT#1: There are two related errors in my system event log regarding the Service Control Manager:

        Timeout (30000 milliseconds) waiting for the Apache2.2 service to connect.
        The Apache2.2 service failed to start due to the following error:
        The service did not respond to the start or control request in a timely fashion.

    When I manually start the Apache server after boot up, there are two "information" events stating that it was "sent a start control" and that it "entered the running state", although I notice it takes 19 seconds between the start control being sent and entering a running state, according to the event log. So maybe 30 seconds during boot up isn't long enough (anymore) for Apache to start?
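
    The event log supports the poster's own theory: the Service Control Manager gives a service 30 seconds to report as started, and this Apache takes ~19 seconds even on an idle machine, likely longer during boot. One workaround (a sketch; requires a reboot to take effect) is raising the SCM timeout:

        REM Raise the service start timeout from 30s to 120s (value in milliseconds):
        reg add "HKLM\SYSTEM\CurrentControlSet\Control" /v ServicesPipeTimeout /t REG_DWORD /d 120000 /f
        REM Alternatively, trim Apache's startup time (fewer modules, no DNS
        REM lookups in the config) so it fits inside the default 30 seconds.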

    Read the article

  • port to subdomain

    - by takeshin
    I have installed Hudson using apt-get, and the Hudson server is available on example.com:8080. For example.com I use the standard port *:80 and some virtual hosts set up this way:

        # /etc/apache2/sites-enabled/subdomain.example.com
        <Virtualhost *:80>
            ServerName subdomain.example.com
            ...
        </Virtualhost>

    Here is info about the Hudson process:

        /usr/bin/daemon --name=hudson --inherit --env=HUDSON_HOME=/var/lib/hudson
            --output=/var/log/hudson/hudson.log --pidfile=/var/run/hudson/hudson.pid
            -- /usr/bin/java -jar /usr/share/hudson/hudson.war --webroot=/var/run/hudson/war
        987 ?  Sl  1:08 /usr/bin/java -jar /usr/share/hudson/hudson.war --webroot=/var/run/hudson/war

    How should I forward http://example.com:8080 to http://hudson.example.com?
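
    A standard way to put Hudson behind hudson.example.com is a reverse-proxy vhost, in the same style as the existing ones (a sketch; assumes mod_proxy and mod_proxy_http are enabled via a2enmod):

        # /etc/apache2/sites-enabled/hudson.example.com
        <VirtualHost *:80>
            ServerName hudson.example.com
            ProxyPreserveHost On
            ProxyPass        / http://localhost:8080/
            ProxyPassReverse / http://localhost:8080/
        </VirtualHost>

    With that in place, Hudson itself can be bound to localhost (or port 8080 firewalled off) so it is reachable only through the subdomain.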

    Read the article

  • mysql my.cnf ignored

    - by mr12086
    [issue] I'm trying to modify a my.cnf value on my production server, but the changes aren't taking effect after a sudo service mysql restart. Using an exact copy of the my.cnf (downloaded and replaced the original) on my development server, the changes are visible from SHOW VARIABLES in the mysql command line. my.cnf is located at /etc/mysql/my.cnf:

        sudo find / -name my.cnf
        /etc/mysql/my.cnf

    So only one file exists on the entire system. Production is Ubuntu 10.04 LTS 64-bit, development is Ubuntu 11.10 32-bit. MySQL versions are 5.1.61 and 5.1.62 respectively. Kind regards.

    [my.cnf] Yes, it seems to have had all the comments removed and replaced with whitespace.

        [client]
        port    = 3306
        socket  = /var/run/mysqld/mysqld.sock

        [mysqld_safe]
        socket  = /var/run/mysqld/mysqld.sock
        nice    = 0

        [mysqld]
        user            = mysql
        socket          = /var/run/mysqld/mysqld.sock
        port            = 3306
        basedir         = /usr
        datadir         = /var/lib/mysql
        tmpdir          = /tmp
        skip-external-locking
        bind-address    = 127.0.0.1
        key_buffer      = 16M
        max_allowed_packet = 16M
        thread_stack    = 192K
        thread_cache_size = 8
        myisam-recover  = BACKUP
        query_cache_limit = 1M
        query_cache_size = 16M
        log_error       = /var/log/mysql/error.log
        expire_logs_days = 10
        max_binlog_size = 100M
        innodb_file_per_table = 1

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]

        [isamchk]
        key_buffer = 16M

        !includedir /etc/mysql/conf.d/
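
    Two things worth checking when edits seem to be ignored (a sketch): whether mysqld reads other config files after this one (the !includedir at the bottom means anything in /etc/mysql/conf.d/ can override my.cnf), and whether the restart actually restarted the daemon:

        # In what order does mysqld read config files?
        mysqld --verbose --help | grep -A1 'Default options'
        # Anything in conf.d/ silently overriding the edited value?
        grep -r 'query_cache_size' /etc/mysql/
        # Did the restart really happen? Check the reported uptime:
        mysqladmin status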

    Read the article

  • $TERM set to "dumb" causes problems with suspend

    - by julkiewicz
    I've just upgraded from 11.04 to 11.10. So far I love it; everything seems so much snappier. Now I just have one minor issue: when I try to suspend my laptop, it doesn't work. Instead it fades out the screen, locks it, and then instantly wakes back up. I've checked the logs in /var/log/pm-suspend.log and this fragment seems relevant:

        /usr/lib/pm-utils/sleep.d/000kernel-change suspend suspend: success.
        Running hook /usr/lib/pm-utils/sleep.d/00clear suspend suspend:
        TERM environment variable not set.
        /usr/lib/pm-utils/sleep.d/00clear suspend suspend: Returned exit code 1.
        Sat Nov 19 12:23:20 CET 2011: Inhibit found, will not perform suspend
        Sat Nov 19 12:23:20 CET 2011: Running hooks for resume

    The mentioned script at /usr/lib/pm-utils/sleep.d/00clear reads:

        #!/bin/bash
        clear

    When I open a terminal anywhere by hand, $TERM is set to either "linux" or "xterm". However, somehow when the 00clear command is executed, $TERM is set to "dumb". Two questions: What is the correct value for $TERM when running the 00clear script? Where can I set it up? I've looked for solutions on the web, but I could only find information on how to configure $TERM in a regular terminal (and that one is set properly).
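
    Rather than hunting for where pm-utils should get a $TERM from, the simpler route is making the hook tolerate not having one; it is the hook's non-zero exit that makes pm-suspend abort. A defensive rewrite of 00clear (my sketch, not the packaged fix):

        #!/bin/bash
        # Only clear when stdout is a real terminal with a usable $TERM;
        # otherwise exit 0 so the suspend chain continues.
        if [ -t 1 ] && [ -n "$TERM" ] && [ "$TERM" != "dumb" ]; then
            clear
        fi
        exit 0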

    Read the article

  • Apache VERY high page load time

    - by Aaron Waller
    My Drupal 6 site has been running smoothly for years but recently has experienced intermittent periods of extreme slowness (10-60 second page loads): several hours of slowness followed by hours of normal (4-6 second) page loads. The page always loads with no error; it just sometimes takes forever.

    My setup:

        Windows Server 2003
        Apache/2.2.15 (Win32) Jrun/4.0
        PHP 5
        MySQL 5.1
        Drupal 6
        ColdFusion 9
        VMware virtual environment
        DMZ behind a corporate firewall
        Traffic: 1-3 hits/sec average

    Troubleshooting so far:

        No applicable errors in the Apache error log
        No errors in the Drupal event log
        Drupal's devel module shows 242 queries in 366.23 milliseconds, page execution
            time 2069.62 ms (so queries and PHP scripts don't look like the problem)
        No unusually high CPU, memory, or disk IO
        ColdFusion apps and other static pages outside of Drupal also load slowly
        webpagetest.org shows a very high time-to-first-byte

    The problem seems to be with Apache responding to requests, but previously I've only seen this behavior under 100% CPU load. Judging solely by resource monitoring, it looks as though very little is going on. Here is the kicker: roughly half of the site's access comes from our LAN, but if I disable the firewall rule and block access from outside our network, internal (LAN) access (1000+ devices) is speedy. As soon as outside access is restored, the site is crippled. Apache config? Crawlers/bots? Attackers? I'm at the end of my rope. Where should I be looking to determine where the problem lies?
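
    Given that only externally-sourced traffic triggers it, a plausible suspect (an assumption worth testing) is slow external clients or bots tying up all of Apache's workers, which resource monitors would not show as CPU or disk load. Enabling mod_status makes the worker pool visible during a slow period:

        # httpd.conf sketch (Apache 2.2 syntax) - then watch
        # http://localhost/server-status during a slow spell:
        LoadModule status_module modules/mod_status.so
        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    If every slot shows busy during the slow spells, raising ThreadsPerChild (the winnt MPM's worker limit) and lowering Timeout/KeepAliveTimeout so slow remote clients release workers sooner would be the next experiments.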

    Read the article

  • Advantages of multiple SQL Server files with a single RAID array

    - by Dr Giles M
    Originally posted on Stack Overflow, but re-worded. Imagine the scenario: for a database I have RAID arrays R: (MDF), T: (transaction log) and, of course, shared transparent usage of X: (tempDB). I've been reading around and get the impression that if you are using RAID, then adding multiple SQL Server NDF files sitting on R: within a filegroup won't yield any more improvements. Of course, adding another RAID array S: and putting an NDF file on that would. However, being a reasonably savvy software person, it's not unthinkable to hypothesise that, even for smaller MDFs sitting on one RAID array, SQL Server will perform growth and locking operations (for writes) on the MDF, so adding NDFs to the filegroup, even if they sat on R:, would distribute the locking and growth operations, allowing more throughput? Or does the time taken to reconstruct the data from distributed filegroups outweigh the benefits of reduced locking? I'm also aware that the behaviour and benefits may be different for tables/indices/log. Is there a good site that distinguishes the benefits of multiple files when RAID is already in place?
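
    The mechanics are cheap enough to test directly: extra files in the same filegroup spread allocation-page (GAM/SGAM/PFS) contention even on a single array, and pre-sizing them avoids growth pauses altogether. A sketch (database and path names are illustrative):

        -- Add a second data file on the same array, pre-sized so autogrow
        -- (and the pauses it causes) rarely happens:
        ALTER DATABASE Sales
        ADD FILE (
            NAME = Sales_Data2,
            FILENAME = 'R:\SQLData\Sales_Data2.ndf',
            SIZE = 20GB,
            FILEGROWTH = 1GB
        ) TO FILEGROUP [PRIMARY];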

    Read the article

  • Cyrus: How Do I Configure saslauthd For Authentication?

    - by Nick
    I'm trying to get Cyrus IMAP (v2.2 on Ubuntu 9.04) set up and working, but I'm having a bit of trouble getting the login working correctly. I've created a mailbox for my test user "nrahl":

        cm user/nrahl

    and then created a password:

        $ saslpasswd2 nrahl

    I'm attempting to connect to the mailbox using Thunderbird, using the machine's LAN IP address as the host and "nrahl" as the username. It connects to the server and prompts me for the password. When I enter it, I get "Login to server failed." in Thunderbird, and /var/log/mail.log shows:

        Apr 15 19:20:01 IMAP cyrus/imap[1930]: accepted connection
        Apr 15 19:20:09 IMAP cyrus/imap[1930]: badlogin: [192.168.5.21] plaintext nrahl
            SASL(-13): authentication failure: checkpass failed

    Part of /etc/imapd.conf with comments removed:

        sieveusehomedir: false
        sievedir: /var/spool/sieve
        #mailnotifier: zephyr
        #sievenotifier: zephyr
        #dracinterval: 0
        #drachost: localhost
        hashimapspool: true
        allowplaintext: yes
        sasl_mech_list: PLAIN
        #allowapop: no
        #sasl_maximum_layer: 256
        #loginrealms: example.com
        #virtdomains: userid
        #defaultdomain:
        sasl_pwcheck_method: saslauthd
        #sasl_auxprop_plugin: sasldb
        sasl_auto_transition: no

    UPDATE: When setting sasl_pwcheck_method: alwaystrue in /etc/imapd.conf, login works correctly. So I'm assuming the issue is saslauthd-related.
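
    A likely mismatch (an inference from the config shown): saslpasswd2 stores the password in /etc/sasldb2, but pwcheck_method saslauthd delegates to the saslauthd daemon, which on Ubuntu defaults to PAM, where no such password exists. Two ways to line them up (sketch):

        # Option 1: point saslauthd at sasldb and restart it
        #   in /etc/default/saslauthd:  START=yes  and  MECHANISMS="sasldb"
        sudo /etc/init.d/saslauthd restart
        testsaslauthd -u nrahl -p 'the-password'   # verify outside Cyrus

        # Option 2: skip the daemon entirely - in /etc/imapd.conf use
        #   sasl_pwcheck_method: auxprop
        #   sasl_auxprop_plugin: sasldb
        # then restart the Cyrus master process.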

    Read the article

  • Using Squid on Debian, Cannot Connect Error

    - by Zed Said
    I am trying to set up Squid on Debian and am getting a connection refused error:

        squidclient http://www.apple.com/ > test
        client: ERROR: Cannot connect to 127.0.0.1:3128: Connection refused

    Here is my config:

        visible_hostname none
        cache_effective_user proxy
        cache_effective_group proxy
        cache_dir ufs /var/spool/squid 2048 16 256
        cache_mem 512 MB
        cache_access_log /var/log/squid/access.log
        emulate_httpd_log on
        strip_query_terms off
        read_ahead_gap 128 Kb
        collapsed_forwarding on
        refresh_stale_hit 30 seconds
        retry_on_error on
        maximum_object_size_in_memory 1 MB
        acl all src 0.0.0.0/0.0.0.0
        acl purgehosts src 127.0.0.1/255.255.255.255
        # Caching static objects in __data is important.
        # Without that, apache processes sit around spooling static objects.
        acl QUERY urlpath_regex /cgi-bin/ /_edit /_admin /_login /_nocache /_recache /__lib /__fudge
        acl PURGE method PURGE
        acl POST method POST
        cache deny QUERY
        cache deny POST
        http_access allow PURGE purgehosts
        http_access deny PURGE
        http_access allow all
        http_port 127.0.0.1:80
        http_port 50.56.206.139:80
        cache_peer 127.0.0.1 parent 80 0 originserver no-query no-digest default
        redirect_rewrites_host_header off
        read_ahead_gap 128 Kb
        shutdown_lifetime 5 seconds

    Any ideas why this is happening? What have I missed?
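
    Nothing may be wrong with Squid at all: this config binds http_port to port 80, while squidclient defaults to 127.0.0.1:3128. Pointing the client at the configured port should confirm (a sketch):

        # Tell squidclient which port the proxy actually listens on:
        squidclient -h 127.0.0.1 -p 80 http://www.apple.com/ > test
        # Double-check what squid has actually bound:
        netstat -lntp | grep -i squid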

    Read the article

  • How to redirect http requests to https (nginx)

    - by spuder
    There appear to be many questions and guides out there that instruct how to set up nginx to redirect http requests to https. Many are outdated, or just flat out wrong.

        server {
          listen *:80;
          server_name <%= @fqdn %>;
          #root /nowhere;
          #rewrite ^ https://$server_name$request_uri? permanent;
          #rewrite ^ https://$server_name$request_uri permanent;
          #return 301 https://$server_name$request_uri;
          #return 301 http://$server_name$request_uri;
          #return 301 http://192.168.33.10$request_uri;
          return 301 http://$host$request_uri;
        }

        server {
          listen *:443 ssl default_server;
          server_name <%= @fqdn %>;
          server_tokens off;
          root <%= @git_home %>/gitlab/public;
          ssl on;
          ssl_certificate <%= @gitlab_ssl_cert %>;
          ssl_certificate_key <%= @gitlab_ssl_key %>;
          ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
          ssl_ciphers AES:HIGH:!ADH:!MDF;
          ssl_prefer_server_ciphers on;

          location / {
            # serve static files from defined root folder;.
            # @gitlab is a named location for the upstream fallback, see below
            try_files $uri $uri/index.html $uri.html @gitlab;
          }

          # if a file, which is not found in the root folder is requested,
          # then the proxy pass the request to the upsteam (gitlab puma)
          location @gitlab {
            proxy_read_timeout 300;     # https://github.com/gitlabhq/gitlabhq/issues/694
            proxy_connect_timeout 300;  # https://github.com/gitlabhq/gitlabhq/issues/694
            proxy_redirect off;
            etc....

    I've restarted after every configuration change, and yet I still only get the 'Welcome to nginx' page when visiting http://192.168.33.10, whereas https://192.168.33.10 works perfectly. Why will nginx still not redirect http requests to https?

        tailf /var/log/nginx/access.log
        192.168.33.1 - - [22/Oct/2013:03:41:39 +0000] "GET / HTTP/1.1" 304 0 "-"
            "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0"
        192.168.33.1 - - [22/Oct/2013:03:44:43 +0000] "GET / HTTP/1.1" 200 133 "-"
            "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0"

        tailf /var/log/nginx/gitlab_error.log
        2013/10/22 02:29:14 [crit] 27226#0: *1 connect() to
            unix:/home/git/gitlab/tmp/sockets/gitlab.socket failed
            (2: No such file or directory) while connecting to upstream,
            client: 192.168.33.1, server: gitlab.localdomain,
            request: "GET / HTTP/1.1", upstream:
            "http://unix:/home/git/gitlab/tmp/sockets/gitlab.socket:/",
            host: "192.168.33.10"

    Resources:

        http://wiki.nginx.org/Pitfalls
        How to make nginx redirect
        How to force or redirect to SSL in nginx?
        nginx ssl redirect
        Nginx & Https Redirection
        https://www.tinywp.in/301-redirect-wordpress/
        How to force or redirect to SSL in nginx?
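
    Two things stand out (inferences from the config and logs, not a confirmed fix): the active return line redirects to http:// rather than https://, which at best loops; and the 'Welcome to nginx' page means requests by bare IP are being served by the stock default site, not this server block, since 192.168.33.10 never matches the templated server_name. A sketch:

        # 1. Make the :80 block the default and redirect to HTTPS, not HTTP:
        #      listen 80 default_server;
        #      return 301 https://$host$request_uri;
        # 2. Remove the packaged default site that currently owns port 80:
        sudo rm /etc/nginx/sites-enabled/default
        sudo nginx -t && sudo service nginx restart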

    Read the article

  • Creating a software RAID mirror with a bad-block HDD: how to check data integrity?

    - by rumburak
    There is an error in the System event log like this one: "The device, \Device\Harddisk1\DR1, has a bad block." Because of the above, I created a RAID 1 of this disk and another one, using Windows Server 2008 R2 software RAID volumes. The volume in Disk Manager is marked as "Failed Redundancy" and "At Risk". I can use "Reactivate Disk" and it starts to re-sync, but after a while it stops and returns to the previous state: the re-sync halts on a bad block on the old disk and logs the same error in the System event log. The old disk's status is Errors; the new disk's status is Online. How can I check that there is an exact copy of the old disk on the new one? It is a server machine, so I would prefer to keep it running during this check.
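
    Since the mirror never completes, the safest integrity check works at the file level rather than trusting the volume status. One approach (a sketch using Microsoft's downloadable File Checksum Integrity Verifier; any checksum tool works): checksum everything now, repeat after a completed re-sync, and compare.

        REM Baseline checksums of the volume (runs while the server stays up):
        fciv.exe D:\ -r -md5 -xml baseline.xml
        REM After a completed re-sync, verify the volume against the baseline:
        fciv.exe -v -xml baseline.xml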

    Read the article

  • Drop in solution for logging to DB

    - by Jake
    I'm considering setting up our servers to log to a Mongo Database rather than log files. Logs will then be all on one server, queryable, and overall easier to manage. I'd love to find a solution that will allow all the different processes I have running to write to DB rather than files (or perhaps something to read the files, pass the logs on and truncate the files). I don't want to have to find a different solution for every process if I can avoid it. So, does anyone know of an existing solution to this problem?
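
    One existing solution that fits the "drop-in" requirement: rsyslog (already the stock syslog daemon on many distros) has a MongoDB output module, so processes keep writing to syslog or files and rsyslog does the forwarding. A sketch (assumes rsyslog v6+ with the rsyslog-mongodb package installed; parameter names may vary by version):

        # /etc/rsyslog.d/mongo.conf
        module(load="ommongodb")
        *.* action(type="ommongodb"
                   server="logs.example.com"
                   db="syslog"
                   collection="events")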

    Read the article

  • cron not even sending local mail to /var/mail/

    - by Yang
    I'm using a very plain Ubuntu Server 9.04, and cron isn't delivering any mail to my /var/mail/USER (the file hasn't even been created). Here's my full crontab:

        # m h dom mon dow command
        15 * * * * $HOME/.cron/sync-bookmarks.bash

    If I add

        # m h dom mon dow command
        15 * * * * $HOME/.cron/sync-bookmarks.bash >& /tmp/log

    then I see the stdout and stderr in /tmp/log. I'm not (yet) interested in actual remote email delivery, just local delivery to the mail spool file. Why isn't mail working? Thanks in advance for any tips.
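
    cron doesn't write the spool file itself; it hands output to the local MTA, and a minimal Ubuntu server often has none installed, in which case the output is silently discarded. A quick check and the usual fix (sketch):

        # cron logs "(CRON) info (No MTA installed, discarding output)" when this is the cause:
        grep CRON /var/log/syslog | tail
        dpkg -l | grep -E 'postfix|exim4|sendmail'   # any MTA present?
        # Install one; "Local only" is enough for /var/mail delivery:
        sudo apt-get install postfix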

    Read the article

  • Opscode Chef nginx compile from source issue reports successful run but does nothing

    - by v_abhi_v
    I am trying to install nginx from source with Opscode Chef, and it's a bit weird: the run completes without complaint, but nginx doesn't actually get installed. This is what my role attributes look like:

        "nginx": {
          "default_site_enabled": false,
          "version": "1.2.6",
          "init_style": "init",
          "install_method": "source",
          "configure_flags": [
            "--without-http_access_module",
            "--without-http_auth_basic_module",
            "--without-http_autoindex_module",
            "--without-http_browser_module",
            "--without-http_charset_module",
            "--without-http_fastcgi_module",
            "--without-http_memcached_module",
            "--without-http_referer_module",
            "--without-http_scgi_module",
            "--without-http_split_clients_module"
          ],
          "log_dir": "/var/log/nginx",
          "binary": "/opt/nginx/sbin/nginx",
          "source": {
            "prefix": "/opt/nginx/dist",
            "modules": ["http_ssl_module", "http_gzip_static_module"]
          }
        },

    The Chef log shows:

        [2012-12-19T02:37:44+00:00] INFO: Processing bash[compile_nginx_source] action run (nginx::source line 82)
        [2012-12-19T02:37:45+00:00] INFO: bash[compile_nginx_source] ran successfully

    I am clueless about what's going on :(
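
    A one-second "successful" compile suggests the resource's guard fired rather than an actual build happening (an educated guess: the cookbook's bash[compile_nginx_source] resource is typically guarded by a not_if that compares an existing binary's version). Worth checking on the node:

        # Does a binary already exist, and does its version match node['nginx']['version']?
        /opt/nginx/sbin/nginx -v
        # If the guard is the culprit, remove the stale binary and re-converge:
        sudo rm -f /opt/nginx/sbin/nginx
        sudo chef-client
        # The downloaded source and build transcript, if any, land under the Chef cache:
        ls /var/chef/cache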

    Read the article

  • rsync command deletion error "IO error encountered -- skipping file deletion"

    - by Jam88
    I use an rsync command to back up files from one of my Ubuntu servers to another Ubuntu machine. The backup server triggers a script that uses rsync. Here is the command I use:

        rsync -rltvh --partial --stats --exclude=.beagle/ --exclude=.* --delete-after \
            root@live_server:/home/ /home/live_server_backup/home > /tmp/logfile.log 2>&1

    live_server is ssh-able without a password, so that part works. Now the problem is with the --delete-after option: after all files are synced, the deletion step is skipped at the end. The logfile error is:

        IO error encountered -- skipping file deletion

    When I looked for the cause, there were errors during the file sync:

        rsync: send_files failed to open "/home/xyz/Desktop/PPT_session_1_context.pdf": Permission denied (13)

    So my understanding is that, because rsync could not read all the files from the source, it skips the deletion for safety. Is there any way to make --delete-after work even when there is a permission error? I do not want to use forced deletion, as it could be dangerous in some situations.
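
    rsync has a flag for exactly this case: --ignore-errors tells --delete to proceed even when I/O errors occurred during the run. A sketch:

        rsync -rltvh --partial --stats --exclude=.beagle/ --exclude=.* \
            --delete-after --ignore-errors \
            root@live_server:/home/ /home/live_server_backup/home > /tmp/logfile.log 2>&1

    The lower-risk alternative is to fix the read-permission failures themselves, since with --ignore-errors a file that was merely unreadable during one run can cause its backup copy to be deleted.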

    Read the article

  • Intermittent freeze of Windows 7 x64 on laptop

    - by georged
    I have a Lenovo W500, 8GB RAM, 250GB Crucial SSD, running Windows 7 x64 using boot-to-VHD. Lately I've started to experience random freezes lasting 5-30 seconds. And when I say "freeze" I mean it; it's like a time machine: no input taken, nothing written anywhere, no log entries, no disk activity. And then suddenly the machine wakes up and continues as if nothing happened. I do suspect one of the drivers, but I don't see any recent updates in the log except the .NET 4 framework and Adobe Reader. Of new or updated software I can only recall Skype and IE9, but the freezes definitely started before the IE install. Regular software includes, but is not limited to: Office 2010, VMware Workstation, IE9, Chrome, Skype, Seesmic/TweetDeck, Live Writer. How should I approach the "hunting season" to find the culprit? Any tools or packages that could help me identify the component, program or driver that causes the freeze? Cheers
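
    For whole-system stalls that leave no log trail, a kernel-level trace is usually the most direct tool. A sketch using xperf from the Windows Performance Toolkit (part of the Windows SDK): start a trace, wait for a freeze, then inspect which driver's DPC/ISR time spikes.

        REM Start kernel logging with a general-purpose profile:
        xperf -on DiagEasy
        REM ...reproduce a freeze, then stop and write the trace:
        xperf -d freeze.etl
        REM Open it and look at the DPC/ISR and disk I/O graphs:
        xperfview freeze.etl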

    Read the article

  • Oracle EE 11.2g: how to generate fresh new redo logs

    - by Aikanaro
    Hi, in the company I work for we are heavy users of VMware machines. Almost all our projects are developed inside a virtual environment up to the point where we have to deploy them into a production system. While in development, some colleagues of mine deleted the redo log files for Oracle in the hope of gaining some free space. Now they are unable to start the database instance. Is there a way of generating fresh new redo logs so that the instance can be started? This is urgent, and even though I'm currently googling for an answer, I have yet to find one. Thanks in advance.
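
    With the online redo logs gone, the standard (lossy) escape hatch is an incomplete recovery followed by OPEN RESETLOGS, which recreates the redo log files from scratch. A sketch in SQL*Plus, connected as SYSDBA (an assumption that the datafiles are intact; take a cold backup first, since any transactions in the lost redo are gone for good):

        STARTUP MOUNT;
        RECOVER DATABASE UNTIL CANCEL;
        -- answer CANCEL at the recovery prompt
        ALTER DATABASE OPEN RESETLOGS;   -- recreates the online redo log files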

    Read the article
