Search Results

Search found 25198 results on 1008 pages for 'failed request tracing'.

Page 384/1008 | < Previous Page | 380 381 382 383 384 385 386 387 388 389 390 391  | Next Page >

  • Opscenter repair service times out. ERROR: Requested range intersects a local range [...]

    - by jlemire-zs
    My production cluster has had the repair service enabled since April 16th, with the default 9-day time to completion, and repairs would complete properly. Since May 22nd, however, the service is being disabled automatically by OpsCenter.

    From /var/log/opscenter/opscenterd.log:

        [...]
        2014-06-03 21:13:47-0400 [zs_prod] ERROR: Repair task (<Node 10.1.0.22='6417880425364517165'>, (-4019838962446882275L, -4006140687792135587L), set(['zs_logging', 'OpsCenter'])) timed out after 3600 seconds.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: Repair task (<Node 10.1.0.22='6417880425364517165'>, (-4006140687792135587L, -4006140687792135586L), set(['zs_logging', 'OpsCenter'])) timed out after 3600 seconds.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: More than 100 errors during repair service, shutting down repair service
        2014-06-03 22:16:44-0400 [zs_prod] INFO: Stopping repair service
        [...]

    From /var/log/opscenter/repair_service/zs_prod.log:

        [...]
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: Repair task (<Node 10.1.0.22='6417880425364517165'>, (-4006140687792135587L, -4006140687792135586L), set(['zs_logging', 'OpsCenter'])) timed out after 3600 seconds.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: Task (<Node 10.1.0.22='6417880425364517165'>, (-4006140687792135587L, -4006140687792135586L), set(['zs_logging', 'OpsCenter'])) has failed 1 times.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: 101 errors have ocurred out of 100 allowed.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: More than 100 errors during repair service, shutting down repair service
        2014-06-03 22:16:44-0400 [zs_prod] INFO: Stopping repair service

    On the nodes on which the repair fails, from /var/log/cassandra/system.log:

        ERROR [RMI TCP Connection(93502)-10.1.0.22] 2014-06-03 20:12:28,858 StorageService.java (line 2560) Repair session failed:
        java.lang.IllegalArgumentException: Requested range intersects a local range but is not fully contained in one; this would lead to imprecise repair
            at org.apache.cassandra.service.ActiveRepairService.getNeighbors(ActiveRepairService.java:164)
            at org.apache.cassandra.repair.RepairSession.<init>(RepairSession.java:128)
            at org.apache.cassandra.repair.RepairSession.<init>(RepairSession.java:117)
            at org.apache.cassandra.service.ActiveRepairService.submitRepairSession(ActiveRepairService.java:97)
            at org.apache.cassandra.service.StorageService.forceKeyspaceRepair(StorageService.java:2620)
            at org.apache.cassandra.service.StorageService$5.runMayThrow(StorageService.java:2556)
            at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)

    These errors, which occur only while the repair service is running, are the only errors these nodes experience; outside of the repair task, the Cassandra cluster works perfectly. I am running OpsCenter 4.1.2 with a 6-node DSE 4.0.2 cluster installed on Linux virtual machines. The nodes run a vanilla installation of Ubuntu Server 12.04 64-bit, and DSE was installed and secured according to the provided installation documentation. I have been experiencing this problem on my development cluster for a while too (with DSE 4.0.0, 4.0.1 and 4.0.2), but I thought it was caused by some configuration error on my part. The problem appeared spontaneously at some point, too. The Cassandra cluster has been working very smoothly with good write throughput; it is very stable, has enough resources to work with, and we have not noticed any problems with the applications that depend on it.
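
    To check whether the failure is specific to the OpsCenter repair service, one option is to run the same subrange repair by hand. A minimal sketch (tokens copied from the log above; the -st/-et subrange flags may behave differently depending on the Cassandra/DSE version):

        # Repair a single token subrange of one keyspace directly with nodetool, to see
        # whether the "Requested range intersects a local range" error reproduces
        # outside of OpsCenter.
        nodetool -h 10.1.0.22 repair -st -4006140687792135587 -et -4006140687792135586 zs_logging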

    Read the article

  • Locale misconfiguration on Debian

    - by JakeTheFish
    perl -e 'print "Hello\n";'

        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings:
            LANGUAGE = (unset),
            LC_ALL = (unset),
            LC_CTYPE = "UTF-8",
            LANG = "en_US.UTF-8"
            are supported and installed on your system.
        perl: warning: Falling back to the standard locale ("C").
        Hello

    I've tried:

        export LC_ALL=en_US.UTF-8
        export LANGUAGE=en_US.UTF-8

    and it works until I log out. Is there a permanent solution?
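
    A minimal sketch of one common permanent fix on Debian, assuming the en_US.UTF-8 locale simply has not been generated on the machine:

        # Enable and generate the locale, then make it the system-wide default
        # so it survives logout and reboot.
        sudo sed -i 's/^# *en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen
        sudo locale-gen
        sudo update-locale LANG=en_US.UTF-8

    Running "sudo dpkg-reconfigure locales" achieves the same thing interactively.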

    Read the article

  • are you supposed to be able to "ping" specific pages of websites, or just the domain name?

    - by Bec
    (Sorry, I think my jargon is a bit off there; I'm not sure.) I'm trying to work out why my podcasts aren't downloading properly. To see whether it was my pod-catching software or the connection, I tried pinging the podcast URL, e.g. www.abc.net.au/rn/podcast/feeds/ockham.xml, and it failed ("could not find host"). It works for the first part of it, though: www.abc.net.au. I can get to the XML page in a web browser, and ping doesn't work on the podcasts that have been downloading fine, either.
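
    For background, ping only takes a hostname or IP address, not a full URL, so a path like /rn/podcast/feeds/ockham.xml can never be pinged directly; fetching a specific page needs an HTTP client. A minimal sketch:

        # ping resolves and probes the host only
        ping www.abc.net.au

        # an HTTP client is needed to request a specific feed/page
        curl -I http://www.abc.net.au/rn/podcast/feeds/ockham.xml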

    Read the article

  • Apache: getting proxy, rewrite, and SSL to play nice

    - by Rich M
    Hi, I'm having loads of trouble trying to integrate proxy, rewrite, and SSL in Apache 2. A brief history: my application runs on port 8080, and before adding SSL I used the proxy to strip the 8080 from the URLs to and from the server, so instead of www.example.com:8080/myapp the client app accessed everything via www.example.com/myapp. Here is the conf that accomplished this:

        ProxyRequests Off
        <Proxy */myapp>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyPass /myapp http://www.example.com:8080/myapp
        ProxyPassReverse /myapp http://www.example.com:8080/myapp

    What I'm trying to do now is force all requests to myapp to be HTTPS, and then have those SSL requests follow the same proxy rules that strip out the port number, as my application used to. Simply changing the ports from 8080 to 8443 in the ProxyPass lines does not accomplish this. Unfortunately I'm not an expert in Apache, and my skills of trial and error are already reaching the end of the line. Here is what I have now:

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule myapp/* https://%{HTTP_HOST}%{REQUEST_URI}

        ProxyRequests Off
        <Proxy */myapp>
            Order deny,allow
            Allow from all
        </Proxy>
        SSLProxyEngine on
        ProxyPass /myapp https://www.example.com:8443/mloyalty
        ProxyPassReverse /myapp https://www.example.com:8433/mloyalty

    As this stands, a request to anything on the server other than /myapp loads fine over http. If I make a browser http request to /myapp, it redirects to https://www.example.com:8443/myapp, which is not the desired behavior. Links within the application then resolve to https://www.example.com/myapp/linkedPage, which is desirable. Browser requests (http and https) to anything one level beyond just /myapp, i.e. /myapp/mycontext, resolve to https://www.example.com/myapp/mycontext without the port. I'm not sure what other information there is for me to give, but I think my goals should be clear.
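
    A minimal sketch of one common arrangement, assuming the application itself listens for plain HTTP on 8080 and Apache terminates SSL; the split into port-80 and port-443 virtual hosts is an assumption about the layout, not a drop-in config:

        # In the *:80 vhost: redirect /myapp to HTTPS, keeping the request path.
        RewriteEngine On
        RewriteRule ^/myapp(.*)$ https://%{HTTP_HOST}/myapp$1 [R=301,L]

        # In the *:443 vhost: terminate SSL here and proxy to the backend over
        # plain HTTP on 8080, so no port number ever reaches the client.
        SSLEngine on
        ProxyRequests Off
        ProxyPreserveHost On
        ProxyPass        /myapp http://localhost:8080/myapp
        ProxyPassReverse /myapp http://localhost:8080/myapp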

    Read the article

  • Creating a CLR assembly in SQL Server 2005

    - by jangwenyi
    I am getting the following error message when I try to install my .NET assembly into SQL Server 2005. My .NET assembly references the 'ChilkatDotNet2.dll' assembly.

        Msg 6544, Level 16, State 1, Line 1
        CREATE ASSEMBLY for assembly 'myassembly' failed because assembly 'chilkatdotnet2' is malformed or not a pure .NET assembly. Unverifiable PE Header/native stub.

    Any ideas how to resolve or work around this?

    Read the article

  • Undelivered Mail Returned to Sender

    - by Alex
    When sending to [email protected] via the PHP mail() function, I receive mails. When sending emails from external machines, I receive the following bounce (e.g. when sending from [email protected]; mail.ru is a Russian webmail provider):

        This is the mail system at host fallback2.mail.ru.

        I'm sorry to have to inform you that your message could not be delivered to one or more recipients. It's attached below.

        For further assistance, please send mail to <postmaster>

        If you do so, please include this problem report. You can delete your own text from the attached returned message.

                           The mail system

        <[email protected]>: lost connection with mail.mydomain.com[xxx.xxx.xxx.xxx] while receiving the initial server greeting

        Reporting-MTA: dns; fallback2.mail.ru
        X-mPOP-Fallback_MX-Queue-ID: D8C19F2411F1
        X-mPOP-Fallback_MX-Sender: rfc822; [email protected]
        Arrival-Date: Tue, 29 Oct 2013 10:09:21 +0400 (MSK)

        Final-Recipient: rfc822; [email protected]
        Original-Recipient: rfc822;[email protected]
        Action: failed
        Status: 4.4.2
        Diagnostic-Code: X-mPOP-Fallback_MX; lost connection with mail.tld.com[xxx.xxx.xxx.xxx] while receiving the initial server greeting

    Here is my Postfix main.cf:

        command_directory = /usr/sbin
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        myhostname = mail.mydomain.com
        mydomain = mydomain.com
        myorigin = mydomain.com
        inet_interfaces = all
        inet_protocols = all
        unknown_local_recipient_reject_code = 550
        in_flow_delay = 1s
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        mail_name = mydomain.com daemon
        debug_peer_level = 2
        debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5
        sendmail_path = /usr/sbin/sendmail.postfix
        newaliases_path = /usr/bin/newaliases.postfix
        mailq_path = /usr/bin/mailq.postfix
        setgid_group = postdrop
        html_directory = no
        manpage_directory = /usr/share/man
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        bounce_queue_lifetime = 4h
        maximal_queue_lifetime = 4h
        delay_warning_time = 1h
        strict_rfc821_envelopes = yes
        show_user_unknown_table_name = no
        allow_percent_hack = no
        swap_bangpath = no
        smtpd_delay_reject = yes
        smtpd_error_sleep_time = 20
        smtpd_soft_error_limit = 1
        smtpd_hard_error_limit = 3
        smtpd_junk_command_limit = 2
        mydestination = mydomain.com, localhost.localdomain, localhost
        smtpd_client_restrictions = permit_inet_interfaces
        smtpd_recipient_limit = 100
        virtual_alias_domains = mydomain.com
        virtual_alias_maps = hash:/etc/postfix/virtual
        smtpd_sasl_type = dovecot
        smtpd_sasl_path = private/auth
        smtpd_sasl_auth_enable = yes
        smtpd_relay_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination

    Why are emails from external servers not being delivered? Thank you!

    Update: in the log, the following lines appear many times:

        Oct 30 10:48:29 mydomain postfix/smtpd[16216]: connect from fallback5.mail.ru[94.100.176.59]
        Oct 30 10:48:29 mydomain postfix/smtpd[16216]: warning: SASL: Connect to private/auth failed: Connection refused
        Oct 30 10:48:29 mydomain postfix/smtpd[16216]: fatal: no SASL authentication mechanisms

    It appears I have to configure SASL? I would understand if I wanted to send emails through Postfix, but why do I need it to receive emails?
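
    The repeated "SASL: Connect to private/auth failed: Connection refused" followed by "fatal: no SASL authentication mechanisms" suggests smtpd dies before greeting external clients because the Dovecot SASL socket named in main.cf (smtpd_sasl_path = private/auth) does not exist. A minimal sketch of the Dovecot side, assuming Dovecot 2.x and a Postfix queue directory under /var/spool/postfix (file paths are assumptions):

        # /etc/dovecot/conf.d/10-master.conf
        service auth {
          # Socket inside the Postfix queue directory, matching smtpd_sasl_path = private/auth
          unix_listener /var/spool/postfix/private/auth {
            mode = 0660
            user = postfix
            group = postfix
          }
        }

    Alternatively, if SMTP AUTH is not needed at all for receiving mail from other servers, setting smtpd_sasl_auth_enable = no would avoid the fatal error while the SASL setup is sorted out.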

    Read the article

  • has anyone got Riak working on Solaris or OpenSolaris?

    - by Zubair
    Has anyone got Riak working on Solaris or OpenSolaris? When I try to compile it I get:

        user@opensolaris:~/riak# gmake all rel
        ./rebar compile
        /usr/bin/env: No such file or directory
        gmake: *** [compile] Error 127
        user@opensolaris:~/riak# mkdir /usr/bin/env
        mkdir: Failed to make directory "/usr/bin/env"; File exists
        user@opensolaris:~/riak#
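
    For what it's worth, /usr/bin/env is a program rather than a directory (hence "File exists" from mkdir), and this error usually means that whatever the rebar script's shebang asks env to run, typically escript from Erlang, is not on the PATH. A minimal diagnostic sketch (the Erlang install path is an assumption):

        # What interpreter does rebar expect?
        head -1 ./rebar                    # usually: #!/usr/bin/env escript

        # Is Erlang's escript installed and on the PATH?
        which escript || echo "escript not found"

        # If Erlang lives elsewhere, put its bin directory on the PATH and retry
        export PATH=/opt/erlang/bin:$PATH
        gmake all rel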

    Read the article

  • How to check the result of a script with monit?

    - by matnagel
    Is there a way to check the result of a script with monit? For example, if a script returns 0 it is OK, but 1 means it failed. My workaround is to call the script from cron, write the result to a file, and check the file with monit. But I am sure monit can do this more elegantly?
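
    A minimal sketch using Monit's program status testing, which is available in newer Monit releases (the script path is a placeholder):

        check program myscript with path "/usr/local/bin/myscript.sh"
            if status != 0 then alert

    Monit runs the script itself on each cycle and alerts when the exit status is non-zero, so no cron job or intermediate result file is needed.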

    Read the article

  • Outlook Calendar Attachments to have limited access to just Required attendees

    - by Jason Pearce
    The management team at my company often attaches documents (Word, Excel, PDFs) to their Outlook Calendar meeting requests. The meeting requests are sent to the managers, but also to their assistants. The desire is to have everyone be able to view the full meeting request and its content, but to limit the ability to open the attachments to just the managers. Is there a way in Outlook 2003 and/or 2007 to limit access to attachments that accompany meeting requests? Ideally, can access to the attachments be controlled by the "Select Attendees and Resources" window when selecting individuals from the Global Address List? Can those in the Required field have access to the attachments while those in the Optional or Resources fields do not? My suggestion was to simply place all meeting attachments in a shared network folder with read/write access limited to managers; they would then place fully qualified links to those files in the body of the meeting request. While everyone would receive and see the links, only a few would have access. This, however, wasn't easy enough for them, so I'm looking for some other ideas.

    Read the article

  • KVM-Guest does not boot: qemudParsePCIDeviceStrs

    - by markus
    I have a server running Ubuntu 10.10 Server Edition with KVM and libvirt (both the Ubuntu-native packages). Disk partitioning was done with LVM. I then created some VMs with virt-manager and assigned LVM volumes to them. Now the VMs do not boot: virt-manager shows 100% CPU usage for the guest, and the VNC connection just states "Booting from Hard Disk". The VM-specific log files do not show any abnormality; only syslog shows a warning:

        warning : qemudParsePCIDeviceStrs:1422 : Unexpected exit status '1', qemu probably failed

    What can I do to find the error?
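
    A minimal diagnostic sketch (the guest name and LVM volume path are placeholders):

        # How does libvirt define the guest, and which disk does it boot from?
        virsh dumpxml myguest | grep -A3 "<disk"

        # Watch the guest's own QEMU log while starting it
        virsh start myguest
        tail -f /var/log/libvirt/qemu/myguest.log

        # Check that the assigned LVM volume actually contains a partition
        # table / bootable image rather than being empty
        fdisk -l /dev/vg0/myguest-disk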

    Read the article

  • REMOTE_USER not getting set?

    - by landed
    I am trying to set up LDAP authentication in Joomla using a plugin called JMapMyLDAP (in fact four plugins, each doing a different job). I need to pull part of a string out of the server variable REMOTE_USER, which should be visible in phpinfo() (see http://timplummer.com.au/4-how-to-integrate-joomla-3-with-active-directory-using-ldap.html). The issue is that REMOTE_USER is not set, or at least not appearing.

    A few things to note, if you don't mind: conceptually I am not really understanding authentication as a whole; it appears to be a vast subject despite my years working on websites. Yes, I have used ASP and built PHP pages to check that a user is who they say they are, with a token (or session?) given just to them, so they are identified when a stateless request is made to the server. That's my level of understanding. This sounds different from basic authentication in Apache, where a username and password sit in a file and the user has to log in through a basic prompt to get access to the folder/documents, configured via an .htaccess file.

    OK, so for the LDAP setup to work I need REMOTE_USER, which sounds very reasonable, as how else do we know who is making the request? Thank you.
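
    For what it's worth, Apache only sets REMOTE_USER when it performs the authentication itself (Basic, Digest, Kerberos, NTLM/SSPI and so on); an anonymous request never carries it. A minimal sketch using Basic authentication against LDAP via mod_authnz_ldap (the directory path and LDAP URL are assumptions):

        <Directory /var/www/joomla>
            AuthType Basic
            AuthName "Intranet"
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://ldap.example.com/ou=People,dc=example,dc=com?uid"
            Require valid-user
        </Directory>

    Once Apache authenticates the request this way, REMOTE_USER appears in phpinfo() and in PHP's $_SERVER array.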

    Read the article

  • Failing to connect; my fault?

    - by Majid
    A client has supplied the following info (fake values used here) for connecting to his host and doing my stuff. I have tried connecting from several locations and failed: the connection times out. Am I doing something wrong, or is he forgetting some crucial info?

        IP: 216.x.y.z
        MySQL password: password1
        User: user1
        Password: password2
        SSH port: 49226
        ==========
        root password: password1
        Hostname: host.com
        Username: user2
        Password: password3
        FTP port: 201
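
    A minimal sketch for narrowing down where the timeout happens, using the values above (-v makes ssh show exactly where it stalls):

        # Is the non-standard SSH port reachable at all?
        nc -vz 216.x.y.z 49226

        # Verbose SSH attempt on that port
        ssh -v -p 49226 user2@216.x.y.z

        # FTP on the non-standard port 201, if SSH turns out to be fine
        ftp 216.x.y.z 201

    If even the nc probe times out, the problem is more likely a firewall or a wrong port/IP than the credentials.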

    Read the article

  • SVN - Server sent unexpected return value (500 Internal Server Error)

    - by person
    I'm using RabbitVCS to work with Google Code, and I just recently started having problems when trying to commit. Whenever I try to commit, it says:

        Commit failed
        Server sent unexpected return value (500 Internal Server Error) in response to Checkout request for (some file involved in the commit; the file it fails on isn't consistent)

    I have no idea what is wrong. Any help is appreciated, thanks.
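
    A minimal sketch for ruling out the RabbitVCS layer, assuming the command-line Subversion client is installed (the working-copy path is a placeholder):

        cd ~/projects/mycheckout

        # Clear any stale locks left by a previously interrupted operation
        svn cleanup

        # Retry the same commit from the command line to see the raw server response
        svn commit -m "test commit after 500 error"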

    Read the article

  • lenovo thinkpad sl500 fn keys & hdd protection on ubuntu

    - by Infestor
    (I use Ubuntu 10.04 64-bit.) I can't get most of my Fn keys to work on this laptop. I especially need Fn+F8, which switches between the TrackPoint and the touchpad. What I tried:

        sudo nano /etc/modules      (added lenovo-sl-laptop to the file)
        sudo modprobe lenovo-sl-laptop

    This failed with:

        FATAL: Module lenovo_sl_laptop not found.

    As for hdapsd, I don't have it installed, since I don't know how to configure it (I guess it helps with HDD protection).
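
    A minimal sketch for checking what is actually available on a stock 10.04 kernel (package availability in the enabled repositories is an assumption):

        # Is any lenovo module shipped with the running kernel? lenovo-sl-laptop
        # is an out-of-tree module, so this will likely print nothing.
        find /lib/modules/$(uname -r) -name 'lenovo*'

        # hdapsd (the active HDD protection daemon) is normally installable
        # from the universe repository
        sudo apt-get install hdapsd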

    Read the article

  • Exchange 2010 Discovery Search Fails

    - by ITGuy24
    Whenever I run an Exchange 2010 SP1 discovery search, I get the following error:

        Search failed as the results link to the target mailbox '[email protected]' couldn't be generated.

    I have checked that the discovery mailbox is enabled, and I also created a new discovery mailbox; I get the same error with both mailboxes. The user account I am using to run the search is a member of the "Discovery Management" security group, and I get the same error whether I use the Shell or the ECP to run the search.

    Read the article

  • glassfish timeout

    - by Stefano
    Environment:

        Windows 2008 Server Edition
        NetBeans 6.7.1
        GlassFish 2.1
        Apache 2.2.15 for win32

    Original problem (almost fixed): the HTTP/1.1 GET method used to send data fails if I wait for more than 30 seconds.

    What I did: I added these lines to Apache's httpd.conf:

        #
        # Timeout: The number of seconds before receives and sends time out.
        #
        Timeout 9000
        #
        # KeepAlive: Whether or not to allow persistent connections (more than
        # one request per connection). Set to "Off" to deactivate.
        #
        KeepAlive On

    I then went to the GlassFish admin panel (localhost:4848), Configuration > HTTP Service, and set:

        Request timeout: 9000 seconds (it was 30)
        Standby time: -1 (it was 30 seconds)

    Problem: I am not able to set a GlassFish timeout bigger than 2 minutes for a GET request. I found this article about GlassFish settings, but I'm not able to find where I should put those parameters, or whether they would work. Can anybody help me set this timeout to a higher limit? Maybe it's even a different setting?

    Newly tried solution: I went to the GlassFish control panel, Configuration > Subprocesses > "Thread-pool-name", and changed the idle timeout from 120 seconds to 1200 seconds. Then I restarted the GlassFish service (both from Administrative Tools and from asadmin), but it still goes idle after 120 seconds. I even tried restarting the whole server, still with no results. Maybe it's some setting in Postgres? Or the connection from NetBeans to Postgres through GlassFish?
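
    A minimal sketch for locating and changing the relevant timeout from the command line with asadmin, run from the GlassFish bin directory on the Windows host (the exact dotted attribute names are assumptions; verify them with the get command first):

        rem List HTTP-service attributes that mention a timeout
        asadmin get "server.http-service.*" | findstr /i timeout

        rem Example of raising one of them (use whichever name the get command printed)
        asadmin set server.http-service.request-processing.request-timeout-in-seconds=1200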

    Read the article

  • New hard drives failing within weeks

    - by Jason Kealey
    I've experienced 8 hard disk failures in 3 months and have tried many things to solve the issue permanently, but I have failed. I would like to know if you have any advice for me.

    The system was running Windows XP on an Asus P5W-DH Deluxe, with a RAID-1 array. I started out with 2 x 500 GB 7200 RPM Western Digital drives. One died, so I took it out to RMA it. On the same day, the router was fried; I assumed a power surge had occurred and connected an older UPS to protect the system. Once I got my hands on an identical disk, I installed it and the RAID array was rebuilt. A few days later, the other drive died; I assumed the rebuild had caused it to fail and took it out for RMA. Before the replacement arrived, the remaining one died. I then discovered I could re-enable the drives using the Intel Matrix Storage Manager. I re-enabled both and the system seemed fine for a week, until both died again.

    I got two new 1.5 TB 7200 RPM Seagate drives and re-installed Windows 7. I also replaced the UPS and the power supply. Both drives died again. The voltage at the plug is stable between 120 and 122 V according to the UPS. None of the other devices have had any problems (monitors, etc.).

    At this point, I see two options: (a) an electrical issue in the house that was, for some reason, not blocked by the UPS; (b) something else inside the system causing surges, such as the motherboard or the onboard RAID controller. Failures happen fairly quickly, between 2 and 14 days after I fix the previous issue. I have just gotten a new computer (Core i7) to replace it: if it is stable, I can conclude that (b) was the problem; if it fries its hard drive again, I can conclude that it is an electrical issue in the house.

    Do you have any other thoughts? Are there any tools I can run on the drives that failed to get more information about the original SMART event history?
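
    A minimal sketch using smartmontools to pull the SMART error and self-test history off a failed but still detectable drive (the device name is a placeholder):

        # Full SMART report: attributes, error log, self-test log
        smartctl -a /dev/sdb

        # Just the logged SMART errors, with power-on-hours timestamps
        smartctl -l error /dev/sdb

        # Kick off a long self-test, then read the result later
        smartctl -t long /dev/sdb
        smartctl -l selftest /dev/sdb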

    Read the article

  • SQL Connection String to access localhost\SQLEXPRESS

    - by user34683
    I've installed SQL Server Express on my PC, hoping to do some practice creating tables and then modifying them. I coded a web page in Visual Studio to, basically, SELECT * from a table in the SQLEXPRESS instance, but I can never get the connection string to work. Please help.

    My connection string:

        "Data Source=localhost\SQLEXPRESS;Initial Catalog=test;User Id=xaa9-PC\xaa9;Password=abcd;"

    The query is:

        select * from tblCustomers where username='johndoe'

    The error is:

        Login failed for user 'xaa9-PC\xaa9'.
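
    For what it's worth, 'xaa9-PC\xaa9' is a Windows account name, so the error suggests the instance is either not using the User Id/Password pair or only allows Windows authentication. A minimal sketch of the two usual connection-string forms (the catalog name comes from the question; the SQL login is hypothetical):

        // Windows (integrated) authentication - no user or password in the string
        "Data Source=localhost\SQLEXPRESS;Initial Catalog=test;Integrated Security=True;"

        // SQL authentication - requires mixed-mode auth enabled and a SQL login created
        "Data Source=localhost\SQLEXPRESS;Initial Catalog=test;User Id=mySqlLogin;Password=abcd;"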

    Read the article

  • Why would a process monitoring script use exit 1; on finding no problems?

    - by user568458
    General question: on a Linux (CentOS) server, if a process monitoring script run by cron is set to close with exit 1; rather than exit 0; on finding that everything is okay and that no action is needed, is that a mistake? Or are there legitimate reasons for calling exit 1; instead of exit 0; on the "everything's fine, no action needed" condition? exit 0; on finding no problems seems to me to be more appropriate. But maybe there's something I'm not aware of. For example, maybe there's something specific to cron? Or maybe there's a convention in process monitoring scripts that 'failure' means 'this script failed to need to fix a problem' (rather than what I would expect, which is that exit 1; would mean 'the process being monitored has failed')?

    My specific case: I'm looking at a process monitoring script written by my web hosting company. By process monitoring script, I mean a script executed by cron on a regular basis that checks if an important system process is running and, if it isn't, takes actions such as mailing an administrator or restarting the process. Here's the (generalised) structure of their script, for a service running on port 8080 (in this case, Apache Tomcat):

        SERVICE=$(/usr/sbin/lsof -i tcp:8080 | wc -l);
        if [ $SERVICE != 0 ]; then
            exit 1;
        else
            #take action
        fi

    It seems simple enough even for someone with limited knowledge like me, except the exit 1; part seems odd. As I understand it, exit 0; closes a program and signifies to the parent that executed it that everything is fine, while exit n; where n > 0 and n < 127 signifies that there has been some kind of error or problem. Here, their script seems to go against that rule: it calls exit 1; in the condition where everything is fine, and doesn't exit after taking remedial action in the problem condition. To me, this looks like a mistake, but my experience in this area is limited. Are there cases where calling exit 1; in the "everything's fine, no action needed" condition is more appropriate than calling exit 0;? Or is it a mistake?

    The wider context is pretty simple. It's a CentOS VPS running Plesk. The script is being called by cron via Plesk's "Scheduled tasks" cron manager. There's no custom layer between cron and this script that would respond in an unusual way to the exit call. It's a fairly average, almost out-of-the-box Plesk-managed CentOS VPS (in so far as there is such a thing). The process being monitored by this script is Apache Tomcat.
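
    For comparison, a minimal sketch of the same check written with the conventional exit-code meaning (0 = healthy, non-zero = problem); the alert action is a placeholder:

        #!/bin/bash
        # Count lsof output lines for listeners/connections on TCP port 8080
        SERVICE=$(/usr/sbin/lsof -i tcp:8080 | wc -l)

        if [ "$SERVICE" -ne 0 ]; then
            # Tomcat is up: report success to the parent (cron)
            exit 0
        else
            # Tomcat is down: take remedial action, then signal failure
            echo "Tomcat appears down on port 8080" | mail -s "Tomcat alert" root
            exit 1
        fi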

    Read the article

  • how to remove vista service pack 1 information

    - by n00b32
    I had a failed SP1 install. Now I'm stuck with the system trying (and failing) to finalize the installation at every boot, yet after logging in the system thinks it is SP1: the SP1 uninstaller says it can't uninstall, the SP1 installer says it's already installed, and the SP2 installer says to install SP1 first. Is there a way to remove the SP1 information, fool the system into thinking it doesn't have the service pack, and install it again? I REALLY don't want to reinstall Windows; that would suck so badly that I'd rather stick with this pre-SP1 relic.

    Read the article

  • Nginx and Wordpress side-by-side with static directory alias?

    - by user117161
    I'm a Nginx novice, but I have it set up with WordPress Multisite (subdirectories) and php-fpm, and it's working great as is. This lets me set up WordPress sites off the web root:

        domain.com/site1 - a WordPress network single site, which renders as expected
        domain.com/site2 - ditto
        etc.

    Concurrently, I can easily create static files in the web root that don't conflict or interact with WordPress, and they are also rendered normally:

        domain.com/hello.html - rendered normally
        domain.com/hello.php - rendered normally, including PHP processing
        domain.com/static/hello.php - rendered normally (as long as "static" isn't a WordPress single-site name)

    What I'd like to do, and this is where I'm out of my depth with nginx.conf, is create a root directory domain.com/static and put static sites in there:

        domain.com/static/site3
        domain.com/static/site4

    and have Nginx check each request that comes into the root. Say a request comes in for domain.com/site3: before handing off to WordPress, Nginx checks whether it exists in the /static folder (domain.com/static/site3). If static content exists there, it serves that content while maintaining the root URI, i.e. it serves domain.com/site3 with the content from domain.com/static/site3. If not, it lets WordPress check whether /site3 is a WordPress single network site, as it does now, and the process continues normally.

    In nginx.conf, in the server section, I start with this try_files rule:

        location / {
            try_files $uri $uri/ /index.php?q=$uri&$args;
        }

    I then include a bunch of WordPress-specific rules as identified at http://codex.wordpress.org/Nginx under the subdirectory section. I can see that rewrite rules might take care of it easily, but in my experimentation I've only achieved a bunch of looping (/static/static/static, etc.) or managed to bypass WordPress when the looping stopped.

    Sorry if this is a very long-winded way of asking a simple question, but I'm definitely learning some of this stuff for the first time. Thanks!
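
    A minimal sketch of one way to express that lookup order with try_files alone, assuming /static sits directly under the site root (untested, so a starting point rather than a finished config):

        location / {
            # 1) look for the request under /static (file, then a directory index),
            # 2) then at the literal path,
            # 3) finally fall through to WordPress.
            try_files /static$uri /static$uri/index.html $uri $uri/ /index.php?q=$uri&$args;
        }

    Because the /static prefix is only ever added inside try_files and never by a rewrite, requests cannot loop into /static/static/....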

    Read the article
