Search Results

Search found 636 results on 26 pages for 'retry'.


  • Nginx Retry of Requests ( Nginx - Haproxy Combination )

    - by vaibhav
    I wanted to ask about Nginx retry of requests. I have Nginx running at the backend, which sends the requests to HAProxy, which then passes them on to the web server where the request is processed. I am reloading my HAProxy config dynamically to provide elasticity. The problem is that requests are dropped when I reload HAProxy, so I wanted a way to just retry those requests from Nginx. I looked through proxy_connect_timeout and proxy_next_upstream in the http module, and max_fails and fail_timeout in the server module. I initially had only one server in the upstream block, so I just put it in twice, and fewer requests are getting dropped (only when I have, say, the same server twice in the upstream; if I have the same server 3-4 times, drops increase). So, firstly, I wanted to know: when a request is not able to establish a connection from Nginx to HAProxy while it is reloading, it seems that the connection is seen as an error and the request is dropped straight away. How can I either specify the time after the failure at which I want to retry the request from Nginx to the upstream, or the time before which Nginx treats it as a failed request? (I have tried increasing proxy_connect_timeout - didn't help - as well as mail_retires, fail_timeout, and putting the same upstream server twice; that last one gave the best results so far.) Nginx conf file:

        upstream gae_sleep {
          server 128.111.55.219:10000;
        }
        server {
          listen 8080;
          server_name 128.111.55.219;
          root /var/apps/sleep/app;
          # Uncomment these lines to enable logging, and comment out the following two
          #access_log /var/log/nginx/sleep.access.log upstream;
          error_log /var/log/nginx/sleep.error.log;
          access_log off;
          #error_log /dev/null crit;
          rewrite_log off;
          error_page 404 = /404.html;
          set $cache_dir /var/apps/sleep/cache;
          location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://gae_sleep;
            client_max_body_size 2G;
            proxy_connect_timeout 30;
            client_body_timeout 30;
            proxy_read_timeout 30;
          }
          location /404.html {
            root /var/apps/sleep;
          }
          location /reserved-channel-appscale-path {
            proxy_buffering off;
            tcp_nodelay on;
            keepalive_timeout 55;
            proxy_pass http://128.111.55.219:5280/http-bind;
          }
        }
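
    For reference, a minimal sketch (not taken from the article) of the two knobs that usually control this in Nginx: proxy_next_upstream decides which failures may be passed to another upstream entry, and max_fails/fail_timeout decide how long a failed entry is kept out of rotation before Nginx tries it again. The second server line and its address are an assumption, standing in for a second HAProxy instance; reloading them one at a time would give Nginx somewhere to retry to.

        upstream gae_sleep {
          # two HAProxy endpoints (second address is illustrative); a connect error
          # marks an entry failed for fail_timeout seconds and the other is tried
          server 128.111.55.219:10000 max_fails=1 fail_timeout=5s;
          server 128.111.55.220:10000 max_fails=1 fail_timeout=5s;
        }

        location / {
          proxy_pass http://gae_sleep;
          # retry the request on another upstream entry on connect errors/timeouts
          proxy_next_upstream error timeout;
          proxy_connect_timeout 5;
        }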

    Read the article

  • Is it possible to disable the retry mechanism in Exim

    - by Tony Meyer
    I have a very simple Exim configuration that's just forwarding all mail to a set of destination addresses. When immediate delivery to an address fails, the message is added to the queue (and then processed by the retry rules). I want to change this so that if immediate delivery fails, the message is :blackhole:d. (It's ok if a bounce is generated instead, as I'll just redirect the bounce to the :blackhole:). This needs to occur for temporary failures (i.e. 4xx) as well as permanent (i.e. 5xx) ones. I understand that this means that if delivery can't be done immediately the message will be permanently and irretrievably lost. In this particular context, that isn't a problem. Reading over this, it sounds suspiciously like "how can I improve my spamming Exim server". That really isn't what this is for, and if you can figure out a way I can prove that, I'm happy to do so!
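
    A hedged sketch of how this is often approached (worth checking against the Exim spec for your version before relying on it): Exim's documented behaviour when no retry rule matches an address is to fail it immediately rather than queue it, so an empty retry section turns temporary failures into immediate bounces, and a redirect router matching the empty sender can then discard those bounces.

        begin routers

        # bounce messages have an empty sender; discard them
        discard_bounces:
          driver = redirect
          senders = :
          data = :blackhole:

        # ... the existing forwarding routers follow here ...

        begin retry

        # intentionally empty: with no matching retry rule, a temporary
        # delivery failure is treated as permanent and bounces at once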

    Read the article

  • Why am I getting "error:1409F07F:SSL routines:SSL3_WRITE_PENDING: bad write retry" error while attem

    - by Amit Ben Shahar
    I came across this error, and had to look in a lot of places until I was able to find the reason for it, and thought this might save someone else the hassle. The reason is pretty simple: when SSL_write returns with SSL_ERROR_WANT_WRITE or SSL_ERROR_WANT_READ, you have to repeat the call to SSL_write with the same parameters again, after the condition is satisfied. Calling it with different parameters will yield the 1409F07F bad write retry error. Hope this saved someone's important time :P Amit.

    Read the article

  • Can a language support something like "Retry/Fix"?

    - by Aaron Anodide
    I was just wondering if a language could support something like a Retry/Fix block? The answer to this question is probably the reason it's a bad idea or equivalent to something else, but the idea keeps popping into my head. void F() { try { G(); } fix(WrongNumber wn, out int x) { x = 1; } } void G() { int x = 0; retry<int> { if(x != 1) throw new WrongNumber(x); } } After the fix block ran, the retry block would run again...
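
    The "equivalent to something else" it usually reduces to is a plain loop: the retry body goes in a try, the fix logic goes in the catch, and the loop re-runs the body until it succeeds. A minimal C# sketch of that translation (my own illustration, not from the question):

        using System;

        class WrongNumber : Exception
        {
            public int Number { get; private set; }
            public WrongNumber(int number) { Number = number; }
        }

        static class RetryFixDemo
        {
            static void Main()
            {
                int x = 0;
                while (true)
                {
                    try
                    {
                        // the "retry" block
                        if (x != 1) throw new WrongNumber(x);
                        break;                       // success: leave the loop
                    }
                    catch (WrongNumber wn)
                    {
                        // the "fix" block: repair the state, then the loop retries
                        Console.WriteLine("Got " + wn.Number + ", fixing to 1");
                        x = 1;
                    }
                }
            }
        }

    Unlike the proposed syntax, nothing stops the catch from failing to actually fix the state, in which case this loops forever - which is presumably part of what a first-class retry/fix construct would have to address.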

    Read the article

  • Unity3D : Retry Menu (Scene Management)

    - by user3666251
    I'm making a simple 2D game for Android using the Unity3D game engine. I created all the levels and everything but I'm stuck at making the game over/retry menu. So far I've been using new scenes as a game over menu. I used this simple script: #pragma strict var level = Application.LoadLevel; function OnCollisionEnter(Collision : Collision) { if(Collision.collider.tag == "Player") { Application.LoadLevel("GameOver"); } } And this as a 'menu': #pragma strict var myGUISkin : GUISkin; var btnTexture : Texture; function OnGUI() { GUI.skin = myGUISkin; if (GUI.Button(Rect(Screen.width/2-60,Screen.height/2+30,100,40),"Retry")) Application.LoadLevel("Easy1"); if (GUI.Button(Rect(Screen.width/2-90,Screen.height/2+100,170,40),"Main Menu")) Application.LoadLevel("MainMenu"); } The problem is that I have to create over 200 game over scenes, obstacles (the objects that kill the player) and recreate the same script over 200 times, once for each level. Is there any other way to make this faster and less painful?
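
    One way around the 200-scenes problem, as a hedged C# sketch using the same legacy Application.LoadLevel API as the question (the scene names "GameOver", "Easy1" and "MainMenu" are taken from the question; everything else is an assumption): keep a single GameOver scene, and have the one obstacle script remember which level it was in, so the Retry button reloads whatever level the player died in.

        using UnityEngine;

        // In Unity these two classes would live in separate .cs files named after them.

        // Attach this one script to every obstacle, in every level.
        public class KillZone : MonoBehaviour
        {
            void OnCollisionEnter(Collision collision)
            {
                if (collision.collider.tag == "Player")
                {
                    // Remember which level we were in before switching scenes.
                    PlayerPrefs.SetString("LastLevel", Application.loadedLevelName);
                    Application.LoadLevel("GameOver");
                }
            }
        }

        // Attach this to an empty GameObject in the single GameOver scene.
        public class GameOverMenu : MonoBehaviour
        {
            void OnGUI()
            {
                if (GUI.Button(new Rect(Screen.width / 2 - 60, Screen.height / 2 + 30, 100, 40), "Retry"))
                    Application.LoadLevel(PlayerPrefs.GetString("LastLevel", "Easy1"));

                if (GUI.Button(new Rect(Screen.width / 2 - 90, Screen.height / 2 + 100, 170, 40), "Main Menu"))
                    Application.LoadLevel("MainMenu");
            }
        }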

    Read the article

  • Unity3D Android : Game Over/Retry

    - by user3666251
    I'm making a simple 2D game for Android using the Unity3D game engine. I created all the levels and everything, but I'm stuck at making the game over/retry menu. So far I've been using new scenes as a game over menu. I used this simple script: #pragma strict var level = Application.LoadLevel; function OnCollisionEnter(Collision : Collision) { if(Collision.collider.tag == "Player") { Application.LoadLevel("GameOver"); } } And this as a 'menu': #pragma strict var myGUISkin : GUISkin; var btnTexture : Texture; function OnGUI() { GUI.skin = myGUISkin; if (GUI.Button(Rect(Screen.width/2-60,Screen.height/2+30,100,40),"Retry")) Application.LoadLevel("Easy1"); if (GUI.Button(Rect(Screen.width/2-90,Screen.height/2+100,170,40),"Main Menu")) Application.LoadLevel("MainMenu"); } The problem is that I have to create over 200 game over scenes, obstacles (the objects that kill the player) and recreate the same script over 200 times, once for each level. Is there any other way to make this faster and less painful? I've been searching the web but didn't find anything useful for my issue. Thank you.

    Read the article

  • Force delivery retry without restarting the SMTP Service on Windows Server 2008 R2

    - by Mathias R. Jessen
    I have a Windows Server 2008 R2 box hosting 3 virtual SMTP servers: vSMTP01, vSMTP02 and vSMTP03. The first two are configured to deliver all messages to dedicated smarthosts, while the last is set to just deliver the messages on its own. All other delivery settings are at their defaults.

                                            ----(vSMTP01)-----> {SMARTHST01}
                                           /
        ----Inbound mail--->---SMTPSRV01---[----(vSMTP02)-----> {SMARTHST02}
                                           \
                                            ----(vSMTP03)-----> { Internet }

    Now I want to take SMARTHST01 out for maintenance, but I don't want to reject submissions to vSMTP01 while doing so, so I just let it continue running. When SMARTHST01 is no longer responding, vSMTP01 queues the messages and waits for the first retry interval to pass (15 minutes). So far so good. Let's say SMARTHST01 gets online again after 20 minutes. The first interval has passed, and I'll have to wait another 25 minutes for the second retry interval to pass. If I stop and start the SMTP Service (Services.msc - Simple Mail Transfer Protocol service - Stop), the server will retry all deliveries, but that would cause a service interruption for ALL virtual SMTP servers on the machine, which is highly undesirable. How can I manually force vSMTP01 to retry delivery of all queued messages without interrupting the service of vSMTP02 and vSMTP03?

    Read the article

  • Getting NFS clients to retry mount if NFS server down when client boots

    - by z0mbix
    I have an NFS server that several clients mount. I am using the following in my /etc/exports on the server: /content *(rw,no_root_squash) and on the clients in my /etc/fstab I have: content.prd.domain.tld:/content /content nfs rw,hard,intr 0 0 If the clients boot while the NFS server is down, the share does not get mounted. I read in the NFS man page that the retry defaults should handle this: retry=n The number of minutes to retry an NFS mount operation in the foreground or background before giving up. The default value for forground mounts is 2 minutes. The default value for background mounts is 10000 minutes, which is roughly one week. I have tested this, but it doesn't appear to work. Am I missing something? All servers are RHEL 5.4. Cheers z0mbix
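
    One thing the man page excerpt glosses over: the week-long retry default only applies to background mounts, and at boot the mount is attempted in the foreground unless you ask otherwise. A hedged sketch of the same fstab line with the bg option added, so a failed first attempt is retried in the background instead of giving up after the roughly 2-minute foreground default:

        content.prd.domain.tld:/content  /content  nfs  rw,hard,intr,bg  0  0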

    Read the article

  • EF 4.0 : Save Changes Retry Logic

    - by BGR
    Hi, I would like to implement an application-wide retry system for all entity SaveChanges method calls. Technologies: Entity Framework 4.0, .NET 4.0.

        namespace Sample.Data.Store.Entities
        {
            public partial class StoreDB
            {
                public override int SaveChanges(System.Data.Objects.SaveOptions options)
                {
                    for (Int32 attempt = 1; ; )
                    {
                        try
                        {
                            return base.SaveChanges(options);
                        }
                        catch (SqlException sqlException)
                        {
                            // Increment tries
                            attempt++;

                            // Find maximum tries
                            Int32 maxRetryCount = 5;

                            // Throw error if we have reached the maximum number of retries
                            if (attempt == maxRetryCount)
                                throw;

                            // Determine if we should retry or abort.
                            if (!RetryLitmus(sqlException))
                                throw;
                            else
                                Thread.Sleep(ConnectionRetryWaitSeconds(attempt));
                        }
                    }
                }

                static Int32 ConnectionRetryWaitSeconds(Int32 attempt)
                {
                    Int32 connectionRetryWaitSeconds = 2000;

                    // Backoff throttling
                    connectionRetryWaitSeconds = connectionRetryWaitSeconds *
                        (Int32)Math.Pow(2, attempt);

                    return (connectionRetryWaitSeconds);
                }

                /// <summary>
                /// Determine from the exception if the execution
                /// of the connection should be attempted again
                /// </summary>
                /// <param name="sqlException">SQL exception</param>
                /// <returns>True if a retry is needed, false if not</returns>
                static Boolean RetryLitmus(SqlException sqlException)
                {
                    switch (sqlException.Number)
                    {
                        // The service has encountered an error
                        // processing your request. Please try again.
                        // Error code %d.
                        case 40197:
                        // The service is currently busy. Retry
                        // the request after 10 seconds. Code: %d.
                        case 40501:
                        // A transport-level error has occurred when
                        // receiving results from the server. (provider:
                        // TCP Provider, error: 0 - An established connection
                        // was aborted by the software in your host machine.)
                        case 10053:
                            return (true);
                    }

                    return (false);
                }
            }
        }

    The problem: How can I run StoreDB.SaveChanges so it retries on a new DB context after an error occurred? Something similar to Detach/Attach might come in handy. Thanks in advance! Bart
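
    On the "new DB context" part of the question, one common approach, sketched here under assumptions rather than as a drop-in fix: move the retry loop out of the SaveChanges override and wrap the whole unit of work instead, constructing a fresh StoreDB on every attempt so a failed attempt never reuses a context whose state may be suspect. It assumes StoreDB has a parameterless constructor, that the SaveChanges override above is removed, and that the RetryLitmus/ConnectionRetryWaitSeconds helpers from the question are made internal static so they can be reused.

        using System;
        using System.Data.SqlClient;
        using System.Threading;
        using Sample.Data.Store.Entities;

        public static class RetryingUnitOfWork
        {
            // Runs the same unit of work against a brand-new StoreDB on every attempt.
            public static int Execute(Action<StoreDB> applyChanges, int maxRetryCount = 5)
            {
                for (Int32 attempt = 1; ; attempt++)
                {
                    using (StoreDB db = new StoreDB())
                    {
                        try
                        {
                            applyChanges(db);          // re-apply the changes to the fresh context
                            return db.SaveChanges();
                        }
                        catch (SqlException sqlException)
                        {
                            if (attempt >= maxRetryCount || !StoreDB.RetryLitmus(sqlException))
                                throw;

                            Thread.Sleep(StoreDB.ConnectionRetryWaitSeconds(attempt));
                        }
                    }
                }
            }
        }

        // usage (hypothetical entity set / entity names):
        // RetryingUnitOfWork.Execute(db => db.Orders.AddObject(newOrder));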

    Read the article

  • Rails app complaining can't connect to memcached but I'm pretty sure it's running

    - by centipedefarmer
    All was well, then I rebooted the server. Right now: $ ps aux | grep memcache 1000 27168 0.0 0.0 121972 1056 pts/0 Sl 15:18 0:00 memcached -m 64 -p 11211 -u nobody -l 127.0.0.1 1000 27816 0.0 0.0 7628 956 pts/0 S+ 15:36 0:00 grep memcache meanwhile the rails app's log is getting tons of this: MemCacheError (No connection to server (localhost:11211 DEAD (Timeout::Error: execution expired), will retry at Tue Feb 15 15:35:55 -0600 2011)): No connection to server (localhost:11211 DEAD (Timeout::Error: execution expired), will retry at Tue Feb 15 15:35:55 -0600 2011) MemCacheError (No connection to server (localhost:11211 DEAD (Timeout::Error: execution expired), will retry at Tue Feb 15 15:35:55 -0600 2011)): No connection to server (localhost:11211 DEAD (Timeout::Error: execution expired), will retry at Tue Feb 15 15:35:55 -0600 2011) MemCacheError (No connection to server (localhost:11211 DEAD (Timeout::Error: execution expired), will retry at Tue Feb 15 15:35:55 -0600 2011)): No connection to server (localhost:11211 DEAD (Timeout::Error: execution expired), will retry at Tue Feb 15 15:35:55 -0600 2011) MemCacheError (No connection to server (localhost:11211 DEAD (Timeout::Error: execution expired), will retry at Tue Feb 15 15:35:55 -0600 2011)): No connection to server (localhost:11211 DEAD (Timeout::Error: execution expired), will retry at Tue Feb 15 15:35:55 -0600 2011) MemCacheError (No connection to server (localhost:11211 DEAD (Timeout::Error: execution expired), will retry at Tue Feb 15 15:35:56 -0600 2011)): No connection to server (localhost:11211 DEAD (Timeout::Error: execution expired), will retry at Tue Feb 15 15:35:56 -0600 2011) MemCacheError (No connection to server (localhost:11211 DEAD (Timeout::Error: execution expired), will retry at Tue Feb 15 15:35:56 -0600 2011)): No connection to server (localhost:11211 DEAD (Timeout::Error: execution expired), will retry at Tue Feb 15 15:35:56 -0600 2011) MemCacheError (No connection to server (localhost:11211 DEAD (Timeout::Error: execution expired), will retry at Tue Feb 15 15:35:56 -0600 2011)): No connection to server (localhost:11211 DEAD (Timeout::Error: execution expired), will retry at Tue Feb 15 15:35:56 -0600 2011) MemCacheError (No connection to server (localhost:11211 DEAD (Timeout::Error: execution expired), will retry at Tue Feb 15 15:35:56 -0600 2011)): No connection to server (localhost:11211 DEAD (Timeout::Error: execution expired), will retry at Tue Feb 15 15:35:56 -0600 2011) Being that I'm more of a developer than a server guy, and being that we don't really have a "server guy," and this being in production... where do I start with this?
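
    Two quick checks that can narrow this down (a diagnostic sketch, assuming netcat and net-tools are available, not something from the article). Since memcached was started with -l 127.0.0.1 it only listens on IPv4 loopback, so it is worth confirming the Rails client is really talking to 127.0.0.1:11211 and not, say, resolving localhost to ::1:

        # ask the daemon for its stats over the same address/port the app uses
        printf 'stats\r\nquit\r\n' | nc 127.0.0.1 11211

        # confirm which address/port memcached is actually bound to
        sudo netstat -tlnp | grep 11211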

    Read the article

  • Retry failed downloads - Google Music Manager

    - by Severin
    After my old computer died, I wanted to download my music collection from Google Music - which is very easy with Google's Music Manager. After a few hours it had downloaded my library, but also showed around 300 download errors. Downloading the library another time, however, made different files fail (which tells me that the error isn't connected to the files as such). Does anybody know of a way to retry only the songs that failed to download?

    Read the article

  • Easiest way to retrofit retry logic on LINQ to SQL migration to SQL Azure

    - by Pat James
    I have a couple of existing ASP .NET web forms and MVC applications that currently use LINQ to SQL with a SQL Server 2008 Express database on a Windows VPS: one VPS for both IIS and SQL. I am starting to outgrow the VPS's ability to effectively host both SQL and IIS and am getting ready to split them up. I am considering migrating the database to SQL Azure and keeping IIS on the VPS. After doing initial research it sounds like implementing retry logic in the data access layer is a must-do when adopting SQL Azure. I suspect this is even more critical to implement in my situation where IIS will be on a VPS outside of the Azure infrastructure. I am looking for pointers on how to do this with the least effort and impact on my existing code base. Is there a good retry pattern that can be applied once at the LINQ to SQL data access layer, as opposed to having to wrap all of my LINQ to SQL operations in try/catch/wait/retry logic?
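
    A sketch of what "retry once at the data access layer" can look like with LINQ to SQL: a single generic wrapper that call sites hand a whole unit of work to, so none of them carries its own try/catch/wait/retry. The back-off and the list of error numbers treated as transient are illustrative assumptions, and MyDataContext in the usage comment is a hypothetical name.

        using System;
        using System.Data.SqlClient;
        using System.Threading;

        public static class SqlRetry
        {
            // Wrap a whole LINQ to SQL unit of work, e.g. (hypothetical context/table names):
            //   var orders = SqlRetry.Execute(() => { using (var dc = new MyDataContext()) { return dc.Orders.ToList(); } });
            public static T Execute<T>(Func<T> work, int maxAttempts = 4)
            {
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        return work();
                    }
                    catch (SqlException ex)
                    {
                        if (attempt >= maxAttempts || !IsTransient(ex))
                            throw;

                        // simple exponential back-off before building a fresh DataContext
                        Thread.Sleep(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
                    }
                }
            }

            private static bool IsTransient(SqlException ex)
            {
                // Illustrative subset of SQL Azure throttling/connection error numbers (assumption).
                switch (ex.Number)
                {
                    case 40197:
                    case 40501:
                    case 40613:
                    case 10053:
                    case 10054:
                    case 10060:
                        return true;
                    default:
                        return false;
                }
            }
        }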

    Read the article

  • NServiceBus Retry Delay

    - by mattcodes
    What is the optimal way to configure/code NServiceBus to delay retrying messages? In its default configuration, retry happens almost immediately, up to the number of attempts defined in the config file. I'd ideally like to retry again after an hour, etc. Also, how does HandleCurrentMessageLater() work - what does the "Later" aspect refer to?
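
    On the second part: HandleCurrentMessageLater() puts the current message back at the end of the endpoint's own input queue, so "later" means "after everything already queued", not a configurable delay. A hedged handler sketch (the message type and the exception it reacts to are assumptions for illustration):

        using System;
        using NServiceBus;

        // Hypothetical message type for illustration.
        public class ProvisionAccount : IMessage
        {
            public Guid AccountId { get; set; }
        }

        public class ProvisionAccountHandler : IHandleMessages<ProvisionAccount>
        {
            public IBus Bus { get; set; }   // injected by NServiceBus

            public void Handle(ProvisionAccount message)
            {
                try
                {
                    // ... call the flaky downstream system here ...
                }
                catch (TimeoutException)
                {
                    // Puts the current message back at the end of the local queue;
                    // it is re-processed as soon as it reaches the front again,
                    // not after a fixed interval such as an hour.
                    Bus.HandleCurrentMessageLater();
                }
            }
        }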

    Read the article

  • Apache httpd workers retry

    - by David Newcomb
    I have an Apache httpd web server running mod_proxy and mod_proxy_balancer. The whole of /somedir is sent to 2 worker machines which service the requests using the round robin scheduler. Each worker machine is running IIS but I don't think that is important. I can demonstrate the load balancer working by repeatedly requesting a single page which contains the IP address of the machine and can see that it switches from one to the other in a predictable round robin fashion. If I switch off one of the IIS servers and start requesting the same page then each page only contains the IP address of the machine that is up. However, if I start IIS and don't run my IIS application then /somedir returns 500 (as it should). I've added 500 to the failonstatus (Apache 2.4) so when it hits the error Apache places the worker machine into error state. Apache still returns the proxy error to the client though. How can I make Apache catch the proxy failure and retry using a different worker in the same way that a connection failure does. Update There is almost the same question asked in StackOverflow so joining them together. http://stackoverflow.com/questions/11083707/httpd-mod-proxy-balancer-failover-failonstatus-transperant-switching
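
    For reference, a hedged sketch of the balancer setup being described, in Apache 2.4 syntax with illustrative member addresses. failonstatus=500 puts a member that answers 500 into the error state for retry= seconds, but, as observed in the question, the response that triggered it is still returned to that client rather than being replayed against the other worker:

        <Proxy "balancer://workers">
            BalancerMember "http://10.0.0.1:80" retry=30
            BalancerMember "http://10.0.0.2:80" retry=30
            ProxySet lbmethod=byrequests failonstatus=500
        </Proxy>

        ProxyPass        "/somedir" "balancer://workers"
        ProxyPassReverse "/somedir" "balancer://workers"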

    Read the article

  • .NET Webservice Client: Auto-retry upon call failure

    - by Yon
    Hi, We have a .NET client calling a Java webservice using SSL. Sometimes the call fails due to poor connectivity (the .NET Client is a UI that is used from the weirdest locations). We would like to implement an automatic retry mechanism that will automatically retry a failed call X times before giving up. This should be done solely with specific types of connectivity exceptions (and not for exceptions generated by the web service itself). We tried to find how to do it on the Binding/Channel level, but failed... any ideas? Thanks, yonadav
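
    Assuming the client proxy is WCF (the mention of Binding/Channel suggests it), one way to keep the retry out of the Binding/Channel layer is a small helper that every call goes through: it retries only on communication and timeout exceptions and lets SOAP faults from the service pass straight through. A sketch, with the attempt count and back-off as illustrative values:

        using System;
        using System.ServiceModel;
        using System.Threading;

        public static class ServiceCallRetry
        {
            // Usage (hypothetical proxy/method names):
            //   var result = ServiceCallRetry.Invoke(() => client.GetAccountDetails(id));
            public static T Invoke<T>(Func<T> call, int maxAttempts = 3)
            {
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        return call();
                    }
                    catch (FaultException)
                    {
                        throw;                      // real error from the service: do not retry
                    }
                    catch (CommunicationException)  // includes EndpointNotFoundException, etc.
                    {
                        if (attempt >= maxAttempts) throw;
                    }
                    catch (TimeoutException)
                    {
                        if (attempt >= maxAttempts) throw;
                    }

                    Thread.Sleep(TimeSpan.FromSeconds(2 * attempt));  // crude back-off between attempts
                }
            }
        }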

    Read the article

  • Manually forcing TCP connection to retry

    - by Vi.
    1. I have a TCP connection (an SSH session to some computer, for example).
    2. The network suddenly goes down and drops all packets (disconnected cable, out of range).
    3. TCP resends packets again and again, retrying with increasing delays.
    4. I see the problem and plug the cable back in (or restore the network somehow).
    5. The TCP connection finally successfully resends some packet and continues.
    The problem is that I need to wait for some timeout at point 5. I want to use my open SSH session now and not wait for 5-10 seconds until it finds out that the connection is working again. How to force all TCP connections to resend data without delays in GNU/Linux?

    Read the article

  • Manually forcing TCP connection to retry

    - by Vi
    1. I have a TCP connection (an SSH session to some computer, for example).
    2. The network suddenly goes down and drops all packets (disconnected cable, out of range).
    3. TCP resends packets again and again, retrying with increasing delays.
    4. I see the problem and plug the cable back in (or restore the network somehow).
    5. The TCP connection finally successfully resends some packet and continues.
    The problem is that I need to wait for some timeout at point 5. I want to use my open SSH session now and not wait for 5-10 seconds until it finds out that the connection is working again. How to force all TCP connections to resend data without delays in GNU/Linux?

    Read the article

  • How to retry connections with wget?

    - by Andrei
    I have a very unstable internet connection, and sometimes have to download files as large as 200 MB. The problem is that the speed frequently drops and sits at --, -K/s while the process remains alive. I thought about just sending some KILL signals to the process, but as I read in the wget manual about signals, that doesn't help. How can I force wget to reinitialize itself and pick the download up where it left off after the connection drops and comes back up again? I would like to leave wget running, and when I come back, I want to see it downloading, and not waiting with speed --,-K/s.
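
    One combination of wget options commonly used for exactly this situation (the values and URL are placeholders): --read-timeout makes a stalled transfer count as a failure, --tries=0 retries indefinitely, --waitretry paces the retries, and --continue resumes from where the previous attempt stopped.

        wget --continue --tries=0 --retry-connrefused \
             --read-timeout=20 --waitretry=10 \
             http://example.com/big-file.iso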

    Read the article

  • Would you want a language to support something like "Retry/Fix"?

    - by Aaron Anodide
    I was just wondering if a language could support something like a Retry/Fix block? The answer to this question is probably the reason it's a bad idea or equivalent to something else, but the idea keeps popping into my head. void F() { try { G(); } fix(WrongNumber wn, out int x) { x = 1; } } void G() { int x = 0; retry<int> { if(x != 1) throw new WrongNumber(x); } } After the fix block ran, the retry block would run again...

    Read the article

  • Retry web service call if authentication failure requires re-login

    - by Pete
    I'm consuming a web service from C#, and the web service requires a login call and then uses cookie sessions. The web service will time out sessions after a certain timeframe, after which the client will have to re-login. I'd like to find a way to automatically catch the soap fault the service sends back in this scenario, and handle it by re-logging in and then retrying the previously attempted call. I would prefer to do this somehow automatically for all the web service methods in question, rather than having to manually wrap the calls with the retry logic. Suggestions?
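
    A sketch of one way to centralise this without touching every call site: route all proxy calls through one generic helper that re-logs-in and retries once when the fault looks like an expired session. The proxy type, the Login call and the way an expired session is recognised are all assumptions about the specific service.

        using System;
        using System.Net;
        using System.Web.Services.Protocols;

        public class SessionAwareCaller
        {
            private readonly MyService _proxy;           // hypothetical generated .asmx proxy type

            public SessionAwareCaller(MyService proxy) { _proxy = proxy; }

            // All service calls go through here, e.g.:
            //   var report = caller.Call(p => p.GetReport(reportId));
            public T Call<T>(Func<MyService, T> call)
            {
                try
                {
                    return call(_proxy);
                }
                catch (SoapException ex)
                {
                    if (!LooksLikeExpiredSession(ex))
                        throw;                           // a real fault from the service: don't retry

                    Login();                             // session timed out: log in again...
                    return call(_proxy);                 // ...and retry the original call once
                }
            }

            private void Login()
            {
                _proxy.CookieContainer = new CookieContainer();
                _proxy.Login("user", "password");        // hypothetical login method
            }

            private static bool LooksLikeExpiredSession(SoapException ex)
            {
                // How the fault identifies an expired session is service-specific (assumption).
                return ex.Message.Contains("session");
            }
        }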

    Read the article

  • php: simplexml exception retry

    - by exceptionhandlingnoob
    I am querying an API using SimpleXML which will occasionally fail for reasons unknown. I would like to have the script retry up to 5 times. How can I do this? I assume it has something to do with wrapping the object in a try/catch, but I'm not very experienced with this -- have tried to read the manual on exception handling but am still at a loss. Thanks for any help :) // set up xml handler $xmlstr = file_get_contents($request); $xml = new SimpleXMLElement($xmlstr); Here is the error message I am receiving: [function.file-get-contents]: failed to open stream: HTTP request failed!

    Read the article

  • Circular Reference for Parent Link on Work Item cannot be resolved by Retry

    - by Atters
    I receive the following error: OH-TFS-Connector-0051: Operation failed getCollectionMetaData. Server Error : TF201063: Adding a Parent link to work item 1737 would result in a circular relationship. To create this link, evaluate the existing links, and remove one of the other links in the cycle. I have completely flattened out the Work Items in the source project. When retrying the migration, the timestamp is modified on the pending errors; however, the issues are not resolved. These Work Items now have no parents or children in the source project. So I'm wondering if the retry list is no longer valid, but there doesn't appear to be a way to have it updated. I can run the whole migration again, however it takes 5-7 hours to just do the work items, so it would be great if there is a quick fix.

    Read the article

  • Drupal module to retry failed emails

    - by JavadocMD
    I'm trying to find an existing Drupal module to fit the bill: basically, when an email fails to send, it should save the email and automatically attempt to resend the email later. I'm using the SMTP module to relay emails through an SMTP gateway (required by the hosting provider), but every once in a while the connection is refused - probably due to the gateway being too busy.

    Read the article

  • How to retry opening a properties file in Java

    - by Hoggy
    I'm trying to handle a FileNotFoundException in Java by suspending the thread for x seconds and rereading the file. The idea behind this is to be able to edit properties at runtime. The problem is that the program simply terminates. Any idea how to realize this solution?

    Read the article
