Search Results

Search found 15403 results on 617 pages for 'request querystring'.

Page 187/617 | < Previous Page | 183 184 185 186 187 188 189 190 191 192 193 194  | Next Page >

  • Duplicate IP address detection with multiple NICs

    - by sfink
    I am using arping -D to detect duplicate IP addresses within a network when setting up servers. (The network is controlled by someone else, and we have had many issues with IP allocation in the past.) It works fine as long as my host has a single NIC on a given VLAN, but when my host has more than one (I have one with 9 NICs on one VLAN and 1 on the other), arping -D always returns false collisions. The problem is that all 9 of my NICs respond to an ARP request for any of the IPs on those NICs. (These are real physical NICs, not aliases or anything.) I send out one ARP request packet, and get 9 ARP is-at ARP replies, one for each MAC address. I could implement my own solution by sniffing packets and checking for any replies with a MAC address other than the local NICs', but it seems like there ought to be an easier way.
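
    If these hosts run Linux (an assumption; the post does not say), one commonly suggested fix is to tighten the kernel's ARP reply policy so each NIC only answers ARP requests for addresses configured on the interface the request arrived on - a minimal sketch:

      # Classic "ARP flux" mitigation: reply only for addresses on the
      # receiving interface, and announce using that interface's address.
      sysctl -w net.ipv4.conf.all.arp_ignore=1
      sysctl -w net.ipv4.conf.all.arp_announce=2

      # Persist across reboots.
      echo 'net.ipv4.conf.all.arp_ignore=1'   >> /etc/sysctl.conf
      echo 'net.ipv4.conf.all.arp_announce=2' >> /etc/sysctl.conf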

    Read the article

  • Load balancing with different nginx location context and backend context

    - by robinmag
    Hi, I use nginx and the upstream module for load balancing with the following config: upstream lb { server 127.0.0.1:8080; server 127.0.0.1:8081; } server { listen 88; server_name localhost; location /cas/ { proxy_pass http://lb; proxy_redirect off; proxy_connect_timeout 2; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } The problem is that "location /context/" has to match the context of the backend server, so when I request localhost/context/index.html, nginx routes it to 127.0.0.1:8080/context/index.html or 127.0.0.1:8081/context/index.html. Is it possible to have a different backend context and nginx location, so that for example with "location /" nginx routes the request to 127.0.0.1:8080/context/index.html or 127.0.0.1:8081/context/index.html? Thank you.
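
    One approach that should do what is asked, assuming the backend context is the /cas/ path from the config above: when proxy_pass carries a URI part, nginx replaces the matched location prefix with that URI, so "location /" can be mapped onto the backend's /cas/ context. A sketch:

      upstream lb {
          server 127.0.0.1:8080;
          server 127.0.0.1:8081;
      }

      server {
          listen 88;
          server_name localhost;

          location / {
              # The URI part (/cas/) makes nginx rewrite the matched prefix:
              # a request for /index.html is proxied as /cas/index.html.
              proxy_pass http://lb/cas/;
              proxy_redirect off;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          }
      }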

    Read the article

  • Issuing a Varnish purge through CloudFlare to Varnish

    - by Michael
    I've been working on this for a while and can't seem to find any solution. I have varnish sitting in front of my nginx server, with CloudFlare sitting in front. When I issue a curl -X PURGE host CloudFlare picks it up and of course denies it with a 503 error. If I use direct.host to bypass CloudFlare it hits the Varnish server and it accepts the request but it does nothing since direct.host isn't used so there is nothing in the cache for that url. I am using WordPress and there is a WordPress Varnish Purge plugin, it says to add the following line to wp-config.php: define('VHP_VARNISH_IP','127.0.0.1') This is specifically to work with proxy servers and/or CloudFlare to make sure the request goes to the Varnish server rather than CloudFlare, but that doesn't seem to help. Anyone see this before and have any idea?
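
    A hedged workaround, assuming the Varnish box can be reached directly (the IP address and hostname below are placeholders): send the PURGE straight to Varnish while keeping the public hostname in the Host header, so the purge matches the cache entries created for normal visitors.

      # 203.0.113.10 = the origin/Varnish server, www.example.com = the public site name
      curl -X PURGE -H "Host: www.example.com" http://203.0.113.10/path/to/purge/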

    Read the article

  • Firefox requests the master password twice

    - by Mehper C. Palavuzlar
    I've set a master password for Firefox. When Firefox starts, it strangely opens two separate password request windows. When I type in the master password and hit enter, Firefox opens without problems, but the other password request window stays there. I simply close it but it's annoying. Why are there 2 windows as it's enough to type the password once? I've upgraded Firefox from 3.5.5 to 3.5.6 but the problem remains. Any comments? PS: The latest news from this issue can be followed from the related Mozilla Support Forum.

    Read the article

  • Nginx Installation on Ubuntu giving 500 error

    - by user750301
    I just installed nginx on Ubuntu 12.04 LTS. When I access localhost it gives me: 500 Internal Server Error nginx/1.2.3. The error log has the following: rewrite or internal redirection cycle while internally redirecting to "/index.html", client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", host: "localhost" This is the default nginx configuration. nginx.conf has: include /etc/nginx/sites-enabled/*; and /etc/nginx/sites-enabled/default has the following: root /usr/share/nginx/www; index index.html index.htm; # Make site accessible from http://localhost/ server_name localhost; location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. try_files $uri $uri/ /index.html; # Uncomment to enable naxsi on this location # include /etc/nginx/naxsi.rules }
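
    The cycle happens because the final try_files fallback is /index.html, which is itself handled by "location /" again; if /usr/share/nginx/www/index.html does not exist, nginx loops and returns 500. A minimal sketch of a fix (either create an index.html in the web root, or fall back to a plain 404 instead):

      location / {
          # Serve the file, then the directory, otherwise return 404
          # instead of internally redirecting to a possibly missing file.
          try_files $uri $uri/ =404;
      }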

    Read the article

  • scanning only works under "sudo" (Ubuntu)

    - by JoelFan
    When I try to scan, using simple-scan, the UI says Failed to scan -- Unable to connect to scanner. When I run it from the command line I get: joel@home:/usr/bin$ simple-scan -d ** (simple-scan:6554): DEBUG: Starting Simple Scan 2.32.0.1, PID=6554 ** (simple-scan:6554): DEBUG: Restoring window to 600x400 pixels ** (simple-scan:6554): DEBUG: sane_init () -> SANE_STATUS_GOOD ** (simple-scan:6554): DEBUG: SANE version 1.0.22 ** (simple-scan:6554): DEBUG: Requesting redetection of scan devices ** (simple-scan:6554): DEBUG: Processing request ** (simple-scan:6554): DEBUG: Requesting scan at 300 dpi from device '(null)' ** (simple-scan:6554): DEBUG: scanner_scan ("(null)", 300, SCAN_SINGLE) ** (simple-scan:6554): DEBUG: sane_get_devices () -> SANE_STATUS_GOOD ** (simple-scan:6554): DEBUG: Device: name="brother2:bus4;dev1" vendor="Brother" model="MFC-210C" type="USB scanner" ** (simple-scan:6554): DEBUG: Processing request ** (simple-scan:6554): DEBUG: sane_open ("brother2:bus4;dev1") -> SANE_STATUS_IO_ERROR ** (simple-scan:6554): WARNING **: Unable to get open device: Error during device I/O FYI, I have already done: joel@home:~$ sudo chmod a+rwx /dev/bus/usb joel@home:~$ sudo chmod a+rwx /dev/bus/usb/* If I run under sudo: joel@home:~$ sudo simple-scan it works. How can I get simple-scan to work without sudo?
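
    Working as root suggests a permission problem on the USB device node rather than a SANE problem. A hedged sketch of the usual fixes (the group names vary between Ubuntu releases and Brother's scanner packages, so treat them as assumptions):

      # Add yourself to the groups commonly used for scanner access,
      # then log out and back in.
      sudo adduser joel scanner
      sudo adduser joel lp

      # Or grant access via a udev rule; 04f9 is Brother's USB vendor ID,
      # check the MFC-210C's exact IDs with lsusb, then replug the scanner.
      echo 'SUBSYSTEM=="usb", ATTRS{idVendor}=="04f9", MODE="0664", GROUP="scanner"' | \
          sudo tee /etc/udev/rules.d/60-brother-scanner.rules
      sudo udevadm control --reload-rules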

    Read the article

  • A DirectoryCatalog class for Silverlight MEF (Managed Extensibility Framework)

    - by Dixin
    In the MEF (Managed Extension Framework) for .NET, there are useful ComposablePartCatalog implementations in System.ComponentModel.Composition.dll, like: System.ComponentModel.Composition.Hosting.AggregateCatalog System.ComponentModel.Composition.Hosting.AssemblyCatalog System.ComponentModel.Composition.Hosting.DirectoryCatalog System.ComponentModel.Composition.Hosting.TypeCatalog While in Silverlight, there is a extra System.ComponentModel.Composition.Hosting.DeploymentCatalog. As a wrapper of AssemblyCatalog, it can load all assemblies in a XAP file in the web server side. Unfortunately, in silverlight there is no DirectoryCatalog to load a folder. Background There are scenarios that Silverlight application may need to load all XAP files in a folder in the web server side, for example: If the Silverlight application is extensible and supports plug-ins, there would be a /ClinetBin/Plugins/ folder in the web server, and each pluin would be an individual XAP file in the folder. In this scenario, after the application is loaded and started up, it would like to load all XAP files in /ClinetBin/Plugins/ folder. If the aplication supports themes, there would be a /ClinetBin/Themes/ folder, and each theme would be an individual XAP file too. The application would qalso need to load all XAP files in /ClinetBin/Themes/. It is useful if we have a DirectoryCatalog: DirectoryCatalog catalog = new DirectoryCatalog("/Plugins"); catalog.DownloadCompleted += (sender, e) => { }; catalog.DownloadAsync(); Obviously, the implementation of DirectoryCatalog is easy. It is just a collection of DeploymentCatalog class. Retrieve file list from a directory Of course, to retrieve file list from a web folder, the folder’s “Directory Browsing” feature must be enabled: So when the folder is requested, it responses a list of its files and folders: This is nothing but a simple HTML page: <html> <head> <title>localhost - /Folder/</title> </head> <body> <h1>localhost - /Folder/</h1> <hr> <pre> <a href="/">[To Parent Directory]</a><br> <br> 1/3/2011 7:22 PM 185 <a href="/Folder/File.txt">File.txt</a><br> 1/3/2011 7:22 PM &lt;dir&gt; <a href="/Folder/Folder/">Folder</a><br> </pre> <hr> </body> </html> For the ASP.NET Deployment Server of Visual Studio, directory browsing is enabled by default: The HTML <Body> is almost the same: <body bgcolor="white"> <h2><i>Directory Listing -- /ClientBin/</i></h2> <hr width="100%" size="1" color="silver"> <pre> <a href="/">[To Parent Directory]</a> Thursday, January 27, 2011 11:51 PM 282,538 <a href="Test.xap">Test.xap</a> Tuesday, January 04, 2011 02:06 AM &lt;dir&gt; <a href="TestFolder/">TestFolder</a> </pre> <hr width="100%" size="1" color="silver"> <b>Version Information:</b>&nbsp;ASP.NET Development Server 10.0.0.0 </body> The only difference is, IIS’s links start with slash, but here the links do not. Here one way to get the file list is read the href attributes of the links: [Pure] private IEnumerable<Uri> GetFilesFromDirectory(string html) { Contract.Requires(html != null); Contract.Ensures(Contract.Result<IEnumerable<Uri>>() != null); return new Regex( "<a href=\"(?<uriRelative>[^\"]*)\">[^<]*</a>", RegexOptions.IgnoreCase | RegexOptions.CultureInvariant) .Matches(html) .OfType<Match>() .Where(match => match.Success) .Select(match => match.Groups["uriRelative"].Value) .Where(uriRelative => uriRelative.EndsWith(".xap", StringComparison.Ordinal)) .Select(uriRelative => { Uri baseUri = this.Uri.IsAbsoluteUri ? 
this.Uri : new Uri(Application.Current.Host.Source, this.Uri); uriRelative = uriRelative.StartsWith("/", StringComparison.Ordinal) ? uriRelative : (baseUri.LocalPath.EndsWith("/", StringComparison.Ordinal) ? baseUri.LocalPath + uriRelative : baseUri.LocalPath + "/" + uriRelative); return new Uri(baseUri, uriRelative); }); } Please notice the folders’ links end with a slash. They are filtered by the second Where() query. The above method can find files’ URIs from the specified IIS folder, or ASP.NET Deployment Server folder while debugging. To support other formats of file list, a constructor is needed to pass into a customized method: /// <summary> /// Initializes a new instance of the <see cref="T:System.ComponentModel.Composition.Hosting.DirectoryCatalog" /> class with <see cref="T:System.ComponentModel.Composition.Primitives.ComposablePartDefinition" /> objects based on all the XAP files in the specified directory URI. /// </summary> /// <param name="uri"> /// URI to the directory to scan for XAPs to add to the catalog. /// The URI must be absolute, or relative to <see cref="P:System.Windows.Interop.SilverlightHost.Source" />. /// </param> /// <param name="getFilesFromDirectory"> /// The method to find files' URIs in the specified directory. /// </param> public DirectoryCatalog(Uri uri, Func<string, IEnumerable<Uri>> getFilesFromDirectory) { Contract.Requires(uri != null); this._uri = uri; this._getFilesFromDirectory = getFilesFromDirectory ?? this.GetFilesFromDirectory; this._webClient = new Lazy<WebClient>(() => new WebClient()); // Initializes other members. } When the getFilesFromDirectory parameter is null, the above GetFilesFromDirectory() method will be used as default. Download the directory’s XAP file list Now a public method can be created to start the downloading: /// <summary> /// Begins downloading the XAP files in the directory. /// </summary> public void DownloadAsync() { this.ThrowIfDisposed(); if (Interlocked.CompareExchange(ref this._state, State.DownloadStarted, State.Created) == 0) { this._webClient.Value.OpenReadCompleted += this.HandleOpenReadCompleted; this._webClient.Value.OpenReadAsync(this.Uri, this); } else { this.MutateStateOrThrow(State.DownloadCompleted, State.Initialized); this.OnDownloadCompleted(new AsyncCompletedEventArgs(null, false, this)); } } Here the HandleOpenReadCompleted() method is invoked when the file list HTML is downloaded. Download all XAP files After retrieving all files’ URIs, the next thing becomes even easier. 
HandleOpenReadCompleted() just uses built in DeploymentCatalog to download the XAPs, and aggregate them into one AggregateCatalog: private void HandleOpenReadCompleted(object sender, OpenReadCompletedEventArgs e) { Exception error = e.Error; bool cancelled = e.Cancelled; if (Interlocked.CompareExchange(ref this._state, State.DownloadCompleted, State.DownloadStarted) != State.DownloadStarted) { cancelled = true; } if (error == null && !cancelled) { try { using (StreamReader reader = new StreamReader(e.Result)) { string html = reader.ReadToEnd(); IEnumerable<Uri> uris = this._getFilesFromDirectory(html); Contract.Assume(uris != null); IEnumerable<DeploymentCatalog> deploymentCatalogs = uris.Select(uri => new DeploymentCatalog(uri)); deploymentCatalogs.ForEach( deploymentCatalog => { this._aggregateCatalog.Catalogs.Add(deploymentCatalog); deploymentCatalog.DownloadCompleted += this.HandleDownloadCompleted; }); deploymentCatalogs.ForEach(deploymentCatalog => deploymentCatalog.DownloadAsync()); } } catch (Exception exception) { error = new InvalidOperationException(Resources.InvalidOperationException_ErrorReadingDirectory, exception); } } // Exception handling. } In HandleDownloadCompleted(), if all XAPs are downloaded without exception, OnDownloadCompleted() callback method will be invoked. private void HandleDownloadCompleted(object sender, AsyncCompletedEventArgs e) { if (Interlocked.Increment(ref this._downloaded) == this._aggregateCatalog.Catalogs.Count) { this.OnDownloadCompleted(e); } } Exception handling Whether this DirectoryCatelog can work only if the directory browsing feature is enabled. It is important to inform caller when directory cannot be browsed for XAP downloading. private void HandleOpenReadCompleted(object sender, OpenReadCompletedEventArgs e) { Exception error = e.Error; bool cancelled = e.Cancelled; if (Interlocked.CompareExchange(ref this._state, State.DownloadCompleted, State.DownloadStarted) != State.DownloadStarted) { cancelled = true; } if (error == null && !cancelled) { try { // No exception thrown when browsing directory. Downloads the listed XAPs. } catch (Exception exception) { error = new InvalidOperationException(Resources.InvalidOperationException_ErrorReadingDirectory, exception); } } WebException webException = error as WebException; if (webException != null) { HttpWebResponse webResponse = webException.Response as HttpWebResponse; if (webResponse != null) { // Internally, WebClient uses WebRequest.Create() to create the WebRequest object. Here does the same thing. WebRequest request = WebRequest.Create(Application.Current.Host.Source); Contract.Assume(request != null); if (request.CreatorInstance == WebRequestCreator.ClientHttp && // Silverlight is in client HTTP handling, all HTTP status codes are supported. webResponse.StatusCode == HttpStatusCode.Forbidden) { // When directory browsing is disabled, the HTTP status code is 403 (forbidden). error = new InvalidOperationException( Resources.InvalidOperationException_ErrorListingDirectory_ClientHttp, webException); } else if (request.CreatorInstance == WebRequestCreator.BrowserHttp && // Silverlight is in browser HTTP handling, only 200 and 404 are supported. webResponse.StatusCode == HttpStatusCode.NotFound) { // When directory browsing is disabled, the HTTP status code is 404 (not found). 
error = new InvalidOperationException( Resources.InvalidOperationException_ErrorListingDirectory_BrowserHttp, webException); } } } this.OnDownloadCompleted(new AsyncCompletedEventArgs(error, cancelled, this)); } Please notice that a Silverlight 3+ application can work either in client HTTP handling or in browser HTTP handling. One difference is: in browser HTTP handling, only the HTTP status codes 200 (OK) and 404 (not OK, covering 500, 403, etc.) are supported; in client HTTP handling, all HTTP status codes are supported. So in the above code, exceptions in the two modes are handled differently. Conclusion Here is what the whole DirectoryCatalog looks like: Please click here to download the source code; a simple unit test is included. This is a rough implementation, and, for convenience, some of the design and coding simply follow the built-in AggregateCatalog class and Deployment class. Please feel free to modify the code, and please kindly tell me if any issue is found.

    Read the article

  • How do I set up live audio streams to a DLNA compliant device?

    - by Takkat
    Is there a way to stream the live output of the soundcard from our 12.04.1 LTS amd64 desktop to a DLNA-compliant external device in our network? Selecting media content in shared directories using Rygel, miniDLNA, and uShare is always fine - but so far we completely failed to get a live audio stream to a client via DLNA. Pulseaudio claims to have a DLNA/UPnP media server that together with Rygel is supposed to do just this. But we were unable to get it running. We followed the steps outlined in live.gnome.org, this answer here, and also in another similar guide. As soon as we select the local audio device, or our GST-Launch stream in the DLNA client Rygel displays the following message and the client states it reached the end of the playlist: (rygel:7380): Rygel-WARNING **: rygel-http-request.vala:97: Invalid seek request This is how we configured GST-Launch in rygel.conf: [GstLaunch] enabled=true launch-items=mypulseaudiosink mypulseaudiosink-title=Audio on @HOSTNAME@ mypulseaudiosink-mime=audio/x-wav mypulseaudiosink-launch=pulsesrc device=<device> ! wavpackenc For <device> we tried with the default sink name, this name appended with .monitor, and in addition with upnp-sink and upnp.monitor that was created when we selected DLNA media server from paprefs. We also tried to encode using lamemp3enc with no luck. These are our pulseaudio modules: http://paste.ubuntu.com/1202913/ These are our sinks: http://paste.ubuntu.com/1202916/ Did we miss any other additional configuration needed to get this running? Are there any other alternatives for sending the audio of our soundcard as live stream to a DLNA client?

    Read the article

  • Set-Cookie Headers getting stripped in ASP.NET HttpHandlers

    - by Rick Strahl
    Yikes, I ran into a real bummer of an edge case yesterday in one of my older low level handler implementations (for West Wind Web Connection in this case). Basically this handler is a connector for a backend Web framework that creates self contained HTTP output. An ASP.NET Handler captures the full output, and then shoves the result down the ASP.NET Response object pipeline writing out the content into the Response.OutputStream and seperately sending the HttpHeaders in the Response.Headers collection. The headers turned out to be the problem and specifically Http Cookies, which for some reason ended up getting stripped out in some scenarios. My handler works like this: Basically the HTTP response from the backend app would return a full set of HTTP headers plus the content. The ASP.NET handler would read the headers one at a time and then dump them out via Response.AppendHeader(). But I found that in some situations Set-Cookie headers sent along were simply stripped inside of the Http Handler. After a bunch of back and forth with some folks from Microsoft (thanks Damien and Levi!) I managed to pin this down to a very narrow edge scenario. It's easiest to demonstrate the problem with a simple example HttpHandler implementation. The following simulates the very much simplified output generation process that fails in my handler. Specifically I have a couple of headers including a Set-Cookie header and some output that gets written into the Response object.using System.Web; namespace wwThreads { public class Handler : IHttpHandler { /* NOTE: * * Run as a web.config set handler (see entry below) * * Best way is to look at the HTTP Headers in Fiddler * or Chrome/FireBug/IE tools and look for the * WWHTREADSID cookie in the outgoing Response headers * ( If the cookie is not there you see the problem! ) */ public void ProcessRequest(HttpContext context) { HttpRequest request = context.Request; HttpResponse response = context.Response; // If ClearHeaders is used Set-Cookie header gets removed! // if commented header is sent... response.ClearHeaders(); response.ClearContent(); // Demonstrate that other headers make it response.AppendHeader("RequestId", "asdasdasd"); // This cookie gets removed when ClearHeaders above is called // When ClearHEaders is omitted above the cookie renders response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/"); // *** This always works, even when explicit // Set-Cookie above fails and ClearHeaders is called //response.Cookies.Add(new HttpCookie("WWTHREADSID", "ThisIsTheValue")); response.Write(@"Output was created.<hr/> Check output with Fiddler or HTTP Proxy to see whether cookie was sent."); } public bool IsReusable { get { return false; } } } } In order to see the problem behavior this code has to be inside of an HttpHandler, and specifically in a handler defined in web.config with: <add name=".ck_handler" path="handler.ck" verb="*" type="wwThreads.Handler" preCondition="integratedMode" /> Note: Oddly enough this problem manifests only when configured through web.config, not in an ASHX handler, nor if you paste that same code into an ASPX page or MVC controller. What's the problem exactly? The code above simulates the more complex code in my live handler that picks up the HTTP response from the backend application and then peels out the headers and sends them one at a time via Response.AppendHeader. One of the headers in my app can be one or more Set-Cookie. I found that the Set-Cookie headers were not making it into the Response headers output. 
Here's the Chrome Http Inspector trace: Notice, no Set-Cookie header in the Response headers! Now, running the very same request after removing the call to Response.ClearHeaders() command, the cookie header shows up just fine: As you might expect it took a while to track this down. At first I thought my backend was not sending the headers but after closer checks I found that indeed the headers were set in the backend HTTP response, and they were indeed getting set via Response.AppendHeader() in the handler code. Yet, no cookie in the output. In the simulated example the problem is this line:response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/"); which in my live code is more dynamic ( ie. AppendHeader(token[0],token[1[]) )as it parses through the headers. Bizzaro Land: Response.ClearHeaders() causes Cookie to get stripped Now, here is where it really gets bizarre: The problem occurs only if: Response.ClearHeaders() was called before headers are added It only occurs in Http Handlers declared in web.config Clearly this is an edge of an edge case but of course - knowing my relationship with Mr. Murphy - I ended up running smack into this problem. So in the code above if you remove the call to ClearHeaders(), the cookie gets set!  Add it back in and the cookie is not there. If I run the above code in an ASHX handler it works. If I paste the same code (with a Response.End()) into an ASPX page, or MVC controller it all works. Only in the HttpHandler configured through Web.config does it fail! Cue the Twilight Zone Music. Workarounds As is often the case the fix for this once you know the problem is not too difficult. The difficulty lies in tracking inconsistencies like this down. Luckily there are a few simple workarounds for the Cookie issue. Don't use AppendHeader for Cookies The easiest and obvious solution to this problem is simply not use Response.AppendHeader() to set Cookies. Duh! Under normal circumstances in application level code there's rarely a reason to write out a cookie like this:response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/"); but rather create the cookie using the Response.Cookies collection:response.Cookies.Add(new HttpCookie("WWTHREADSID", "ThisIsTheValue")); Unfortunately, in my case where I dynamically read headers from the original output and then dynamically  write header key value pairs back  programmatically into the Response.Headers collection, I actually don't look at each header specifically so in my case the cookie is just another header. My first thought was to simply trap for the Set-Cookie header and then parse out the cookie and create a Cookie object instead. But given that cookies can have a lot of different options this is not exactly trivial, plus I don't really want to fuck around with cookie values which can be notoriously brittle. Don't use Response.ClearHeaders() The real mystery in all this is why calling Response.ClearHeaders() prevents a cookie value later written with Response.AppendHeader() to fail. I fired up Reflector and took a quick look at System.Web and HttpResponse.ClearHeaders. There's all sorts of resetting going on but nothing that seems to indicate that headers should be removed later on in the request. The code in ClearHeaders() does access the HttpWorkerRequest, which is the low level interface directly into IIS, and so I suspect it's actually IIS that's stripping the headers and not ASP.NET, but it's hard to know. Somebody from Microsoft and the IIS team would have to comment on that. 
In my application it's probably safe to simply skip ClearHeaders() in my handler. The ClearHeaders/ClearContent was mainly for safety but after reviewing my code there really should never be a reason that headers would be set prior to this method firing. However, if for whatever reason headers do need to be cleared, it's easy enough to manually clear the headers out: private void RemoveHeaders(HttpResponse response) { List<string> headers = new List<string>(); foreach (string header in response.Headers) { headers.Add(header); } foreach (string header in headers) { response.Headers.Remove(header); } response.Cookies.Clear(); } Now I can replace the call to Response.ClearHeaders() and I don't get the funky side-effects from Response.ClearHeaders(). Summary I realize this is a total edge case as this occurs only in HttpHandlers that are manually configured. It looks like you'll never run into this in any of the higher level ASP.NET frameworks or even in ASHX handlers - only web.config defined handlers - which is really, really odd. After all those frameworks use the same underlying ASP.NET architecture. Hopefully somebody from Microsoft has an idea what crazy dependency was triggered here to make this fail. IAC, there are workarounds to this should you run into it, although I bet when you do run into it, it'll likely take a bit of time to find the problem or even this post in a search because it's not easy to correlate the problem to the solution. It's quite possible that more than cookies are affected by this behavior. Searching for a solution I read a few other accounts where headers like Referer were mysteriously disappearing, and it's possible that something similar is happening in those cases. Again, extreme edge case, but I'm writing this up here as documentation for myself and possibly some others that might have run into this. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, IIS7.

    Read the article

  • Need help troubleshooting HTTPS web server error - SSL handshake failed

    - by DerNalia
    I followed this guide: http://hints.macworld.com/article.php?story=20041129143420344 Here is my virtual host definition: <VirtualHost *:443> SSLEngine on SSLProxyEngine On RequestHeader set Front-End-Https "On" CacheDisable * SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL DocumentRoot "/Users/me/projects/myproject/public" ServerName ssl.mydomain.com ServerAlias *.ssl.mydomain.com SSLCertificateKeyFile "/private/etc/apache2/certs/webserver.nopass.key" SSLCertificateFile "/private/etc/apache2/certs/newcert.pem" SSLCACertificateFile "/private/etc/apache2/certs/demoCA/cacert.pem" SSLCARevocationPath "/private/etc/apache2/certs/demoCA/crl" ErrorLog "/Users/me/Desktop/ssl.log" ProxyPass / https://localhost:3002/ ProxyPassReverse / https://localhost:3002 ProxyPreserveHost on </VirtualHost> And when I try connecting to the server via the web browser, I get this error: [Thu Feb 02 16:50:40 2012] [error] (502)Unknown error: 502: proxy: pass request body failed to 127.0.0.1:3002 (localhost) [Thu Feb 02 16:50:40 2012] [error] [client 96.11.81.39] proxy: Error during SSL Handshake with remote server returned by /session/new [Thu Feb 02 16:50:40 2012] [error] proxy: pass request body failed to 127.0.0.1:3002 (localhost) from 96.11.81.39 () How do I debug / fix this?
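
    Two hedged things to check (not from the original post): first, confirm that whatever listens on 127.0.0.1:3002 really speaks HTTPS rather than plain HTTP; second, if it uses a self-signed certificate, mod_ssl's proxy-side verification can be relaxed - directive availability depends on the Apache version, so verify against your mod_ssl docs:

      SSLProxyEngine On
      # Accept the backend's (likely self-signed) certificate and skip the
      # CN check against "localhost".
      SSLProxyVerify none
      SSLProxyCheckPeerCN off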

    Read the article

  • Upload a Signed Certificate to Amazon EC2

    - by Tam Minh
    I'm very new to Amazon EC2. I am trying to set up HTTPS for my website, following the official instructions from the Amazon docs: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https.html I get stuck at the Upload the Signed Certificate step: aws iam upload-server-certificate --server-certificate-name <certificate_object_name> --certificate-body <public_key_certificate_file> --private-key <privatekey.pem> --certificate-chain <certificate_chain_file> Following the instructions, I have only created a private key (privatekey.pem) and a Certificate Signing Request (csr.pem), but the command line requires four params: 1. certificate_object_name 2. public_key_certificate_file 3. *private-key --> I only have this one* 4. certificate_chain_file I don't know where to get the three remaining params; please help shed some light. Thank you in advance.
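
    For what it is worth: the certificate body (and optional chain) are what the certificate authority sends back after you submit csr.pem to them, and the object name is just a label you choose. A hedged sketch with placeholder file names:

      aws iam upload-server-certificate \
          --server-certificate-name mysite-cert \
          --certificate-body file://mysite.crt \
          --private-key file://privatekey.pem \
          --certificate-chain file://ca-bundle.pem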

    Read the article

  • GWB | Got Geekswithblogs Suggestions? Try UserVoice

    - by Staff of Geeks
    We have struggled in the past with different approaches to getting feedback from you as bloggers.  We really want to know what you would like to see, what other systems have that is helpful, and where we need to grow.  This community is made up of many different individuals, so the system for feedback needed a voting or liking tool for us to gauge what was a popular thought or just one guy's request.  We would love to put every request in, but that would make the system function for some and unusable for others. This is where UserVoice comes in.  In one feature suggestion, Martin Hinshelwood proposed that we give UserVoice a chance.  He had used it with other projects and sites and thought it would be a good feedback tool for Geekswithblogs.net.  We tried it out and agreed.  Give it a try and let us know what you want to see on Geekswithblogs.net, and vote on other suggestions.  Feedback is key to the success of this community and we would love to hear what you have to say.   UserVoice for Geekswithblogs.net Feedback   Technorati Tags: UserVoice,Geekswithblogs,Feedback,Community

    Read the article

  • Snmpd updates interface counters slowly, or something like this

    - by Korjavin Ivan
    I updated one of my FreeBSD boxes to 9-stable (a totally new installation) and installed net-snmp for monitoring. uname -r 9.1-PRERELEASE pkg_info net-snmp-5.7.1_7 Information for net-snmp-5.7.1_7: Comment: An extendable SNMP implementation .... cat /var/db/ports/net-snmp/options # This file is auto-generated by 'make config'. # Options for net-snmp-5.7.1_7 _OPTIONS_READ=net-snmp-5.7.1_7 _FILE_COMPLETE_OPTIONS_LIST= IPV6 MFD_REWRITES PERL PERL_EMBEDDED PYTHON DUMMY TKMIB DMALLOC MYSQL AX_SOCKONLY UNPRIVILEGED OPTIONS_FILE_UNSET+=IPV6 OPTIONS_FILE_UNSET+=MFD_REWRITES OPTIONS_FILE_SET+=PERL OPTIONS_FILE_SET+=PERL_EMBEDDED OPTIONS_FILE_UNSET+=PYTHON OPTIONS_FILE_SET+=DUMMY OPTIONS_FILE_UNSET+=TKMIB OPTIONS_FILE_SET+=DMALLOC OPTIONS_FILE_UNSET+=MYSQL OPTIONS_FILE_UNSET+=AX_SOCKONLY OPTIONS_FILE_UNSET+=UNPRIVILEGED I have about 500 VLANs on this machine, and I collect interface info through snmpd with two different tools, Zabbix and Cacti. Both of them plot graphs with blank gaps. I tried changing the polling time in Zabbix from 15 seconds to 30, 60, 90, 120, and 10, and I still get blank gaps. snmpd.conf is empty - only access controls. This configuration worked fine on FreeBSD 8. Where is my mistake? How do I fix these graphs? UPD: Changing the polling time and switching off one of the agents doesn't help. I looked at the Zabbix log (data received from snmpd) and saw the following (sorry for the Russian locale, just look at the numbers), and it is not true: my "iftop" showed the speed was about 90 Mbit/s, but snmpd returned 2 Mbit/s. I understand that snmpd doesn't return a speed, it returns just a counter. But how is that possible? Why 2 Mbit/s? I tried recompiling snmpd with 64-bit counters and without them; in both variants the blank gaps are present. So I think my OS (FreeBSD) doesn't update the interface counters well. I am still collecting a tcpdump to find this request/response, but the problem with that is too much noise. UPD2: I decoded the tcpdump file and published it as a Google doc at gdocfile. The time differences look strange - as if Zabbix sometimes "forgets" to do a request and then does it twice in a row. UPD3: I parsed the log from the command "while true; do netstat -bin -I vlan4008 >> /var/log/netstat; sleep 300; done" and loaded it into Google Docs, adding a formula for speed: link. It looks like all the counters in the OS are good. Now I think the problem is: 1. Zabbix requests twice in a row (and what about Cacti?) 2. snmpd uses counter32
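
    One way to rule snmpd in or out (a hedged sketch; the host and community string are placeholders) is to walk both the 32-bit and 64-bit octet counters directly and compare their deltas against netstat -bin over the same interval:

      # 32-bit counters (wrap in roughly six minutes at ~90 Mbit/s)
      snmpwalk -v2c -c public 127.0.0.1 IF-MIB::ifInOctets

      # 64-bit counters (the HC variants, SNMPv2c/v3 only)
      snmpwalk -v2c -c public 127.0.0.1 IF-MIB::ifHCInOctets

      # The OS's own view of the same interface, for comparison
      netstat -bin -I vlan4008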

    Read the article

  • ISPConfig 3 SSL automatic rewrite

    - by lol
    I was wondering how you could get apache2 to redirect http://server.com:8080 to https://server.com:8080. I have an ISPConfig 3 setup, and the http://server.com:8080 virtual host currently prints a 400 Bad Request error, even though I've tried adding RewriteEngine on RewriteCond %{HTTPS} !^on$ [NC] RewriteRule . https://%{HTTP_HOST}:8080%{REQUEST_URI} [L] to the ispconfig.vhost file (and reloading the conf), with no success. --edit!-- I've been playing around with it; adding an 'always redirect to Google' rule into the ispconfig vhost works once you've already started talking SSL to it. This means the non-SSL connections are getting 'bad request' errors before the vhost is loaded... but where...? --edit 2!-- Nope, the SSL is handled exclusively by the virtual host - if I turn off the SSL engine then the rewriting works perfectly (but obviously there is no SSL at https://). Thanks!

    Read the article

  • What is the correct way to deal with similar but independent features?

    - by Koviko
    Let's say we have a feature request come in and we begin work on it, which we'll call feature-1. It introduces some new logic to the application, which we'll call logic-A and logic-B. A programmer branches from the release branch and begins work on the feature. Soon after, we get another feature request, which we'll call feature-2. It will implement logic-A and logic-C into the application. The logic A being implemented by this feature is the same logic-A as was implemented in feature-1. Let's also say that given logic-B, logic-A might be implemented slightly differently than it would have been given logic-C, and also differently given both logic-B and logic-C (eg. with only one feature, the code would be less flexible than with both). How should this situation be handled? Concrete Example (to help with any confusion in my wording) feature-1 is a feed from programmers.stackexchange.com. feature-2 is a feed from gaming.stackexchange.com. logic-A is the implementation of a feed at all (assuming the application currently has no feeds), which links to the content as well and gives related information. logic-B is that the feed's source is from programmers.stackexchange.com. Adds to logic-A that the related programming language is displayed. logic-C is that the feed's source is from gaming.stackexchange.com. Adds to logic-A that the related game's name and box art is displayed.

    Read the article

  • Setting up a very mixed Active Directory network to work with PowerShell Remote Administration

    - by erictheavg
    Summary: I want to be able to monitor the computers on my network, but don't need it to be automated. We're too small to purchase anything like MOM, but too big to do anything manually (~100 machines in two locations). I just keep running into issues, and was wondering if there's a master list of Group Policy settings I can distribute to my environment to get Remote Powershell working. Environment: Our AD network is pretty mixed. The end users have XP SP3, Win 7, and Win 7 x64. The servers include Win2k3 SP2, Win2k8, Win2k8 x64, Win2k8 R2, and Win2k8 R2 x64. Details: I'm trying to get it to work with Remote Powershell, but I run into errors like the following: Connecting to remote server failed with the following error message : The WinRM client cannot process the request. Default authentication may be used with an IP address under the following conditions: the transport is HTTPS or the destination is in the TrustedHosts list, and explicit credentials are provided. Use winrm.cmd to configure TrustedHosts. Note that computers in the TrustedHosts list might not be authenticated. For more information on how to set TrustedHosts run the following command: winrm help config. For more information, see the about_Remote_Troubleshooting Help topic. + CategoryInfo : OpenError: (:) [], PSRemotingTransportException + FullyQualifiedErrorId : PSSessionStateBroken Then I go to the computer (Win2k3 SP2 server) and run winrm quickconfig per the recommendations via google, and it says: Make these changes [y/n]? y WinRM has been updated to receive requests. WinRM service started. WSManFault Message = The client cannot connect to the destination specified in the request. Verify that the service on the destination is running and is accepting requests. Consult the logs and documentation for the WS-Management service running on the destination, most commonly IIS or WinRM. If the destination is the WinRM service, run the following command on the destination to analyze and configure the WinRM service: "winrm quickconfig". Error number: -2144108526 0x80338012 The client cannot connect to the destination specified in the request. Verify that the service on the destination is running and is accepting requests. Consult the logs and documentation for the WS-Management service running on the destination, most commonly IIS or WinRM. If the destination is the WinRM service, run the following command on the destination to analyze and configure the WinRM service: "winrm quickconfig". That's right. It tells me to remedy my winrm quickconfig failure by running winrm quickconfig. I don't want to band-aid this project one google search at a time. I'm sure there is a step-by-step tutorial out there on how to set up a network for powershell remote administration. Does anyone know of one? Books are acceptable. Thanks in advance! I didn't think my question would get this long.
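
    Not a full step-by-step guide, but a minimal sketch of the pieces that particular error is asking for; the machine names are placeholders, and the target machines need WinRM / PowerShell 2.0 installed:

      # On each machine to be managed (elevated prompt):
      Enable-PSRemoting -Force            # configures the WinRM listener and firewall exception

      # On the admin workstation, trust hosts you reach by IP or non-domain name:
      Set-Item WSMan:\localhost\Client\TrustedHosts -Value "server1,server2,10.0.0.*" -Force

      # Verify before opening a session:
      Test-WSMan server1
      Enter-PSSession -ComputerName server1 -Credential (Get-Credential)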

    Read the article

  • Configuring Nginx SSL alongside non-ssl

    - by user55145
    I'm trying to enable SSL on my current Nginx configuration, which works fine. However, I'm wondering if it's possible to do this alongside HTTP, so that I do not need another server{} section that would just be a copy of the HTTP section. I thought the following would work, however I get the error below when accessing the site over http:// 400 Bad Request The plain HTTP request was sent to HTTPS port Nginx config: ssl_certificate /etc/nginx/ssl/domains.pem; ssl_certificate_key /etc/nginx/ssl/server.key; server { listen 80; listen 443; //other configuration }
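
    It should indeed be possible with one server{} block: attach the ssl parameter to the 443 listener only (supported since nginx 0.7.14), so port 80 stays plain HTTP. A minimal sketch:

      server {
          listen 80;
          listen 443 ssl;

          ssl_certificate     /etc/nginx/ssl/domains.pem;
          ssl_certificate_key /etc/nginx/ssl/server.key;

          # ...rest of the shared configuration...
      }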

    Read the article

  • Can I use nginx to start EC2 instances on demand?

    - by Gabe Hollombe
    TL;DR - Is there a way to make nginx act as an elastic load balancer that will spin up EC2 instances on demand, allowing for the case when periods of no demand mean no instances will be running? Longer explanation - I have an nginx server that proxy_pass'es requests to a server on EC2. This server doesn't get many requests, so I'd like to keep the server spun down during periods of inactivity (I already have a script to do this). Then, when the instance is spun down and nginx gets a request for that instance, it will time out when trying to get a response from it. At this point, can I somehow trigger a shell command on the server to use EC2's command line tools to spin up the instance, then re-try the user's request after it has started?

    Read the article

  • Server Application Unavailable ?

    - by suryasasidhar
    Hi, I am a developer and I developed a web application (ASP.NET). It works fine on my local server, but after I took a new domain and uploaded the site to that domain I am getting this error. After completing my project I placed it online; the default page comes up, but when I click on any link button it gives this error. Can you help me? m3connect.in is the URL of my site, and the error is: Server Application Unavailable The web application you are attempting to access on this web server is currently unavailable. Please hit the "Refresh" button in your web browser to retry your request. Administrator Note: An error message detailing the cause of this specific request failure can be found in the application event log of the web server. Please review this log entry to discover what caused this error to occur.

    Read the article

  • Errors reported by "powercfg -energy"

    - by Tim
    Running "powercfg -energy" under Windows 7 command line, I received a report with following three errors: System Availability Requests:Away Mode Request The program has made a request to enable Away Mode. Requesting Process \Device\HarddiskVolume2\Program Files\Windows Media Player\wmpnetwk.exe CPU Utilization:Processor utilization is high The average processor utilization during the trace was high. The system will consume less power when the average processor utilization is very low. Review processor utilization for individual processes to determine which applications and services contribute the most to total processor utilization. Average Utilization (%) 49.25 Platform Power Management Capabilities:PCI Express Active-State Power Management (ASPM) Disabled PCI Express Active-State Power Management (ASPM) has been disabled due to a known incompatibility with the hardware in this computer. I was wondering for the first error, what does "enable away mode" mean? for the second, what utilization percentage of CPU is reasonable? for the third, what is "PCI Express Active-State Power Management (ASPM)"? How I can correct the three errors? Thanks and regards!

    Read the article

  • SuperMicro IPMI through OpenBSD PF Firewall

    - by thelsdj
    I'm trying to access a SuperMicro IPMI card that is behind an OpenBSD bridged firewall. A couple pieces of information: The OpenBSD firewall itself has a SuperMicro IPMI that I can access across the internet. The IPMI I'm trying to reach can be reached from behind the firewall. My gateway does arp request the IPMI and it does appear to respond (this is from the external interface of the firewall) 16:57:45.548892 arp who-has ipminame tell gwname 16:57:45.549500 arp reply ipminame is-at ipmimac But when I make a request to the IPMI IP from outside the firewall the external interface of the firewall shows no traffic with the IPMI ip as its destination. Any idea what might be causing this problem? Is there something about IPMI traffic that my gateway wouldn't like (the gateway is provided by my colocation provider so I can't easily debug it).
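
    If PF on the bridge is the suspect, a hedged pf.conf sketch that explicitly logs and passes traffic to the IPMI address on both bridge members (the addresses and interface names are placeholders; SuperMicro IPMI typically uses TCP 80/443/5900 for web/KVM and UDP 623 for RMCP):

      ipmi_ip = "192.0.2.10"     # placeholder for the IPMI card's address
      ext_if  = "em0"            # bridge member facing the internet
      int_if  = "em1"            # bridge member facing the IPMI card

      pass in log quick on { $ext_if $int_if } proto tcp from any to $ipmi_ip port { 80 443 5900 }
      pass in log quick on { $ext_if $int_if } proto udp from any to $ipmi_ip port 623

      # Then watch what PF actually drops:  tcpdump -n -e -ttt -i pflog0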

    Read the article

  • How to change the Target URL of EditForm.aspx, DispForm.aspx and NewForm.aspx

    - by Jayant Sharma
    Hi All, for changing the URL of list forms we have very limited options. If you search the Internet you will find lots of articles about customizing list forms using SharePoint Designer, but the disadvantage of SharePoint Designer is that you cannot create a WSP of the customization, which means whatever changes you have made in the UAT environment you need to repeat in production. This is the main reason I do not like SharePoint Designer much. I always prefer to create a WSP file, and deployment of the WSP file does all of my work. Now, if you want to change the list form URL using Visual Studio, you have to either create a new list definition or create a new content type. I found some very good articles and want to share: http://www.codeproject.com/Articles/223431/Custom-SharePoint-List-Forms http://community.bamboosolutions.com/blogs/sharepoint-2010/archive/2011/05/12/sharepoint-2010-cookbook-how-to-create-a-customized-list-edit-form-for-development-in-visual-studio-2010.aspx Whenever you create either a list definition or a content type and specify the URL for the list forms, SharePoint will automatically add the ListID and ItemID (no ItemID in the case of NewForm.aspx) to the URL as a query string, so if you want to redirect it or do some logic you can, as you have the ItemID as well as the ListID.

    Read the article

  • How to display workflow-related tasks on the display page of the item the workflow is running on, in SharePoint 2013

    - by ybbest
    In one of my projects, I need to display workflow-related tasks on the display page of the item the workflow is currently running on. To achieve this, I'd like to add the tasks list view web part and use a web part connection (ID=workflowitemid). However, to make it work I need to unhide the WorkflowItemId field in the task list, as it is a hidden field and its CanToggleHidden property is set to false. I need to use reflection to change CanToggleHidden to true, as it only has a getter in the API, and then I am able to unhide the field. You can download the script here. However, it is not ideal to display tasks this way (it makes your environment unsupported by Microsoft). Another way to display the related tasks is to use a SharePoint Designer solution with a list view web part and a data source. Here are the steps. 1. Create a new list display form as below. 2. Edit the custom display form in advanced mode. 3. Find the PlaceHolderMain content placeholder and insert the DataView by choosing the associated workflow tasks list as below. 4. Go to List View Tools >> OPTIONS. 5. Create a parameter called workflowitemId which retrieves its value from the ID query string as below. 6. Create a filter based on UIVersion = workflowitemId as below; we are going to change UIVersion to the WorkflowItemId property later, as WorkflowItemId is a hidden field and cannot be selected from the wizard. 7. Replace UIVersion with WorkflowItemId in the CAML for the XsltListViewWebPart. From: TO 8. Go to the new custom display page at http://yourserver/Lists/aa/CustomDisplayPage.aspx?ID=414 and you will see the associated tasks showing on the page. References: http://office.microsoft.com/en-us/sharepoint-designer-help/watch-this-design-a-document-review-workflow-solution-HA010256417.aspx (Video 12 and 13)

    Read the article

  • NFS Client reports Permission Denied, Server reports Permission Granted

    - by VxJasonxV
    I have two RedHat 4 Servers. The client is 4.6, the server is 4.5. I'm attempting to mount a share from the server, onto the client via NFS. The /etc/exports configuration is as follows: /opt/data/config bkup(rw,no_root_squash,async) /opt/data/db bkup(rw,no_root_squash,async) exportfs returns these (among other) shares, nfs is running according to ps output. I've been attempting to use autofs on the client, but have opted to just mount the share manually considering the issues I'm having. So, I issue the mount request: mount dist:/opt/data/config /mnt/config mount: dist:/opt/data/config failed, reason given by server: Permission denied Ok, so let's see what the server has to say for itself. May 6 23:17:55 dist mountd[3782]: authenticated mount request from bkup:662 for /opt/data/config (/opt/data/config) It says it allowed the mount to take place. How can I diagnose why the client and server are disagreeing on the result?
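
    A few hedged checks that usually narrow this down: confirm what the server actually exports and to whom, and make sure the client's address reverse-resolves on the server to the same "bkup" name the exports file matches against:

      # On the client: what does the server export to us?
      showmount -e dist

      # On the server: re-export, show the effective export table,
      # and confirm how the client's address resolves.
      exportfs -ra
      exportfs -v
      cat /var/lib/nfs/etab
      host <client-ip>        # placeholder; should come back as "bkup"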

    Read the article

  • robot hammering apache2

    - by user1571418
    My apache2 log is bombarded with lines like: 108.5.114.118 - - [03/Aug/2012:15:23:28 +0200] "GET http://xchecker.net/tmp_proxy2012/http/engine.php HTTP/1.0" 404 1690 "http://xchecker.net/tmp_proxy2012/http/engine.php" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98; Win 9x 4.90)" I am puzzled by this -- why is a request for some weird xchecker.net domain ending up on my server in the first place?! The request comes every few dozens of seconds, must be a robot. Any ideas what it is? Btw that URL is valid -- apparently it contains some test page...
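
    Requests with absolute URIs like that are almost always bots probing for an open proxy; as long as mod_proxy is not forwarding them they only cost a 404. A hedged Apache 2.2-style sketch of the usual precautions (the catch-all ServerName is a placeholder):

      # Never act as a forward proxy (off by default, but worth pinning down)
      ProxyRequests Off

      # Make the first vhost a catch-all that refuses requests for hostnames
      # you do not serve, so probes get a 403 instead of reaching a real site.
      <VirtualHost *:80>
          ServerName catchall.invalid
          <Location />
              Order allow,deny
              Deny from all
          </Location>
      </VirtualHost>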

    Read the article
