Search Results

Search found 14226 results on 570 pages for 'feature requests'.


  • NDR for meeting requests

    - by Adam
We've got a mailbox for each department (e.g. [email protected] and [email protected]) and everyone in that department has access to it; access is granted using the Exchange Management Console. If I send a calendar invite to [email protected], I get an Undeliverable report:

        Delivery has failed to these recipients or groups:
        User_A, User_B, User_C
        The e-mail address you entered couldn't be found. Check the address
        and try resending the message. If the problem continues, please
        contact your helpdesk.

    The users are no longer in AD or Exchange, but we cannot find any mention of them in any delegates or permissions anywhere. We only started to get this problem AFTER we upgraded our DCs from Windows Server 2003 to Windows Server 2008, and our Exchange server from Windows Server 2003 running Exchange 2003 to Windows Server 2008 running Exchange 2010.
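
    One place stale entries like this tend to hide is the shared mailbox's own delegate lists rather than folder permissions. A possible check from the Exchange 2010 Management Shell (a sketch; "sales" stands in for the department mailbox alias):

        # look for the departed users in send-on-behalf and calendar delegates
        Get-Mailbox sales | Format-List GrantSendOnBehalfTo
        Get-CalendarProcessing -Identity sales | Format-List ResourceDelegates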

    Read the article

  • Squid closing the connection on long HTTP GET requests

    - by Rhys
When running a database query on a specific external site we use, Squid seems to cut off the connection after a consistent period of time (just over a minute). The query is submitted through a standard web form that uses GET to query their database. Firefox 3 just displays a blank page; Internet Explorer throws a 'Page Cannot Be Displayed' error (tested in v6 and v8). When we perform the same query on the same machine but bypass the Squid proxy, it works fine. The query takes about two and a half minutes to complete. There are a few timeout settings in Squid, but I honestly don't know which one to be looking at. Any possible solutions would be much appreciated. Cheers
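
    The server-side read timeouts in squid.conf are the usual suspects for a cut-off while the origin server is still thinking. A sketch of values to experiment with (the durations are guesses, not Squid's defaults):

        # squid.conf - how long Squid will wait on the origin server
        read_timeout 10 minutes
        forward_timeout 10 minutes
        connect_timeout 2 minutes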

    Read the article

  • Returning "200 OK" in Apache on HTTP OPTIONS requests

    - by i..
I'm attempting to implement cross-domain HTTP access control without touching any code. I've got my Apache(2) server returning the correct Access-Control headers with this block:

        Header set Access-Control-Allow-Origin "*"
        Header set Access-Control-Allow-Methods "POST, GET, OPTIONS"

    I now need to prevent Apache from executing my code when the browser sends an HTTP OPTIONS request (it's stored in the REQUEST_METHOD environment variable), returning "200 OK" instead. How can I configure Apache to respond "200 OK" when the request method is OPTIONS? I've tried this mod_rewrite block, but the Access-Control headers are lost:

        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} OPTIONS
        RewriteRule ^(.*)$ $1 [R=200,L]
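
    One thing worth trying (a sketch, not verified against this setup): mod_headers applies a plain `Header set` only to ordinary successful responses, while the `always` condition also covers internally generated ones such as the R=200 result, so the headers may survive the rewrite:

        Header always set Access-Control-Allow-Origin "*"
        Header always set Access-Control-Allow-Methods "POST, GET, OPTIONS"
        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} OPTIONS
        RewriteRule ^(.*)$ $1 [R=200,L]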

    Read the article

  • One specific VirtualHost in MAMP getting all the requests

    - by julien_c
I'm pulling my hair out over a seemingly trivial issue... I'm using MAMP 2.0 and want to configure a virtual host for local development. Here's my httpd-vhosts.conf:

        NameVirtualHost *:80
        <VirtualHost *:80>
            DocumentRoot /Applications/MAMP/htdocs/mysite/public
            ServerName mysite.local
        </VirtualHost>

    As soon as I add the VirtualHost directive, every request to http://localhost gets served from the DocumentRoot specified for mysite.local. Why?
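
    A likely cause (hedged): once name-based virtual hosting is on, the first <VirtualHost> block becomes the default for any hostname that matches nothing else, and "localhost" matches nothing else here. Declaring a vhost for localhost first usually restores the old behaviour (the first DocumentRoot below assumes a stock MAMP layout):

        NameVirtualHost *:80

        # default vhost: catches http://localhost and anything unmatched
        <VirtualHost *:80>
            DocumentRoot /Applications/MAMP/htdocs
            ServerName localhost
        </VirtualHost>

        <VirtualHost *:80>
            DocumentRoot /Applications/MAMP/htdocs/mysite/public
            ServerName mysite.local
        </VirtualHost>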

    Read the article

  • IIS get full error message for failed requests

    - by BetaRide
I have IIS set up and serving my web service. Unfortunately, if the web service throws an exception, all I get is a blue box with the title "failed request". What options do I have to actually see what went wrong? I'd prefer to get the exception message and a stack trace. I already set up "Failed Request Tracing", but the directory remains empty. If possible I'd prefer to get the stack trace in the browser directly. In case it matters: I have IIS 7.5 on a Win 7 64 Pro box. The web service is a WCF C# project.
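
    For a WCF service specifically, one option (a development-only sketch; it deliberately leaks internals to callers) is to let WCF put the exception detail and stack trace into the fault itself via web.config:

        <system.serviceModel>
          <behaviors>
            <serviceBehaviors>
              <behavior>
                <!-- returns exception details and stack traces in faults -->
                <serviceDebug includeExceptionDetailInFaults="true" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
        </system.serviceModel>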

    Read the article

  • Windows Server 2008 IIS7 - page requests randomly hang or timeout

    - by seb835
I've just installed a fresh copy of Windows Server 2008 x64 with IIS7 and PHP (FastCGI). However, I'm noticing that after moving my web site from a similarly specified machine to this one, I'm getting issues. The issue is that randomly, as I'm clicking through the web pages being served, the browser will suddenly hang saying "waiting for mysite.com...". Sometimes the page will then time out, or finally resolve after 20 to 30 seconds, but maybe missing the CSS style sheet. Very strange, as this is the only website installed on this new server, and it's only myself using/testing it. The server is installed in the same data centre, below the old server. Has anyone else had a similar problem? I tried increasing the worker processes for the application pool to 10 (from 1), but this had no effect. It seems to happen most frequently after 1 or 2 minutes of inactivity on the website, then trying to load/refresh a web page. Many thanks for any info and help. Kind regards, Seb
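
    Given that it correlates with a minute or two of inactivity, the application pool's idle time-out and the FastCGI process settings are worth ruling out. A sketch of things to inspect from an elevated prompt (standard IIS7 appcmd; the pool name is an assumption):

        REM any requests currently stuck in a worker process?
        %windir%\system32\inetsrv\appcmd list requests
        REM dump the pool's idle time-out and recycling settings
        %windir%\system32\inetsrv\appcmd list apppool "DefaultAppPool" /text:*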

    Read the article

  • Blocking specific IP requests

    - by user42908
Hi, I own a VPS running Ubuntu with an Apache stack. Recently I have been getting continuous requests from the IP static-195.22.94.120.addr.tdcsong.se (source port 54303, to port 12337). I have already installed 'arno-iptables-firewall' and have iptables blocking 195.22.94.120. Still, I see requests from that IP via tcpdump. May I know what else I can do to protect my VPS? Thank you.
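
    Worth noting (hedged): tcpdump captures inbound packets before netfilter gets to drop them, so traffic showing up in tcpdump does not by itself mean the block is failing. One way to verify the rule is actually matching is to watch its packet counters:

        # drop everything from the offending address...
        iptables -I INPUT -s 195.22.94.120 -j DROP
        # ...then confirm the counters climb while the traffic continues
        iptables -L INPUT -v -n | grep 195.22.94.120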

    Read the article

  • What is the meaning of the 'Personalities' feature under /proc/mdstat

    - by drcelus
On some systems I see this:

        Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
        md1 : active raid1 sdb1[1] sda1[0]
              10485696 blocks [2/2] [UU]
        md2 : active raid1 sdb2[1] sda2[0]
              477371328 blocks [2/2] [UU]

    And other systems show:

        Personalities : [raid1]
        md0 : active raid1 sdb2[1] sda2[0]
              204788 blocks super 1.0 [2/2] [UU]
        md1 : active raid1 sdb1[1] sda1[0]
              4193272 blocks super 1.1 [2/2] [UU]
        md2 : active raid1 sda3[0] sdb3[1]
              483985276 blocks super 1.1 [2/2] [UU]
              bitmap: 0/4 pages [0KB], 65536KB chunk

    I wonder what the meaning of Personalities is, and what the impact of having different values would be.
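
    For context (a hedged note): the Personalities line lists the RAID "personality" drivers the running kernel currently has available, so differing lists usually just reflect which md modules are loaded on each box, not anything about the arrays themselves. Comparing loaded modules shows this directly:

        # the loaded md personality modules should mirror the Personalities line
        lsmod | grep -E 'raid|linear|multipath'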

    Read the article

  • Technical details for Server 2012 de-duplication feature

    - by syneticon-dj
Now that Windows Server 2012 comes with a de-duplication feature for NTFS volumes, I am having a hard time finding technical details about it. I can deduce from the TechNet documentation that the de-duplication action itself is an asynchronous process - not unlike how the SIS Groveler used to work - but there is virtually no detail about the implementation (algorithms used, resources needed; even the info on performance considerations is nothing but a bunch of rule-of-thumb recommendations). Insights and pointers are greatly appreciated; a comparison to Solaris' ZFS de-duplication efficiency for a set of scenarios would be wonderful.
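
    Not the internals, but for probing the implementation empirically, Server 2012 ships a set of dedup cmdlets (a sketch; run against a volume with the feature enabled):

        # savings, chunk-store state and job behaviour, per volume
        Get-DedupStatus | Format-List *
        Get-DedupVolume | Format-List *
        Get-DedupJob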

    Read the article

  • IPv6 feature in Network Adaptor is Slowing Internet

    - by Teknophilia
For the past few days, my internet browsing has been very poor. It's not a matter of speed, as a speed test will give at least 15 Mbps. It seems as if my laptop has a hard time actually connecting to sites. I've found a possible culprit, but don't know why it would affect anything: going to the adapter settings and disabling IPv6, but leaving IPv4, brings my browsing back to normal. Re-enabling IPv6 brings back the issue. This is strange, because I have always had IPv6 enabled. Moreover, using sites that test IPv6 compatibility, I fail with IPv6 enabled on my adapter and pass when it's disabled. Any ideas about why this is happening, and how to fix it?
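
    A hedged note: this pattern (fast speed test, slow connections) often means the machine advertises IPv6 support, tries v6 destinations first, and only falls back to IPv4 after a timeout, which feels exactly like a "hard time connecting". A quick check of whether v6 connectivity is actually broken, from a command prompt:

        ping -6 ipv6.google.com
        netsh interface ipv6 show interfaces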

    Read the article

  • IPTABLES syntax help to forward Remote Desktop requests to a VM [CentOS host]

    - by NVRAM
I've a VM running MS Windows XP hosted on my CentOS 5.4 machine. I can rdesktop into it from the hosting machine and work just fine using the private address (192.168.122.65), but I now need to allow Remote Desktop access from other computers (not just the machine hosting the VM). [Edit] I only need to allow access for a day or so, so I don't want to add a NIC (for XP activation reasons). Could someone help me with the iptables syntax? The VM is on a private/virtual network (192.168.122.65) and my CentOS machine is on a physical network at 10.1.3.38 (with 192.168.122.1 as the gateway for the virtual net). I found this question, but none of the answers seemed to work, and I'm a bit timid about blindly trying variations. My FORWARD rules are as listed below. Thanks in advance.

        # iptables -L FORWARD
        Chain FORWARD (policy ACCEPT)
        target     prot opt source              destination
        ACCEPT     all  --  anywhere            192.168.122.0/24    state RELATED,ESTABLISHED
        ACCEPT     all  --  192.168.122.0/24    anywhere
        ACCEPT     all  --  anywhere            anywhere
        REJECT     all  --  anywhere            anywhere            reject-with icmp-port-unreachable
        REJECT     all  --  anywhere            anywhere            reject-with icmp-port-unreachable
        RH-Firewall-1-INPUT  all  --  anywhere  anywhere

    [Edit] If I do play "blindly", is there a simple way to reset the settings on CentOS (a la service network restart)?
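
    A hedged sketch of the usual recipe: DNAT the RDP port on the host's physical address to the guest, and accept the forwarded traffic ahead of the REJECT rules:

        # rewrite RDP traffic arriving at the host to the XP guest
        iptables -t nat -I PREROUTING -d 10.1.3.38 -p tcp --dport 3389 \
                 -j DNAT --to-destination 192.168.122.65:3389
        # accept the forwarded traffic before the REJECT rules see it
        iptables -I FORWARD -d 192.168.122.65 -p tcp --dport 3389 -j ACCEPT

    As for resetting after blind experiments: on CentOS, `service iptables restart` reloads the rule set saved in /etc/sysconfig/iptables, so unsaved experiments disappear.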

    Read the article

  • Preventing 304 Not Modified Requests with nginx

    - by ustun
I am running nginx and have the following block for expiration:

        expires 52w;

    However, when I use the Google Chrome developer tools to observe network traffic, some of the assets are loaded from cache (200, from cache) while most of the assets still make a request to the server (304 Not Modified). I want to load all assets from cache (200, from cache) without communicating with the server, if possible. What would be the required change in my nginx configuration?
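
    A hedged sketch of a more explicit variant; note that a manual refresh makes browsers revalidate (hence the 304s) no matter what the headers say, so the effect only shows up on normal navigation:

        location ~* \.(css|js|png|jpg|gif|ico|woff)$ {
            expires 52w;
            add_header Cache-Control "public";
        }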

    Read the article

  • Restarting nginx backends without losing requests

    - by Oli
I'm sure it's been asked before in different words, but I run several Django sites via uwsgi (emperor mode) behind nginx. It's all a fairly standard configuration, but I find that if I restart the central uwsgi process, nginx just throws 502s rather than waiting for the socket to become available again. I recognise that most of this is probably for a reason, but people seeing 502 errors really stings me. It's certainly not something I want a client to see. So... Can I beg nginx to wait for/retry backends? Or, is there anything (other than the obvious) I can do to minimise commercial damage from uwsgi restarts?
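
    One commonly suggested angle (a sketch, untested against this exact stack): instead of restarting the whole emperor, have it gracefully reload a single vassal; the master keeps the socket open while workers restart, so nginx briefly queues requests instead of failing them:

        # emperor mode: touching a vassal's config triggers a graceful reload
        # of just that app (the vassal path is an assumption)
        touch /etc/uwsgi/vassals/mysite.ini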

    Read the article

  • HTTP requests: POST vs GET

    - by behrk2
Hi everyone, I am using a lot of HTTP requests in an application that I am writing, which uses OAuth. Currently, I am sending my GET and POST requests the same way:

        HttpConnection connection = (HttpConnection) Connector.open(url + connectionParameters);
        connection.setRequestMethod(method);
        connection.setRequestProperty("WWW-Authenticate", "OAuth realm=api.netflix.com");
        int responseCode = connection.getResponseCode();

    And this is working fine. I am successfully POSTing and GETting. However, I am worried that I am not doing POST the right way. Do I need to include the following if-statement in the above code?

        if (method.equals("POST") && postData != null) {
            connection.setRequestProperty("Content-type", "application/x-www-form-urlencoded");
            connection.setRequestProperty("Content-Length", Integer.toString(postData.length));
            OutputStream requestOutput = connection.openOutputStream();
            requestOutput.write(postData);
            requestOutput.close();
        }

    If so, why? What's the difference? I would appreciate any feedback. Thanks!

    Read the article

  • Aggregating / Collecting AJAX requests

    - by Ganesh Shankar
I have a situation where a user can manipulate a large set of data (presented in a table) by using a bunch of filters represented as checkboxes. The page is AJAXed up so the user doesn't have to wait for a full page refresh every time they click a filter. The way it's currently implemented is by having an event handler watch all the checkboxes and request filtered data from the server when a click event is triggered. This works fine. However, there is a usability and performance issue with doing it this way. For example, if a user clicks 6 checkboxes, 6 AJAX requests are triggered and they all come back at various intervals, causing the page to be updated 6 times. This will most probably annoy the user and seems rather inefficient. I want to put some kind of timeout on the event handler to do something like this: "Wait for 1 second, and if there are no more filters clicked, trigger the AJAX request." However, at the moment I've only been able to delay all 6 requests by 1 second. I'm not sure how to aggregate/collect the filter info into 1 AJAX request. Any suggestions would be greatly appreciated!
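
    The trick is to keep a single shared timer (rather than one per click) and to read the combined checkbox state only when that timer finally fires. A sketch (the selector, URL and updateTable callback are invented):

        var filterTimer = null;
        $('.filter-checkbox').click(function () {
            clearTimeout(filterTimer);          // a new click resets the clock
            filterTimer = setTimeout(function () {
                // collect every checked filter into one request
                var filters = $('.filter-checkbox:checked').map(function () {
                    return this.value;
                }).get();
                $.get('/filtered-data', { filters: filters.join(',') }, updateTable);
            }, 1000);                           // "wait 1 second after the last click"
        });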

    Read the article

  • memcache is not storing data across requests

    - by morpheous
I am new to using memcache, so I may be doing something wrong. I have written a wrapper class around memcache. The wrapper class has only static methods, so it is a quasi-singleton. The class looks something like this:

        class myCache
        {
            private static $memcache = null;
            private static $initialized = false;

            public static function init()
            {
                if (self::$initialized)
                    return;
                self::$memcache = new Memcache();
                if (self::configure()) // connects to daemon
                {
                    self::store('foo', 'bar');
                }
                else
                    throw new ConnectionError('I barfed');
            }

            public static function store($key, $data, $flag=MEMCACHE_COMPRESSED, $timeout=86400)
            {
                if (self::$memcache->get($key) !== false)
                    return self::$memcache->replace($key, $data, $flag, $timeout);
                return self::$memcache->set($key, $data, $flag, $timeout);
            }

            public static function fetch($key)
            {
                return self::$memcache->get($key);
            }
        }

        // in my index.php file, I use the class like this
        require_once('myCache.php');
        myCache::init();
        echo 'Stored value is: ' . myCache::fetch('foo');

    The problem is that the myCache::init() method is being executed in full every time a page is requested. I then remembered that static variables do not maintain state across page requests. So I decided instead to store the flag that indicates whether the server contains the start-up data (for our purposes, the variable 'foo' with value 'bar') in memcache itself. Once the status flag is stored in memcache itself, it solves the problem of the initialisation data being loaded for every page request (which, quite frankly, defeats the purpose of memcache). However, having solved that problem, when I come to fetch the data in memcache, it is empty. I don't understand what's going on. Can anyone clarify how I can store my data once and retrieve it across page requests? BTW (just to clarify), the get/set is working correctly, and if I allow memcache to load the initialisation data for each page request (which is silly), then the data is available in memcache.
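
    A hedged observation: every PHP request is a fresh process, so re-running init() (and reconnecting) on each request is normal and unavoidable; what persists between requests is the data held by the memcached daemon itself. A minimal cross-request sanity check, assuming the daemon listens on 127.0.0.1:11211:

        <?php
        $m = new Memcache();
        $m->connect('127.0.0.1', 11211) or die('cannot reach memcached');

        if (($value = $m->get('foo')) === false) {
            // first hit since the daemon (re)started: seed the value
            $m->set('foo', 'bar', MEMCACHE_COMPRESSED, 86400);
            $value = 'bar (freshly stored)';
        }
        echo 'Stored value is: ' . $value;

    If the value vanishes between requests here too, the daemon is likely restarting (or the connection silently failing) rather than anything in the wrapper class.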

    Read the article

  • Cache AJAX requests

    - by Willem
I am sending AJAX GET requests to a PHP application and would like to cache the request returns for later use. Since I am using GET, this should be possible because different requests request different URLs (e.g. getHTML.php?page=2 and getHTML.php?page=5). What headers do I need to declare in the PHP application to make the client's browser cache the request URL content in a proper way? Do I need to declare anything in the JavaScript which handles the AJAX request (I am using jQuery's $.ajax function, which has a cache parameter)? How would I handle edits which change the content of e.g. getHTML.php?page=2, so that the client doesn't fall back to the cached version? Adding another parameter to the GET request, e.g. getHTML.php?page=2&version=2, is not possible because the link to the requested URL is created automatically without any checking (which is preferably the way I want it to be). How will the browser react when I try to AJAX-request a cached request URL? Will the AJAX request return success immediately? Thanks, Willem
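
    For the headers side, a sketch of what getHTML.php could emit (the render_page() helper is hypothetical). The trade-off is inherent: a long max-age lets the browser answer the AJAX request from cache without touching the network, but edits then go unseen until expiry unless the URL changes, whereas a Last-Modified validator costs one conditional request and gets a 304 back:

        <?php
        $lastModified = filemtime(__FILE__);  // stand-in for the content's real mtime
        header('Cache-Control: public, max-age=604800');
        header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $lastModified) . ' GMT');
        echo render_page((int)$_GET['page']); // hypothetical page renderer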

    Read the article

  • Limit TCP requests per IP

    - by asmo
Hello! I'm wondering how to limit the TCP requests per client (per specific IP) in Java. For example, I would like to allow a maximum of X requests per Y seconds for each client IP. I thought of using a static Timer/TimerTask in combination with a HashSet of temporarily restricted IPs:

        private static final Set<InetAddress> restrictedIPs =
            Collections.synchronizedSet(new HashSet<InetAddress>());
        private static final Timer restrictTimer = new Timer();

    So when a user connects to the server, I add his IP to the restricted list and start a task to unrestrict him in X seconds:

        restrictedIPs.add(socket.getInetAddress());
        restrictTimer.schedule(new TimerTask() {
            public void run() {
                restrictedIPs.remove(socket.getInetAddress());
            }
        }, MIN_REQUEST_INTERVAL);

    My problem is that at the time the task runs, the socket object may be closed, and the remote IP address won't be accessible anymore... Any ideas welcomed! Also, if someone knows a Java-framework-built-in way to achieve this, I'd really like to hear it.
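
    A hedged sketch of one way around the closed-socket worry: capture the address in a final local while the socket is definitely open, so the scheduled task never touches the socket at all:

        final InetAddress remote = socket.getInetAddress(); // taken while open
        restrictedIPs.add(remote);
        restrictTimer.schedule(new TimerTask() {
            public void run() {
                restrictedIPs.remove(remote);   // no reference to the socket here
            }
        }, MIN_REQUEST_INTERVAL);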

    Read the article

  • How to combine Apache requests?

    - by Bruce
To give you the situation in the abstract: I have an AJAX client that often needs to retrieve 3-10 static documents from the server. Those 3-10 documents are selected by the client out of about 100 documents in total. I have no way of knowing in advance which 3-10 documents the client will require. Additionally, those 100 documents are generated from database content, and so change over time. It seems messy to me to have to make 10 AJAX requests for 10 separate documents. My first thought was to write a JSP that could use the include action, i.e. in pseudo-code:

        for (param in params) {
            jsp:include page="[param]"
        }

    But it turns out that Tomcat doesn't just include the HTML resource; it recompiles it, generating a class file every time, which also seems wasteful. Does anyone know of a neat solution for combining Apache requests to static files to make one request rather than several, but without the overhead of, for example, Tomcat generating extra class files for each static file and regenerating them each time the static file changes? Thanks! Hopefully my question is clear - it's a bit long-winded.
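
    One possible shape for this (a sketch; the servlet name, URL and directory layout are invented): a small servlet that streams each requested document into a single response via the RequestDispatcher, which serves static resources without compiling them as JSPs:

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.*;

        // hypothetical usage: GET /bundle?docs=a.html,b.html,c.html
        public class BundleServlet extends HttpServlet {
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                resp.setContentType("text/html");
                for (String doc : req.getParameter("docs").split(",")) {
                    if (doc.contains("..")) continue;  // keep requests inside /docs
                    req.getRequestDispatcher("/docs/" + doc).include(req, resp);
                }
            }
        }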

    Read the article

  • Asynchronous Controller is blocking requests in ASP.NET MVC through jQuery

    - by Jason
I have just started using the AsyncController in my project to take care of some long-running reports. It seemed ideal at the time, since I could kick off the report and then perform a few other actions while waiting for it to come back and populate elements on the screen. My controller looks a bit like this. I tried to use a thread to perform the long task, which I'd hoped would free up the controller to take more requests:

        public class ReportsController : AsyncController
        {
            public void LongRunningActionAsync()
            {
                AsyncManager.OutstandingOperations.Increment();
                var newThread = new Thread(LongTask);
                newThread.Start();
            }

            private void LongTask()
            {
                // Do something that takes a really long time
                //.......
                AsyncManager.OutstandingOperations.Decrement();
            }

            public ActionResult LongRunningActionCompleted(string message)
            {
                // Set some data up on the view or something...
                return View();
            }

            public JsonResult AnotherControllerAction()
            {
                // Do a quick task...
                return Json("...");
            }
        }

    But what I am finding is that when I call LongRunningAction using the jQuery AJAX request, any further requests I make after that back up behind it and are not processed until LongRunningAction completes. For example, call LongRunningAction, which takes 10 seconds, and then call AnotherControllerAction, which takes less than a second: AnotherControllerAction simply waits until LongRunningAction completes before returning a result. I've also checked the jQuery code, but this still happens if I specifically set "async: true":

        $.ajax({
            async: true,
            type: "POST",
            url: "/Reports.aspx/LongRunningAction",
            dataType: "html",
            success: function(data, textStatus, XMLHttpRequest) {
                // ...
            },
            error: function(XMLHttpRequest, textStatus, errorThrown) {
                // ...
            }
        });

    At the moment I just have to assume that I'm using it incorrectly, but I'm hoping one of you guys can clear my mental block!
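
    A hedged hunch worth ruling out: ASP.NET serialises concurrent requests from the same session whenever the session is writable, which produces exactly this queueing regardless of async controllers. From MVC 3 onward a controller can opt down to read-only session access:

        using System.Web.SessionState;

        // sketch: release the session write lock so parallel requests
        // from the same browser session are no longer serialised
        [SessionState(SessionStateBehavior.ReadOnly)]
        public class ReportsController : AsyncController
        {
            // ... actions as before ...
        }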

    Read the article

  • .htaccess - alias all www-only requests to subdirectory

    - by CodeMoose
Trying to install the wonderful Concrete5 CMS to use as my main site engine - the problem is it has about 15 different files and directories, and they clutter up my root. I'd really like to move it to a /_concrete/ subdirectory and still maintain it in the domain root. htaccess has never been my strong suit - after a lot of research and learning, and a lot of error 500s, my frustration is overriding my pride and I'm posting here. Here's exactly what I'm trying to accomplish:

    - Any requests that come through www.domain.com are forwarded to www.domain.com/_concrete/, except in the case of an existing file.
    - The end-user URL shouldn't change - they will still see the site as www.domain.com, even though they're being served www.domain.com/_concrete/.
    - Multiple subdomains exist on this site as sub-folders within the root - thus, only requests coming through www.domain.com should be redirected.

    Here's the closest I got with my htaccess, which produces an error 500:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?domain\.com [NC]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !^_concrete
        RewriteRule ^(.*)$ _concrete/$1 [L,QSA]

    This is the result of 4 hours of sweat and blood (mostly blood), so I have to be close. I'm hoping one of your fine minds can point out a stupid mistake and put this thing to rest swiftly. Thanks for your time! Addendum: I previously posted ".htaccess - alias domain root to subfolder" a while ago, which got me started. Please don't fall into the trap of thinking it's a duplicate.
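
    A hedged guess at the 500: %{REQUEST_FILENAME} expands to an absolute filesystem path, so !^_concrete always passes, and each rewritten URL gets rewritten again into an infinite loop (which Apache reports as a 500). Testing the URI instead usually breaks the cycle:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?domain\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/_concrete/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ _concrete/$1 [L,QSA]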

    Read the article

  • Conditionally installed feature not working in WiX

    - by Damien
Hi, I have a setup which I need to support on IIS6 and IIS7. For now I'm using the built-in IIS extensions for IIS6, like so:

        <Component Id="C_IISApplication" Guid="{9099909C-B770-4df2-BE08-E069A718B938}">
          <iis:WebSite Id='TSIWSWebSite' Description='TSWeb' SiteId='*' Directory='INSTALLDIR'>
            <iis:WebAddress Id='tcpAddress' Port='8081' />
          </iis:WebSite>
          <iis:WebAppPool Id="BlahWSApplicationPool" Name="Blah" />
          <iis:WebVirtualDir Id="VirtualDir" Alias="Blah" Directory="INSTALLDIR"
                             WebSite="BlahWSWebSite" DirProperties="WebVirtualDirProperties">
            <iis:WebApplication Id="WebApplication" Name="Blah" WebAppPool="BlahWSApplicationPool"/>
          </iis:WebVirtualDir>
        </Component>

    I have tried a condition in the features, like so:

        <Feature Title="IIS6" Id="IIS6" Description="IIS6" ConfigurableDirectory="INSTALLDIR"
                 Level="1" Absent="disallow" Display="hidden">
          <ComponentRef Id="C_IISApplication" />
          <Condition Level="0"><![CDATA[IISVERSION <> '#6']]></Condition>
        </Feature>

    No matter what the value of my condition, the metabase stuff gets executed and I get an error on IIS7 systems. I have also tried putting the condition in the component, and that didn't work either. Is there something wrong with my usage?
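
    For reference (hedged): IISVERSION is not a built-in property, so unless it is populated by a registry search before costing, the condition compares against an empty string and never fires. A sketch of the search most setups pair with this condition:

        <Property Id="IISVERSION">
          <RegistrySearch Id="IisMajorVersion" Root="HKLM"
                          Key="SOFTWARE\Microsoft\InetStp"
                          Name="MajorVersion" Type="raw" />
        </Property>
        <!-- a raw REG_DWORD read comes back formatted as "#6", "#7", ... -->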

    Read the article

  • Call long running operation in WSS feature OnActivated Event

    - by dirq
More specifically: how do I reference SPContext in a web service with [SoapDocumentMethod(OneWay=true)]? We are creating a feature that needs to run a job when a site is created. The job takes about 4 minutes to complete, so we made a web service that we can call when the feature is activated. This works, but we want it to run asynchronously now. We've found the SoapDocumentMethod's OneWay property, and that would work awesomely, but the SPContext is now null. We have our web services in the _vti_bin virtual directory, so they're available in each Windows SharePoint Services site. I was using SPContext.Current.Web to get the site and perform the long-running operation. I wanted to just fire and forget about it by returning a SOAP response right away and letting the process run. How can I get the current SPContext? I used to be able to do this in my web service:

        SPWeb mySite = SPContext.Current.Web;

    Can I get the same context when I have the [SoapDocumentMethod(OneWay=true)] attribute applied to my web service? Or must I recreate the SPWeb from the URL? This is similar to this thread: http://stackoverflow.com/questions/340192/webservice-oneway-and-new-spsitemyurl Update: I've tried these two ways, but they didn't work:

        SPWeb targetSite = SPControl.GetContextWeb(this.Context);
        SPWeb targetSite2 = SPContext.GetContext(this.Context).Web;
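
    A hedged sketch of the fallback: a OneWay method runs after the response has already been flushed, so the request context (and with it SPContext) is gone by design, and rebuilding the objects from a URL passed in by the caller is the usual route:

        // siteUrl arrives as an ordinary parameter on the one-way method
        using (SPSite site = new SPSite(siteUrl))
        using (SPWeb web = site.OpenWeb())
        {
            // the long-running provisioning work goes here
        }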

    Read the article

  • Google Maps API: internal server error when inserting a feature

    - by user142764
Hi, I try to insert features on a custom Google map: I use the sample code from the doc, but I get a ServiceException (Internal Server Error) when I call the service's insert method. Here is what I do. I create a map and get the resulting MapEntry object:

        myMapEntry = (MapEntry) service.insert(mapUrl, myEntry);

    This works fine: I can see the map I created in "My Maps" on Google. I use the feed URL from the map to insert a feature:

        final URL featureEditUrl = myMapEntry.getFeatureFeedUrl();

    I create a KML string using the sample from the doc:

        String kmlStr = "<Placemark xmlns=\"http://www.opengis.net/kml/2.2\">"
            + "<name>Aunt Joanas Ice Cream Shop</name>"
            + "<Point>"
            + "<coordinates>-87.74613826475604,41.90504663195118,0</coordinates>"
            + "</Point></Placemark>";

    And when I call the insert method, I get an Internal Server Error. I must be doing something wrong, but I can't see what; can anybody help? Here is the complete code I use:

        public void doCreateFeaturesFormap(MapEntry myMap) throws ServiceException, IOException {
            final URL featureEditUrl = myMap.getFeatureFeedUrl();
            FeatureEntry featureEntry = new FeatureEntry();
            try {
                String kmlStr = "<Placemark xmlns=\"http://www.opengis.net/kml/2.2\">"
                    + "<name>Aunt Joanas Ice Cream Shop</name>"
                    + "<Point>"
                    + "<coordinates>-87.74613826475604,41.90504663195118,0</coordinates>"
                    + "</Point></Placemark>";
                XmlBlob kml = new XmlBlob();
                kml.setFullText(kmlStr);
                featureEntry.setKml(kml);
                featureEntry.setTitle(new PlainTextConstruct("Feature Title"));
            } catch (NullPointerException e) {
                System.out.println("Error: " + e.getClass().getName());
            }
            FeatureEntry myFeature = (FeatureEntry) service.insert(featureEditUrl, featureEntry);
        }

    Thanks in advance, Vincent.
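
    Not a fix, but a way to see what the server actually objects to (hedged; in the GData Java client the ServiceException usually carries the raw response):

        try {
            service.insert(featureEditUrl, featureEntry);
        } catch (ServiceException e) {
            // the response body often names the offending element
            System.err.println(e.getMessage());
            System.err.println(e.getResponseBody());
        }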

    Read the article
