Search Results

Search found 37180 results on 1488 pages for 'proxy pass request failed'.


  • SSH X11 forwarding does not work. Why?

    - by Ole Tange
    This is a debugging question. When you ask for clarification please make sure it is not already covered below. I have 4 machines: Z, A, N, and M. To get to A you have to log into Z first. To get to M you have to log into N first. The following works:

        ssh -X Z xclock
        ssh -X Z ssh -X Z xclock
        ssh -X Z ssh -X A xclock
        ssh -X N xclock
        ssh -X N ssh -X N xclock

    But this does not:

        ssh -X N ssh -X M xclock
        Error: Can't open display:

    The $DISPLAY is clearly not set when logging in to M. The question is why? Z and A share the same NFS homedir. N and M share the same NFS homedir. N's sshd runs on a non-standard port.

        $ grep X11 <(ssh Z cat /etc/ssh/ssh_config)
        ForwardX11 yes
        #   ForwardX11Trusted yes
        $ grep X11 <(ssh N cat /etc/ssh/ssh_config)
        ForwardX11 yes
        #   ForwardX11Trusted yes

    N:/etc/ssh/ssh_config == Z:/etc/ssh/ssh_config and M:/etc/ssh/ssh_config == A:/etc/ssh/ssh_config. /etc/ssh/sshd_config is the same for all 4 machines (apart from Port and login permissions for certain groups). If I forward M's ssh port to my local machine it still does not work:

        terminal1$ ssh -L 8888:M:22 N
        terminal2$ ssh -X -p 8888 localhost xclock
        Error: Can't open display:

    A:.Xauthority contains A, but M:.Xauthority does not contain M. xauth is installed in /usr/bin/xauth on both A and M. xauth is being run when logging in to A but not when logging in to M. ssh -vvv does not complain about X11 or xauth when logging in to A and M. Both say:

        debug2: x11_get_proto: /usr/bin/xauth list :0 2>/dev/null
        debug1: Requesting X11 forwarding with authentication spoofing.
        debug2: channel 0: request x11-req confirm 0
        debug2: client_session2_setup: id 0
        debug2: channel 0: request pty-req confirm 1
        debug1: Sending environment.

    I have a feeling the problem may be related to M missing in M:.Xauthority (caused by xauth not being run) or that $DISPLAY is somehow being disabled by a login script, but I cannot figure out what is wrong.
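
    A hedged diagnostic sketch, not from the original post: chaining both hops inside a single client connection takes N's sshd out of the X11 path, so if the commands below work, the fault lies in how M's sshd or login scripts run xauth rather than in the forwarding chain itself. The -W flag assumes OpenSSH 5.4 or later on the local machine; host names follow the question.

        # ~/.ssh/config on the local machine
        Host M
            ProxyCommand ssh N -W %h:%p

        # X11 end-to-end through the jump host:
        ssh -X M xclock

        # on M, check whether xauth ever ran for this login:
        xauth list
        echo $DISPLAY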

  • nginx : backend https, proxy_pass shows ip

    - by Vulpo
    I am using nginx as a reverse proxy listening on port 80 (http), using proxy_pass to forward requests to backend http and https servers. Everything works fine for my http server, but when I try to reach the https server through the nginx reverse proxy, the IP of the https server is shown in the client's web browser. I want the URI of the nginx server to be shown instead of the https backend server's IP (once again, this works fine for the http server but not for the https server). Here is my configuration file:

        server {
            listen 80;
            server_name domain1.com;
            access_log off;
            root /var/www;
            if ($request_method !~ ^(GET|HEAD|POST)$ ) {
                return 444;
            }
            location / {
                proxy_pass http://ipOfHttpServer:port/;
            }
        }
        server {
            listen 80;
            server_name domain2.com;
            access_log off;
            root /var/www;
            if ($request_method !~ ^(GET|HEAD|POST)$ ) {
                return 444;
            }
            location / {
                proxy_pass http://ipOfHttpsServer:port/;
                proxy_set_header X_FORWARDED_PROTO https;
                #proxy_set_header Host $http_host;
            }
        }

    When I try the "proxy_set_header Host $http_host" and "proxy_set_header Host $host" directives, the page can't be reached (page not found). But when I comment them out, the IP of the https server is shown in the browser (which is bad). Does anyone have an idea? My other config files are:

        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        #proxy_hide_header X-Powered-By;
        proxy_intercept_errors on;
        proxy_buffering on;
        proxy_cache_key "$scheme://$host$request_uri";
        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m inactive=7d max_size=700m;

    and:

        user www-data;
        worker_processes 2;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;
        events {
            worker_connections 1024;
        }
        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            access_log /var/log/nginx/access.log;
            server_names_hash_bucket_size 64;
            sendfile off;
            tcp_nopush on;
            #keepalive_timeout 0;
            keepalive_timeout 65;
            tcp_nodelay on;
            gzip on;
            gzip_comp_level 5;
            gzip_http_version 1.0;
            gzip_min_length 0;
            gzip_types text/plain text/html text/css image/x-icon application/x-javascript;
            gzip_vary on;
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    Thanks for your help!
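
    A hedged sketch of one likely culprit: the shared config sets proxy_redirect off, so when the https backend answers with a redirect built from its own address, nginx passes that Location header through untouched and the browser lands on the backend's IP. Re-enabling rewriting for that vhost (backend address placeholders as in the question) might look like:

        location / {
            proxy_pass http://ipOfHttpsServer:port/;
            proxy_set_header X_FORWARDED_PROTO https;
            # rewrite Location/Refresh headers from the backend's address
            # back to the name the client asked for:
            proxy_redirect http://ipOfHttpsServer:port/ http://domain2.com/;
        }

    If the backend issues redirects to its https:// address, a second proxy_redirect line with the https form of the address may be needed as well.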

  • Tunnel over HTTPS

    - by ephemient
    At my workplace, the traffic blocker/firewall has been getting progressively worse. I can't connect to my home machine on port 22, and lack of ssh access makes me sad. I was previously able to use SSH by moving it to port 5050, but I think some recent filters now treat this traffic as IM and redirect it through another proxy, maybe. That's my best guess; in any case, my ssh connections now terminate before I get to log in.

    These days I've been using Ajaxterm over HTTPS, as port 443 is still unmolested, but this is far from ideal. (Sucky terminal emulation, lack of port forwarding, my browser leaks memory at an amazing rate...)

    I tried setting up mod_proxy_connect on top of mod_ssl, with the idea that I could send a CONNECT localhost:22 HTTP/1.1 request through HTTPS, and then I'd be all set. Sadly, this seems not to work; the HTTPS connection works, up until I finish sending my request; then SSL craps out. It appears as though mod_proxy_connect takes over the whole connection instead of continuing to pipe through mod_ssl, confusing the heck out of the HTTPS client. Is there a way to get this to work?

    I don't want to do this over plain HTTP, for several reasons:

    - Leaving a big fat open proxy like that just stinks
    - A big fat open proxy is not good over HTTPS either, but with authentication required it feels fine to me
    - HTTP goes through a proxy -- I'm not too concerned about my traffic being sniffed, as it's ssh that'll be going "plaintext" through the tunnel -- but it's a lot more likely to be mangled than HTTPS, which fundamentally cannot be proxied

    Requirements:

    - Must work over port 443, without disturbing other HTTPS traffic (i.e. I can't just put the ssh server on port 443, because I would no longer be able to serve pages over HTTPS)
    - I have or can write a simple port forwarder client that runs under Windows (or Cygwin)

    Edit: DAG: Tunnelling SSH over HTTP(S) has been pointed out to me, but it doesn't help: at the end of the article, they mention Bug 29744 - CONNECT does not work over existing SSL connection, preventing tunnelling over HTTPS -- exactly the problem I was running into. At this point, I am probably looking at some CGI script, but I don't want to list that as a requirement if there's a better solution available.
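
    A hedged sketch of one approach within these constraints, using proxytunnel, which can wrap its CONNECT request inside a TLS session of its own (the -E flag; check your build's options before relying on it). The host names are placeholders, and whether Apache's mod_proxy_connect accepts CONNECT inside SSL is exactly the bug cited above, so the listener on 443 may need to be something other than Apache:

        # ~/.ssh/config on the client (a Windows/Cygwin build of proxytunnel assumed)
        Host home
            HostName home.example.org
            ProxyCommand proxytunnel -E -p home.example.org:443 -d localhost:22

        # then:
        ssh home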

  • Detecting dead proxies

    - by Afnan
    Is it possible to detect which proxy is active and which is dead? Using C# and a combo box containing a list of proxies with port numbers, is there any way to take every proxy one by one and determine whether it is dead or active?

        Microsoft.Win32.RegistryKey registry = Microsoft.Win32.Registry.CurrentUser.OpenSubKey("Software\\Microsoft\\Windows\\CurrentVersion\\Internet Settings", true);
        registry.SetValue("ProxyEnable", 1);
        registry.SetValue("ProxyServer", comboBox1.Text);
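
    A minimal sketch of one way to probe each list entry before writing it to the registry; the test URL and timeout are assumptions, not from the original post:

        using System.Net;

        // Hypothetical helper: returns false on timeout or connection failure.
        static bool IsProxyAlive(string host, int port, int timeoutMs)
        {
            try
            {
                var request = (HttpWebRequest)WebRequest.Create("http://www.example.com/");
                request.Proxy = new WebProxy(host, port);   // e.g. "10.0.0.1", 8080
                request.Timeout = timeoutMs;
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    return response.StatusCode == HttpStatusCode.OK;
                }
            }
            catch (WebException)
            {
                return false;
            }
        }

    Looping over the combo box items and calling IsProxyAlive(host, port, 3000) would mark the dead ones; running the checks on a background thread keeps the UI responsive.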

  • How to handle requests from IIS to Apache?

    - by omoto
    Hi all! I need to create something like a proxy between IIS and Apache. I would like to set the host headers up on IIS, because it's a Windows 2003 Server, but I have some applications that should be hosted under Apache. In this case I think that I should set up something like a proxy... Any ideas?
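
    A hedged sketch of one common arrangement: Apache sits in front on port 80 and forwards the IIS-bound host headers with mod_proxy, while IIS moves to another port. All host names and the IIS port below are placeholders:

        # httpd.conf -- requires mod_proxy and mod_proxy_http
        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName iis-app.example.com
            ProxyPreserveHost On
            ProxyPass        / http://localhost:8080/
            ProxyPassReverse / http://localhost:8080/
        </VirtualHost>

        <VirtualHost *:80>
            ServerName apache-app.example.com
            DocumentRoot "C:/www/apache-app"
        </VirtualHost>

    The reverse layout -- IIS stays on 80 and proxies selected sites to Apache -- needs an IIS-side proxy module such as ISAPI_Rewrite, since IIS 6 has no built-in reverse proxy.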

  • JSF2 - use view scope managed bean to pass value between navigation

    - by Fekete Kamosh
    Hi all, I am solving how to pass values from one page to another without making use of a session scoped managed bean. For most managed beans I would like to have only request scope. I created a very, very simple calculator example which passes a Result object, resulting from actions on the request bean (CalculatorBeanRequest), from the 5th phase (Invoke Application) as the initializing value for the new instance of the request bean created in the next lifecycle. In fact, in a production environment we need to pass a much more complicated data object, not as primitive as the Result defined below. What is your opinion on this solution, which covers both possibilities: we stay on the same view or we navigate to a new one? In both cases I can get to the previous value passed via the view scoped managed bean.

    Calculator page:

        <?xml version='1.0' encoding='UTF-8' ?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml"
              xmlns:h="http://java.sun.com/jsf/html">
            <h:head>
                <title>Calculator</title>
            </h:head>
            <h:body>
                <h:form>
                    <h:panelGrid columns="2">
                        <h:outputText value="Value to use:"/>
                        <h:inputText value="#{calculatorBeanRequest.valueToAdd}"/>
                        <h:outputText value="Navigate to new view:"/>
                        <h:selectBooleanCheckbox value="#{calculatorBeanRequest.navigateToNewView}"/>
                        <h:commandButton value="Add" action="#{calculatorBeanRequest.add}"/>
                        <h:commandButton value="Subtract" action="#{calculatorBeanRequest.subtract}"/>
                        <h:outputText value="Result:"/>
                        <h:outputText value="#{calculatorBeanRequest.result.value}"/>
                        <h:outputText value="DUMMY" rendered="#{resultBeanView.dummy}"/>
                    </h:panelGrid>
                </h:form>
            </h:body>
        </html>

    Object to be passed through the lifecycle:

        package cz.test.calculator;

        import java.io.Serializable;

        /**
         * Data object passed among pages.
         * Let's imagine it holds something much more complicated than a primitive int.
         */
        public class Result implements Serializable {
            private int value;

            public void setValue(int value) {
                this.value = value;
            }

            public int getValue() {
                return value;
            }
        }

    Request scoped managed bean used on view "calculator.xhtml":

        package cz.test.calculator;

        import javax.annotation.PostConstruct;
        import javax.faces.bean.ManagedBean;
        import javax.faces.bean.ManagedProperty;
        import javax.faces.bean.RequestScoped;

        @ManagedBean
        @RequestScoped
        public class CalculatorBeanRequest {
            @ManagedProperty(value="#{resultBeanView}")
            ResultBeanView resultBeanView;

            private Result result;
            private int valueToAdd;
            /** Should perform navigation to a new view */
            private boolean navigateToNewView;

            /** Creates a new instance of CalculatorBeanRequest */
            public CalculatorBeanRequest() {
            }

            @PostConstruct
            public void init() {
                // Remember already saved result from view scoped bean
                result = resultBeanView.getResult();
            }

            // Dependency injection
            public void setResultBeanView(ResultBeanView resultBeanView) {
                this.resultBeanView = resultBeanView;
            }

            public ResultBeanView getResultBeanView() {
                return resultBeanView;
            }

            // Getters, setters
            public void setValueToAdd(int valueToAdd) {
                this.valueToAdd = valueToAdd;
            }

            public int getValueToAdd() {
                return valueToAdd;
            }

            public boolean isNavigateToNewView() {
                return navigateToNewView;
            }

            public void setNavigateToNewView(boolean navigateToNewView) {
                this.navigateToNewView = navigateToNewView;
            }

            public Result getResult() {
                return result;
            }

            // Actions
            public String add() {
                result.setValue(result.getValue() + valueToAdd);
                return isNavigateToNewView() ? "calculator" : null;
            }

            public String subtract() {
                result.setValue(result.getValue() - valueToAdd);
                return isNavigateToNewView() ? "calculator" : null;
            }
        }

    And finally the view scoped managed bean to pass the Result variable to the new page:

        package cz.test.calculator;

        import java.io.Serializable;
        import javax.annotation.PostConstruct;
        import javax.faces.bean.ManagedBean;
        import javax.faces.bean.ViewScoped;
        import javax.faces.context.FacesContext;

        @ManagedBean
        @ViewScoped
        public class ResultBeanView implements Serializable {
            private Result result = new Result();

            /** Creates a new instance of ResultBeanView */
            public ResultBeanView() {
            }

            @PostConstruct
            public void init() {
                // Try to find request bean CalculatorBeanRequest and reset the result value
                CalculatorBeanRequest calculatorBeanRequest = (CalculatorBeanRequest) FacesContext.getCurrentInstance()
                        .getExternalContext().getRequestMap().get("calculatorBeanRequest");
                if (calculatorBeanRequest != null) {
                    setResult(calculatorBeanRequest.getResult());
                }
            }

            /** No need for a public modifier, as it is not used on the view
             *  but only in a managed bean within the same package */
            void setResult(Result result) {
                this.result = result;
            }

            /** No need for a public modifier, as it is not used on the view
             *  but only in a managed bean within the same package */
            Result getResult() {
                return result;
            }

            /**
             * Called on the page to instantiate ResultBeanView in the Render Response phase
             */
            public boolean isDummy() {
                return false;
            }
        }

  • Setting custom SQL in django admin

    - by eugene y
    I'm trying to set up a proxy model in the django admin. It will represent a subset of the original model. The code from models.py:

        class MyManager(models.Manager):
            def get_query_set(self):
                return super(MyManager, self).get_query_set().filter(some_column='value')

        class MyModel(OrigModel):
            objects = MyManager()

            class Meta:
                proxy = True

    Now, instead of filter() I need to use a complex SELECT statement with JOINs. What's the proper way to inject it wholly into the custom manager?
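
    A hedged sketch of one option: keep the manager but express the joins with QuerySet.extra(), so the admin still receives a real QuerySet it can paginate and filter. The table and column names below are invented placeholders:

        class MyManager(models.Manager):
            def get_query_set(self):
                qs = super(MyManager, self).get_query_set()
                # extra() grafts raw SQL onto the query the admin will run;
                # the table/column names here are placeholders.
                return qs.extra(
                    tables=['other_table'],
                    where=[
                        'myapp_origmodel.id = other_table.origmodel_id',
                        'other_table.some_column = %s',
                    ],
                    params=['value'],
                )

    Manager.raw() also accepts a full SELECT, but it returns a RawQuerySet, which the admin changelist cannot paginate or filter, so extra() tends to fit the proxy-model-in-admin case better.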

  • Using Node.js as an accelerator for WCF REST services

    - by Elton Stoneman
    Node.js is a server-side JavaScript platform "for easily building fast, scalable network applications". It's built on Google's V8 JavaScript engine and uses an (almost) entirely async event-driven processing model, running in a single thread. If you're new to Node and your reaction is "why would I want to run JavaScript on the server side?", this is the headline answer: in 150 lines of JavaScript you can build a Node.js app which works as an accelerator for WCF REST services*. It can double your messages-per-second throughput, halve your CPU workload and use one-fifth of the memory footprint, compared to the WCF services direct.

    Well, it can if: 1) your WCF services are first-class HTTP citizens, honouring client cache ETag headers in request and response; 2) your services do a reasonable amount of work to build a response; 3) your data is read more often than it's written.

    In one of my projects I have a set of REST services in WCF which deal with data that only gets updated weekly, but which can be read hundreds of times an hour. The services issue ETags and will return a 304 if the client sends a request with the current ETag, which means in the most common scenario the client uses its local cached copy. But when the weekly update happens, then all the client caches are invalidated and they all need the same new data. Then the service will get hundreds of requests with old ETags, and they go through the full service stack to build the same response for each, taking up threads and processing time. Part of that processing means going off to a database on a separate cloud, which introduces more latency and downtime potential.

    We can use ASP.NET output caching with WCF to solve the repeated processing problem, but the server will still be thread-bound on incoming requests, and getting the current ETags reliably needs a database call per request. The accelerator solves that by running as a proxy - all client calls come into the proxy, and the proxy routes calls to the underlying REST service. We could use Node as a straight passthrough proxy and expect some benefit, as the server would be less thread-bound, but we would still have one WCF and one database call per proxy call. But add some smart caching logic to the proxy, and share ETags between Node and WCF (so the proxy doesn't even need to call the service to get the current ETag), and the underlying service will only be invoked when data has changed, and then only once - all subsequent client requests will be served from the proxy cache.

    I've built this as a sample up on GitHub: NodeWcfAccelerator on sixeyed.codegallery. (An architecture diagram appears in the original post.)

    The code is very simple. The Node proxy runs on port 8010 and all client requests target the proxy. If the client request has an ETag header then the proxy looks up the ETag in the tag cache to see if it is current - the sample uses memcached to share ETags between .NET and Node. If the ETag from the client matches the current server tag, the proxy sends a 304 response with an empty body to the client, telling it to use its own cached version of the data. If the ETag from the client is stale, the proxy looks for a local cached version of the response, checking for a file named after the current ETag. If that file exists, its contents are returned to the client as the body in a 200 response, which includes the current ETag in the header.

    If the proxy does not have a local cached file for the service response, it calls the service, and writes the WCF response to the local cache file, and to the body of a 200 response for the client. So the WCF service is only troubled if both client and proxy have stale (or no) caches.

    The only (vaguely) clever bit in the sample is using the ETag cache, so the proxy can serve cached requests without any communication with the underlying service, which it does completely generically, so the proxy has no notion of what it is serving or what the services it proxies are doing. The relative path from the URL is used as the lookup key, so there's no shared key-generation logic between .NET and Node, and when WCF stores a tag it also stores the "read" URL against the ETag so it can be used for a reverse lookup, e.g.:

        Key                                                  Value
        /WcfSampleService/PersonService.svc/rest/fetch/3     "28cd4796-76b8-451b-adfd-75cb50a50fa6"
        "28cd4796-76b8-451b-adfd-75cb50a50fa6"               /WcfSampleService/PersonService.svc/rest/fetch/3

    In Node we read the cache using the incoming URL path as the key, and we know that "28cd4796-76b8-451b-adfd-75cb50a50fa6" is the current ETag; we look for a local cached response in /caches/28cd4796-76b8-451b-adfd-75cb50a50fa6.body (and the corresponding .header file, which contains the original service response headers, so the proxy response is exactly the same as the underlying service's). When the data is updated, we need to invalidate the ETag cache - which is why we need the reverse lookup in the cache. In the WCF update service, we don't need to know the URL of the related read service - we fetch the entity from the database, do a reverse lookup on the tag cache using the old ETag to get the read URL, update the new ETag against the URL, store the new reverse lookup and delete the old one.

    Running Apache Bench against the two endpoints gives the headline performance comparison. Making 1000 requests with concurrency of 100, and not sending any ETag headers in the requests, with the Node proxy I get 102 requests handled per second and an average response time of 975 milliseconds, with 90% of responses served within 850 milliseconds; going direct to WCF with the same parameters, I get 53 requests handled per second and a mean response time of 1853 milliseconds, with 90% of responses served within 3260 milliseconds. Informally monitoring server usage during the tests, Node maxed at 20% CPU and 20Mb memory; IIS maxed at 60% CPU and 100Mb memory.

    Note that the sample WCF service does a database read and sleeps for 250 milliseconds to simulate a moderate processing load, so this is *not* a baseline Node-vs-WCF comparison, but for similar scenarios where the service call is expensive but applicable to numerous clients for a long timespan, the performance boost from the accelerator is considerable.

    * - actually, the accelerator will work nicely for any HTTP request where the URL (path + querystring) uniquely identifies a resource. In the sample, there is an assumption that the ETag is a GUID wrapped in double-quotes (e.g. "28cd4796-76b8-451b-adfd-75cb50a50fa6") - which is the default for WCF services. I use that assumption to name the cache files uniquely, but it is a trivial change to adapt to other ETag formats.
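
    To make the flow concrete, here is a minimal sketch of the proxy's decision logic - this is not the code from the NodeWcfAccelerator sample; it substitutes an in-memory object for the memcached tag store and assumes Node 0.8+ (for fs.existsSync and Buffer.concat), with the backend address invented:

        // proxy.js - minimal sketch, not the NodeWcfAccelerator source
        var http = require('http');
        var fs = require('fs');

        var tagCache = {};   // url -> current ETag (memcached in the real sample)
        var backend = { host: 'localhost', port: 8080 };   // assumed WCF address

        http.createServer(function (req, res) {
          var currentTag = tagCache[req.url];
          // 1. Client's ETag is current: short-circuit with 304.
          if (currentTag && req.headers['if-none-match'] === currentTag) {
            res.writeHead(304, { 'ETag': currentTag });
            return res.end();
          }
          // 2. Proxy has a cached body for the current ETag: serve it.
          var cacheFile = '/caches/' + (currentTag || '').replace(/"/g, '') + '.body';
          if (currentTag && fs.existsSync(cacheFile)) {
            res.writeHead(200, { 'ETag': currentTag });
            return fs.createReadStream(cacheFile).pipe(res);
          }
          // 3. Otherwise call the service and cache whatever comes back.
          http.get({ host: backend.host, port: backend.port, path: req.url }, function (svc) {
            var chunks = [];
            svc.on('data', function (c) { chunks.push(c); });
            svc.on('end', function () {
              var body = Buffer.concat(chunks);
              var tag = svc.headers.etag;
              if (tag) {
                tagCache[req.url] = tag;
                fs.writeFileSync('/caches/' + tag.replace(/"/g, '') + '.body', body);
              }
              res.writeHead(svc.statusCode, svc.headers);
              res.end(body);
            });
          });
        }).listen(8010);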

  • Problem with Entity Framework: "The underlying provider failed on Open"

    - by pokrate
    Hi, when I try to insert a record, I get this error: "The underlying provider failed on Open." This error occurs only with IIS and not with VWD 2008's web server. In the Event Viewer I get this application error:

        Failed to generate a user instance of SQL Server due to a failure in starting the process for the user instance. The connection will be closed. [CLIENT: ]

    The connection string:

        <add name="ASPNETDBEntities"
             connectionString="metadata=res://*/Models.FriendList.csdl|res://*/Models.FriendList.ssdl|res://*/Models.FriendList.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\ASPNETDB.MDF;Integrated Security=True;Connect Timeout=30;User Instance=True;MultipleActiveResultSets=True&quot;"
             providerName="System.Data.EntityClient" />

    I am using the aspnetdb.mdf file, and not any external database. I have searched enough for this, but no use. Everything works fine with the VWD web server.
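
    A hedged sketch of a commonly suggested workaround, not from the original post: SQL Server user instances are spawned under the worker-process identity and frequently fail under IIS, so one route is to attach ASPNETDB.MDF to the SQLEXPRESS instance once (e.g. in Management Studio) and drop the User Instance/AttachDbFilename pair from the connection string. The Initial Catalog name below is an assumption, and the IIS application pool identity still needs a login on the instance for Integrated Security to work:

        <add name="ASPNETDBEntities"
             connectionString="metadata=res://*/Models.FriendList.csdl|res://*/Models.FriendList.ssdl|res://*/Models.FriendList.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=.\SQLEXPRESS;Initial Catalog=aspnetdb;Integrated Security=True;Connect Timeout=30;MultipleActiveResultSets=True&quot;"
             providerName="System.Data.EntityClient" />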

  • Unity.ResolutionFailedException - Resolution of the dependency failed

    - by Anibas
    I have the following code:

        public static IEngine CreateEngine()
        {
            UnityContainer container = Unity.LoadUnityContainer(DefaultStrategiesContainerName);
            IEnumerable<IStrategy> strategies = container.ResolveAll<IStrategy>();
            ITraderProvider provider = container.Resolve<ITraderProvider>();
            return new Engine(provider, new List<IStrategy>(strategies));
        }

    and the config:

        <unity>
          <typeAliases>
            <typeAlias alias="singleton" type="Microsoft.Practices.Unity.ContainerControlledLifetimeManager, Microsoft.Practices.Unity" />
            <typeAlias alias="weakRef" type="Microsoft.Practices.Unity.ExternallyControlledLifetimeManager, Microsoft.Practices.Unity" />
            <typeAlias alias="Strategy" type="ADTrader.Core.Contracts.IStrategy, ADTrader.Core" />
            <typeAlias alias="Trader" type="ADTrader.Core.Contracts.ITraderProvider, ADTrader.Core" />
          </typeAliases>
          <containers>
            <container name="strategies">
              <types>
                <type type="Strategy" mapTo="ADTrader.Strategies.ThreeTurningStrategy, ADTrader.Strategies" name="1" />
                <type type="Trader" mapTo="ADTrader.MbTradingProvider.MBTradingProvider, ADTrader.MbTradingProvider" />
              </types>
            </container>
          </containers>
        </unity>

    I am getting the following exception:

        Microsoft.Practices.Unity.ResolutionFailedException: Resolution of the dependency failed, type = "ADTrader.Core.Contracts.ITraderProvider", name = "". Exception message is: The current build operation (build key Build Key[ADTrader.MbTradingProvider.MBTradingProvider, null]) failed: Attempted to read or write protected memory. This is often an indication that other memory is corrupt. (Strategy type BuildPlanStrategy, index 3)
        ---> Microsoft.Practices.ObjectBuilder2.BuildFailedException: The current build operation (build key Build Key[ADTrader.MbTradingProvider.MBTradingProvider, null]) failed: Attempted to read or write protected memory. This is often an indication that other memory is corrupt. (Strategy type BuildPlanStrategy, index 3)
        ---> System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
           at MBTCOMLib.MbtComMgrClass.EnableSplash(Boolean bEnable)
           at ADTrader.MbTradingProvider.MBTradingProvider..ctor()
           at BuildUp_ADTrader.MbTradingProvider.MBTradingProvider(IBuilderContext )
           at Microsoft.Practices.ObjectBuilder2.DynamicMethodBuildPlan.BuildUp(IBuilderContext context)
           at Microsoft.Practices.ObjectBuilder2.BuildPlanStrategy.PreBuildUp(IBuilderContext context)
           at Microsoft.Practices.ObjectBuilder2.StrategyChain.ExecuteBuildUp(IBuilderContext context)
           --- End of inner exception stack trace ---
           at Microsoft.Practices.ObjectBuilder2.StrategyChain.ExecuteBuildUp(IBuilderContext context)
           at Microsoft.Practices.ObjectBuilder2.Builder.BuildUp(IReadWriteLocator locator, ILifetimeContainer lifetime, IPolicyList policies, IStrategyChain strategies, Object buildKey, Object existing)
           at Microsoft.Practices.Unity.UnityContainer.DoBuildUp(Type t, Object existing, String name)
           --- End of inner exception stack trace ---
           at Microsoft.Practices.Unity.UnityContainer.DoBuildUp(Type t, Object existing, String name)
           at Microsoft.Practices.Unity.UnityContainer.Resolve(Type t, String name)
           at Microsoft.Practices.Unity.UnityContainerBase.Resolve<T>()
           at ADTrader.Engine.EngineFactory.CreateEngine()

    Any idea?

  • C# WCF - Failed to invoke the service.

    - by Keith Barrows
    I am getting the following error when trying to use the WCF Test Client to hit my new web service. What is weird is that every once in a while it will execute once, then start popping this error:

        Failed to invoke the service. Possible causes: The service is offline or inaccessible; the client-side configuration does not match the proxy; the existing proxy is invalid. Refer to the stack trace for more detail. You can try to recover by starting a new proxy, restoring to default configuration, or refreshing the service.

    My code (interface):

        [ServiceContract(Namespace = "http://rivworks.com/Services/2010/04/19")]
        public interface ISync
        {
            [OperationContract]
            bool Execute(long ClientID);
        }

    My code (class):

        public class Sync : ISync
        {
            #region ISync Members
            bool ISync.Execute(long ClientID)
            {
                return model.Product(ClientID);
            }
            #endregion
        }

    My config (EDIT - posted entire serviceModel section):

        <system.serviceModel>
          <diagnostics performanceCounters="Default">
            <messageLogging logMalformedMessages="true" logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="true" />
          </diagnostics>
          <serviceHostingEnvironment aspNetCompatibilityEnabled="false" />
          <behaviors>
            <endpointBehaviors>
              <behavior name="JsonpServiceBehavior">
                <webHttp />
              </behavior>
            </endpointBehaviors>
            <serviceBehaviors>
              <behavior name="SimpleServiceBehavior">
                <serviceMetadata httpGetEnabled="True" policyVersion="Policy15" />
              </behavior>
              <behavior name="RivWorks.Web.Service.ServiceBehavior">
                <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment -->
                <serviceMetadata httpGetEnabled="true" />
                <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information -->
                <serviceDebug includeExceptionDetailInFaults="true" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <services>
            <service name="RivWorks.Web.Service.NegotiateService" behaviorConfiguration="SimpleServiceBehavior">
              <endpoint address="" binding="customBinding" bindingConfiguration="jsonpBinding" behaviorConfiguration="JsonpServiceBehavior" contract="RivWorks.Web.Service.NegotiateService" />
              <!--<host>
                <baseAddresses>
                  <add baseAddress="http://kab.rivworks.com/services"/>
                </baseAddresses>
              </host>
              <endpoint address="" binding="wsHttpBinding" contract="RivWorks.Web.Service.NegotiateService" />-->
              <endpoint address="mex" binding="mexHttpBinding" contract="RivWorks.Web.Service.NegotiateService" />
            </service>
            <service name="RivWorks.Web.Service.Sync" behaviorConfiguration="RivWorks.Web.Service.ServiceBehavior">
              <endpoint address="" binding="wsHttpBinding" contract="RivWorks.Web.Service.ISync" />
              <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
            </service>
          </services>
          <extensions>
            <bindingElementExtensions>
              <add name="jsonpMessageEncoding" type="RivWorks.Web.Service.JSONPBindingExtension, RivWorks.Web.Service, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
            </bindingElementExtensions>
          </extensions>
          <bindings>
            <customBinding>
              <binding name="jsonpBinding">
                <jsonpMessageEncoding />
                <httpTransport manualAddressing="true" />
              </binding>
            </customBinding>
          </bindings>
        </system.serviceModel>

    2 questions:

    1. What am I missing that causes this error?
    2. How can I increase the timeout for the service?

    TIA!
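
    For the second question, a hedged sketch: WCF timeouts live on the binding, so an explicit wsHttpBinding configuration (the one-minute/ten-minute values below are arbitrary) can be referenced from the Sync endpoint via bindingConfiguration:

        <bindings>
          <wsHttpBinding>
            <binding name="longTimeout"
                     openTimeout="00:01:00"
                     closeTimeout="00:01:00"
                     sendTimeout="00:10:00"
                     receiveTimeout="00:10:00" />
          </wsHttpBinding>
        </bindings>

        <!-- the endpoint then becomes: -->
        <endpoint address="" binding="wsHttpBinding" bindingConfiguration="longTimeout"
                  contract="RivWorks.Web.Service.ISync" />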

  • Ant Copy Task: Failed to copy due to java.io.FileNotFoundException

    - by rfkrocktk
    I'm trying to compile a Flex application in Ant (no problems here, I can do it fine). When I try to publish the contents of the project to a Windows network drive (known as "Z:\" on my system), I get the following LAME exception thrown by Java/Ant:

        BUILD FAILED
        C:\workspace\bkeller\build.xml:42: Failed to copy C:\workspace\bkeller\web\assets\text\biography.html to Z:\web\bkeller\assets\text\biography.html due to java.io.FileNotFoundException Z:\web\bkeller\assets\text\biography.html (The system cannot find the file specified)

    Which kind of sucks. I can't find any way to get rid of this problem, and it's pretty crucial to my project that I get this working. I know for sure that I have read/write/execute permissions on the network drive; I can create/edit/delete files on the drive just fine through Windows Explorer. Drive Z is a network mount to VirtualBox, allowing me to get access to my host OS, Ubuntu. I've double-checked that it has write permissions. Any ideas?
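
    A hedged sketch of one workaround: mapped drive letters belong to an interactive logon session and are not always visible to the JVM running Ant, so addressing the VirtualBox share by its UNC path sometimes sidesteps the problem. The share name \\vboxsvr\web below is an assumption:

        <!-- build.xml fragment: copy via the UNC path instead of the Z: mapping -->
        <copy todir="\\vboxsvr\web\bkeller" overwrite="true">
            <fileset dir="${basedir}/web"/>
        </copy>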

  • Pylons FormEncode @validate decorator pass parameters into re-render action

    - by joelbw
    I am attempting to use the validate decorator in Pylons with FormEncode, and I have encountered an issue. I am attempting to validate a form on a controller action that requires parameters, and if the validation fails, the parameters aren't passed back in when the form is re-rendered. Here's an example:

        def question_set(self, id):
            c.question_set = meta.Session.query(QuestionSet).filter_by(id=id).first()
            c.question_subjects = meta.Session.query(QuestionSubject).order_by(QuestionSubject.name).all()
            return render('/derived/admin/question_set.mako')

    This is the controller action that contains my form. The form will add questions to an existing question set, which is identified by id. My add question controller action looks like this:

        @validate(schema=QuestionForm(), form='question_set', post_only=True)
        def add_question(self):
            stuff...

    Now, if the validation fails, FormEncode attempts to redisplay the question_set form, but it does not pass the id parameter back in, so the question set form will not render. Is it possible to pass the id back in with the @validate decorator, or do I need to use a different method to achieve what I am attempting to do?
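
    A hedged sketch of the manual alternative - skip @validate and run the schema inside the action, so the failure path can call question_set(id) with the parameter intact. Names follow the question, but the exact htmlfill arguments are an assumption about this schema:

        import formencode
        from formencode import htmlfill
        from pylons import request

        def add_question(self, id):
            schema = QuestionForm()
            try:
                form_result = schema.to_python(dict(request.POST))
            except formencode.Invalid, error:
                # Re-render the form action by hand, keeping id intact,
                # then fill the submitted values and error messages back in.
                html = self.question_set(id)
                return htmlfill.render(html,
                                       defaults=dict(request.POST),
                                       errors=error.unpack_errors())
            # ...use form_result to add the question...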

  • Disable redirect in fb:request-form Send/Cancel button

    - by Colossal Paul
    Hi, how do I disable the redirect in Facebook's invite form?

        <fb:serverfbml style="width: 600px; height: 650px;">
          <script type="text/fbml">
            <fb:request-form action="index.php" method="POST" invite="true" type="MyApp"
                content="Please have a look. <fb:req-choice url='http://apps.facebook.com/myapp/' label='View Now!' />">
              <div class="clearfix" style="padding-bottom: 10px;">
                <fb:multi-friend-selector condensed="true" style="width: 600px;" />
              </div>
              <fb:request-form-submit />
            </fb:request-form>
          </script>
        </fb:serverfbml>

    After selecting friends, you will see the final Send Invite dialog with your template. After you click Send or Cancel, how do I disable the redirect and just close the dialog? Thanks.

  • Java EE in NetBeans giving BUILD FAILED error upon deployment

    - by user312402
    When I try to run my Java EE program in NetBeans, consisting of servlets (Java pages), JSPs, beans (Java pages) and HTML pages, I get this error in the output:

        In-place deployment at C:\Users\Derek\Documents\NetBeansProjects\EJBProject\EJBProject-war\build\web
        Initializing...
        deploy?path=C:\Users\Derek\Documents\NetBeansProjects\EJBProject\EJBProject-war\build\web&name=EJBProject-war&force=true failed on Personal GlassFish v3 Domain
        C:\Users\Derek\Documents\NetBeansProjects\EJBProject\EJBProject-war\nbproject\build-impl.xml:611: The module has not been deployed.
        BUILD FAILED (total time: 1 second)

    And then at the command prompt, when I run asant run in the appropriate directory, I get:

        C:\Users\Derek\Documents\NetBeansProjects\EJBProject\nbproject\build-impl.xml:19: Class org.apache.tools.ant.taskdefs.condition.Not doesn't support the nested "antversion" element.

    Do you know why this would be? Why won't NetBeans deploy my application so I can run and test it?

  • Git fails when pushing commit to github

    - by Steve Melvin
    I cloned a git repo that I have hosted on github to my laptop. I was able to successfully push a couple of commits to github without problem. However, now I get the following error:

        Compressing objects: 100% (792/792), done.
        error: RPC failed; result=22, HTTP code = 411
        Writing objects: 100% (1148/1148), 18.79 MiB | 13.81 MiB/s, done.
        Total 1148 (delta 356), reused 944 (delta 214)

    From here it just hangs and I finally have to ^C back to the terminal.
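
    HTTP code 411 is "Length Required", which tends to appear when a push grows past the size git will buffer for a single smart-HTTP request. A commonly cited workaround (the 500 MB figure is arbitrary - anything larger than the push works) is:

        # raise git's HTTP post buffer, then push again
        git config http.postBuffer 524288000

    Switching the remote to the SSH URL (git@github.com:user/repo.git) avoids the HTTP code path entirely.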

  • pass value from embedded function into conditional of page the embedded function is included on

    - by Brad
    I have a page that includes/embeds a file that contains a number of functions. One of the functions has a variable I want to pass back to the page that the file is embedded on.

        <?php
        include('functions.php');
        userInGroup();
        if($user_in_group) {
            print 'user is in group';
        } else {
            print 'user is not in group';
        }
        ?>

    The function within functions.php:

        <?php
        function userInGroup() {
            foreach($group_access as $i => $group) {
                if($group_session == $group) {
                    $user_in_group = TRUE;
                    break;
                } else {
                    $user_in_group == FALSE;
                }
            }
        }
        ?>

    I am unsure how I can pass the value from the function userInGroup back to the page that runs the conditional if($user_in_group). Any help is appreciated.
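
    A hedged sketch of the usual fix: have the function return the flag, and take its inputs as parameters, since $group_access and $group_session are not in scope inside the function either (note, too, that the original's $user_in_group == FALSE is a comparison, not an assignment):

        <?php
        // functions.php - sketch: return the result instead of setting a local variable
        function userInGroup($group_access, $group_session) {
            foreach ($group_access as $group) {
                if ($group_session == $group) {
                    return TRUE;
                }
            }
            return FALSE;
        }

        // calling page
        include('functions.php');
        $user_in_group = userInGroup($group_access, $group_session);
        if ($user_in_group) {
            print 'user is in group';
        } else {
            print 'user is not in group';
        }
        ?>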

  • How to check for server side update in iPhone application Web service request

    - by Nirmal
    Hello all! I have a kind of listing application for iPhone, which always calls a web service on my PHP server, fetches the data and displays it on the iPhone screen. The thing to consider in this scenario is that my iPhone application requests the server and fetches the data every time. My requirement now is to replace that with the following set of actions:

    - Every time my application is launched on the iPhone, it should check for new data at the server.
    - Only if the server replies "true" will my iPhone application make a request to fetch the data.
    - In case of "false", my iPhone application will display the data already cached in local phone memory.

    To implement this scenario at the server side (which has PHP, MySQL), I am planning the following solution:

        Table: tblNewerData

            id    newDataFlag
            ==    ===========
            1     true

        Trigger: tgrUpdateNewData

    The trigger will update the tblNewerData.newDataFlag field on inserts into my main table. Every time my iPhone app will request the tblNewerData.newDataFlag field, and only if it finds true will it create a new request; if it finds false, the cached version of the data will be displayed. So, I want to know: is this the correct way to do it, or is there another, smarter option available? Thanks in advance.
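
    A hedged sketch of a slightly leaner variant of the same idea - expose a last-modified timestamp instead of a boolean flag, since a single flag cannot tell two different clients apart while a timestamp can. All table, column and credential names below are assumptions:

        <?php
        // check_update.php - returns the newest modification time of the main table
        $db = new mysqli('localhost', 'user', 'pass', 'mydb');
        $result = $db->query('SELECT MAX(updated_at) AS last_modified FROM tblListing');
        $row = $result->fetch_assoc();
        header('Content-Type: application/json');
        echo json_encode(array('last_modified' => $row['last_modified']));

    The app stores the timestamp alongside its cached data and only requests the full listing when the server's value is newer, which also removes the need for the trigger and the single-row flag table.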

  • Making a concurrent AJAX WCF Web Service request during an Async Postback

    - by nekno
    I want to provide status updates during a long-running task on an ASP.NET WebForms page with AJAX. Is there a way to get the ScriptManager to execute and process a script for a web service request concurrently with an async postback?

    I have a script on the page that makes a web service request. It runs on page load and periodically using setInterval(). It's running correctly before the async postback is initiated, but it stops running during the async postback, and doesn't run again until after the async postback completes.

    I have an UpdatePanel with a button to trigger an async postback, which executes the long-running task. I also have an instance of an AJAX WCF Web service that is working correctly to fetch data and present it on the page but, like I said, it doesn't fetch and present the data until after the async postback completes. During the async postback, the long-running task sends updates from the page to the web service.

    The problem is that I can debug and step through the web service and see that the status updates are correctly set, but the updates aren't retrieved by the client script until the async postback completes. It seems the ScriptManager is busy executing the async postback, so it doesn't run my other JavaScript via setInterval() until the postback completes.

    Is there a way to get the ScriptManager, or otherwise, to run the script to fetch data from the WCF web service during the async postback? I've tried various methods of using the PageRequestManager to run the script on the client-side BeginRequest event for the async postback, but it runs the script, then stops processing the code that should be running via setInterval() while the page request executes.

  • file_get_contents() returns "failed to open stream" when used with Facebook access_token flow

    - by TMC
    file_get_contents() is returning "failed to open stream" when I call it on a Facebook OAuth access_token URL:

        Warning: file_get_contents(https://graph.facebook.com/oauth/access_token?client_id=XXXXX&redirect_uri=http://mydomain.com/fb/callback3.php&client_secret=xxxx&code=YYYY) [function.file-get-contents]: failed to open stream: No error in E:\htdocs\fb\callback3.php on line 5

    (I have removed the client ID, client secret and the OAuth code.) If I try to manually hit the Facebook access_token URL that my code is attempting to fetch, I get an actual payload returned in the browser:

        access_token=XYZ&expires=6508

    (where XYZ is the access token). So for some reason, there is a problem with the access_token URL specifically when used with file_get_contents(). At first, I thought it was a security issue with my web hoster, but I have verified with phpinfo() that allow_url_fopen is indeed enabled. I have also tried this code and verified it works:

        $foo = file_get_contents('http://google.com');
        echo $foo;

    Does anyone have any ideas why file_get_contents() is failing with the Facebook access_token URL?
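
    A hedged guess at the cause plus a workaround: the working test URL is plain http while the failing one is https, so the https stream wrapper (the php_openssl extension) may not be enabled on this Windows host - phpinfo() should list "https" under Registered PHP Streams. A cURL fallback sidesteps the wrapper entirely:

        <?php
        // Sketch: fetch the access_token URL with cURL instead of file_get_contents().
        // $token_url stands in for the Graph oauth/access_token URL from the question.
        $ch = curl_init($token_url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);  // keep verification on in production
        $response = curl_exec($ch);
        if ($response === false) {
            die('cURL error: ' . curl_error($ch));
        }
        curl_close($ch);
        parse_str($response, $params);   // access_token=...&expires=...
        $access_token = $params['access_token'];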
