Search Results

Search found 4238 results on 170 pages for 'proxy pac'.

Page 69/170 | < Previous Page | 65 66 67 68 69 70 71 72 73 74 75 76  | Next Page >

  • Code generating SOAP Web Service Proxy objects yourself - C#/.NET 3.5/T4

    - by tyndall
    Is there a framework or code already available that will give me more control over the code that gets generated based on my web references? I'm working at a new company, and the web service proxies are all self-contained in their own assembly. I would really rather generate this whole project. Every time they change something on the services side (Java), the WSDL references have to be dropped and re-added. (I can't figure out what those guys are doing on their end that messes with the WSDL badly enough that this needs to be done so often.) There are 10 of these references. I'd rather codegen the whole thing at compile time, every time. What are my options?
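
    One low-tech way to regenerate a proxy on every build (a sketch, assuming the references are classic web references consumed via wsdl.exe; the SDK path, namespace, output file and WSDL URL below are placeholders) is a pre-build event per service:

        rem Pre-build event (sketch): regenerate one proxy per WSDL before each compile.
        rem The wsdl.exe path, namespace, output path and URL are placeholders.
        "C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin\wsdl.exe" /language:CS /namespace:MyCompany.Proxies /out:"$(ProjectDir)Generated\OrderService.cs" http://javahost/OrderService?wsdl

    A T4 template can wrap the same invocation if you want the generation to live inside the project rather than in build events.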

    Read the article

  • Hibernate Lazy Loading Proxy Incompatible w/ Other Frameworks

    - by bowsie
    I've come across several instances where frameworks that take POJOs to do some work crap out with proxied Hibernate beans. For example, if I XML-annotate a bean for framework X and pass it to framework X, it doesn't recognise the bean because it is passed the proxied object - which has no annotations for framework X. Is there a common solution to this? I'd prefer not to define the bean as eager-loaded, or turn off lazy loading anywhere in the application. Thoughts? Thanks.
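
    One common workaround (a sketch, and not necessarily ideal, since it forces initialization of the proxy) is to unwrap the HibernateProxy before handing the object to framework X:

        import org.hibernate.proxy.HibernateProxy;

        // Returns the underlying, annotated entity if the argument is a Hibernate proxy.
        // Note: this initializes the proxy (hits the database if it was still lazy).
        public static Object unproxy(Object entity) {
            if (entity instanceof HibernateProxy) {
                return ((HibernateProxy) entity)
                        .getHibernateLazyInitializer()
                        .getImplementation();
            }
            return entity;
        }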

    Read the article

  • Wicket @SpringBean doesn't create serializable proxy

    - by vinga
        @SpringBean
        PDLocalizerLogic loc;

    When using the above I receive java.io.NotSerializableException. This is because loc is not serializable, but that shouldn't be a problem, because injected Spring beans are serializable proxies. The page https://cwiki.apache.org/WICKET/spring.html#Spring-AnnotationbasedApproach says: "Using annotation-based approach, you should not worry about serialization/deserialization of the injected dependencies as this is handled automatically, the dependencies are represented by serializable proxies." What am I doing wrong?
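
    One thing worth checking (a hedged guess, since the snippet doesn't show where loc lives): the serializable proxies are only created when Wicket itself performs the injection, i.e. in a Component or after explicitly asking the injector. For a plain serializable class the Wicket 1.4-era idiom looks roughly like the sketch below (the class name is invented; Wicket 1.5+ replaced InjectorHolder with Injector.get().inject(this)):

        public class LocalizationClient implements java.io.Serializable {

            @SpringBean
            private PDLocalizerLogic loc;

            public LocalizationClient() {
                // Ask Wicket to populate @SpringBean fields with serializable proxies.
                org.apache.wicket.injection.web.InjectorHolder.getInjector().inject(this);
            }
        }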

    Read the article

  • 2 IPs are stored for a visitor: proxy?

    - by Tristan
    Hello, in my database I've decided to store the IP of visitors who answer polls. It's all working, but there are two cases where not just one IP is stored; instead, two identical IPs are stored for the same visitor. MySQL output (I replaced two octets with XX):

        10.188.XX.129, 10.188.XX.129

    Here's the script used to retrieve the visitor's IP:

        <?php
        function realip() {
            if (isset($_SERVER)) {
                if (isset($_SERVER["HTTP_X_FORWARDED_FOR"])) {
                    $realip = $_SERVER["HTTP_X_FORWARDED_FOR"];
                } elseif (isset($_SERVER["HTTP_CLIENT_IP"])) {
                    $realip = $_SERVER["HTTP_CLIENT_IP"];
                } else {
                    $realip = $_SERVER["REMOTE_ADDR"];
                }
            } else {
                if (getenv('HTTP_X_FORWARDED_FOR')) {
                    $realip = getenv('HTTP_X_FORWARDED_FOR');
                } elseif (getenv('HTTP_CLIENT_IP')) {
                    $realip = getenv('HTTP_CLIENT_IP');
                } else {
                    $realip = getenv('REMOTE_ADDR');
                }
            }
            return $realip;
        }
        ?>

    Thanks
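
    The duplicated value looks like a single X-Forwarded-For header that already contains a comma-separated list (the request passed through more than one proxy, or the same proxy twice). A hedged sketch of keeping only the first, left-most address from that header:

        <?php
        // Sketch: HTTP_X_FORWARDED_FOR may hold "client, proxy1, proxy2" - keep the first entry.
        function first_forwarded_ip($raw)
        {
            $parts = explode(',', $raw);
            return trim($parts[0]);
        }

        // e.g. first_forwarded_ip('10.188.1.129, 10.188.1.129') returns '10.188.1.129'
        ?>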

    Read the article

  • Programmatically implementing an interface that combines some instances of the same interface in various ways

    - by namin
    What is the best way to implement an interface that combines some instances of the same interface in various specified ways? I need to do this for multiple interfaces, and I want to minimize the boilerplate and still achieve good efficiency, because I need this for a critical production system. Here is a sketch of the problem. Abstractly, I have a generic combiner class which takes the instances and specifies the various combinators:

        class Combiner<I> {
            I[] instances;

            <T> T combineSomeWay(InstanceMethod<I,T> method) {
                // ... method.call(instances[i]) ... combined in some way ...
            }

            // more combinators
        }

    Now, let's say I want to implement the following interface among many others:

        interface Foo {
            String bar(int baz);
        }

    I want to end up with code like this:

        class FooCombiner implements Foo {
            Combiner<Foo> combiner;

            @Override
            public String bar(final int baz) {
                return combiner.combineSomeWay(new InstanceMethod<Foo, String>() {
                    @Override
                    public String call(Foo instance) {
                        return instance.bar(baz);
                    }
                });
            }
        }

    Now, this can quickly get long-winded if the interfaces have lots of methods. I know I could use a dynamic proxy from the Java reflection API to implement such interfaces, but method access via reflection is a hundred times slower. So what are the alternatives to boilerplate and reflection in this case?
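
    For comparison, the dynamic-proxy route mentioned at the end looks roughly like the sketch below, built on the Combiner and InstanceMethod types from the question (the handler class name and usage are invented). Whether the reflective dispatch is really too slow is worth measuring before rejecting it:

        import java.lang.reflect.InvocationHandler;
        import java.lang.reflect.Method;
        import java.lang.reflect.Proxy;

        // Sketch: one generic handler replaces every hand-written XxxCombiner class.
        class CombinerHandler<I> implements InvocationHandler {
            private final Combiner<I> combiner;

            CombinerHandler(Combiner<I> combiner) { this.combiner = combiner; }

            @Override
            public Object invoke(Object proxy, final Method method, final Object[] args) {
                return combiner.combineSomeWay(new InstanceMethod<I, Object>() {
                    @Override
                    public Object call(I instance) {
                        try {
                            return method.invoke(instance, args);
                        } catch (Exception e) {
                            throw new RuntimeException(e);
                        }
                    }
                });
            }
        }

        // Usage (hypothetical):
        // Foo combined = (Foo) Proxy.newProxyInstance(
        //         Foo.class.getClassLoader(), new Class<?>[] { Foo.class },
        //         new CombinerHandler<Foo>(fooCombiner));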

    Read the article

  • WCF configuration and ISA Proxies

    - by Morten Louw Nielsen
    Hi, I have a setup with a .NET WCF service hosted on IIS. The client apps connect to the service through a set of ISA proxies. I don't know how many, and I don't know about their configuration etc. In the client apps I open a client to the service and make several calls via the same client. It works great in my office, but when I deploy at the customer (using the ISAs), the connection breaks after some calls. In a successful case the client lives at most a few seconds; is that already too much? I think there might be several proxies, and maybe load balancing is in play. Pseudo code is something like this:

        WcfClient myClient = new WcfClient();
        foreach (WorkItem Item in WorkItemsStack)
            myClient.ProcessItem(Item);
        myClient.Close();

    I am wondering whether I have to do something like this instead:

        foreach (WorkItem Item in WorkItemsStack)
        {
            WcfClient myClient = new WcfClient();
            myClient.ProcessItem(Item);
            myClient.Close();
        }

    Anyone with experience in this field? Kind regards, Morten, Denmark
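
    If the per-item variant is the route taken, the usual caution (sketched below with the question's own WcfClient pseudo-type) is not to rely on Close() alone: on a faulted channel Close() itself throws, and Abort() is the fallback:

        // Sketch: create, use and tear down a channel per work item.
        foreach (WorkItem item in WorkItemsStack)
        {
            WcfClient client = new WcfClient();
            try
            {
                client.ProcessItem(item);
                client.Close();
            }
            catch (Exception)
            {
                client.Abort();   // never call Close() on a faulted channel
                throw;
            }
        }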

    Read the article

  • Browser security when calling HTTP assets via a SWF on an HTTPS site

    - by Mark Ursino
    We have a site that runs on HTTPS and needs to pull in various JS assets to run a video player on the page. We get a browser security warning on this page because the JS files we are externally calling are accessed via HTTP, not HTTPS. E.g. an HTTP reference on an HTTPS site:

        <script src="http://the-cdn.tld/player.js"></script>

    Simply accessing this one JS asset via HTTP rather than HTTPS causes the browser security warning, which we need to get rid of. The provider of the JS file does not support an HTTPS equivalent (like Google Analytics does). We would ideally love to just do the following, but the provider does not offer it:

        <!-- HTTPS reference on an HTTPS site -->
        <script src="https://the-cdn.tld/player.js"></script>

    One option was to just download a copy of the JS file and serve it from the HTTPS site, but we have concerns with this, as it is not recommended by the provider and will not pick up their updates. Assuming we cannot do that, another option might be to use a SWF file as a proxy. We were thinking one of our Flash guys could create a SWF that loads the HTTP-served JS file into the page. If the SWF makes the request, would that prevent the browser from showing the security warning or not? I assumed we would still see the warning, since the SWF is still making the request through the browser, but I wanted to see what the hive mind thinks.
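
    One small thing worth testing before building the SWF (an assumption, since nothing here is known about the CDN): some hosts do answer on both schemes even when the vendor doesn't document it, in which case a scheme-relative reference inherits https: automatically on the secure pages:

        <!-- Scheme-relative: resolves to https://the-cdn.tld/player.js on an HTTPS page -->
        <script src="//the-cdn.tld/player.js"></script>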

    Read the article

  • Multiple Ruby versions on one webserver?

    - by Legion
    The Ideal: Using rvm, it would be awesome to have multiple Rubies on one webserver and, through some sort of server configuration, be able to assign Ruby versions to different Rails/Sinatra/etc. apps on a per-project basis. I am aware, from rvm's documentation, that Passenger only works with one Ruby at a time. :(

    The Compromise: Failing that, it would be nice to at least concoct a way to assign projects to either a Ruby 1.8 or a Ruby 1.9 interpreter. I've read that using Nginx as a reverse proxy allows running Apache and Nginx on the same box. Would it then be possible to have Apache+Passenger using one Ruby, and Nginx+Passenger using a different one? Maybe use something other than Passenger with Nginx?

    Am I Barking Up the Wrong Tree? Am I missing a good solution to this issue? Am I walking into a nightmare configuration situation? Is what I want even viable, or is it necessary to run another box for a separate Ruby version?
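
    A rough sketch of the reverse-proxy layout described above, assuming nginx was built with the Passenger module for one of the Rubies and Apache+Passenger runs the other on a local port (hostnames, ports and paths are invented):

        # nginx (Ruby 1.9 Passenger) listening on :80; Apache (Ruby 1.8 Passenger) on :8080
        server {
            listen 80;
            server_name app19.example.com;
            root /var/www/app19/public;       # served by nginx's own Passenger
            passenger_enabled on;
        }

        server {
            listen 80;
            server_name app18.example.com;
            location / {
                proxy_set_header Host $host;
                proxy_pass http://127.0.0.1:8080;   # hand off to Apache+Passenger
            }
        }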

    Read the article

  • Streaming a local file from PHP while it's being written to by a cURL process

    - by Fahim
    I am creating a simple proxy server for my website. Why I am not using mod_proxy and mod_cache is a different discussion. Here's the code:

        shell_exec("nohup curl --create-dirs -o {$write_path} {$source_url} > /dev/null 2> /dev/null & echo $!");
        sleep(1);

        $read_speed = 65.5; # 65.5 kb/s download rate
        $handle = fopen($write_path, "rb");

        $content_type = select_meta_item($headers, 'Content-Type');
        $file_size = select_meta_item($headers, 'Content-Length');

        send_headers($content_type, $file_size);
        flush();

        while (!feof($handle)) {
            echo fread($handle, round($read_speed * 1024));
            flush();
            sleep(1);
        }

        fclose($handle);

    Streaming an MP3 doesn't work using this method: it plays in Chrome, but not in Firefox. Initially I'll be using this to stream MP3 files through Long Tail's JW Player. If it all works out, I'll also be using this to send ZIP files.
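
    One pitfall worth ruling out here (a guess, since the symptom differs between browsers): feof() becomes true as soon as the reader catches up with the still-downloading file, so the loop can end before curl has finished. A sketch that keys the loop on the advertised Content-Length instead, reusing the variables above:

        <?php
        // Sketch: keep sending until $file_size bytes have gone out, even if the
        // reader temporarily catches up with the curl writer.
        $sent = 0;
        $handle = fopen($write_path, 'rb');
        while ($sent < $file_size) {
            $chunk = fread($handle, (int) round($read_speed * 1024));
            if ($chunk === '' || $chunk === false) {
                sleep(1);                 // writer hasn't produced new data yet
                fseek($handle, $sent);    // clears the EOF flag so fread() sees appended data
                continue;
            }
            echo $chunk;
            flush();
            $sent += strlen($chunk);
            sleep(1);
        }
        fclose($handle);
        ?>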

    Read the article

  • Android: Unable to make HTTP request behind firewall

    - by Yang
    The standard getUrlContent works well when there is no firewall, but I get exceptions when I try to use it behind a firewall. I've tried to set an "http proxy server" in the AVD manager, but it didn't work. Any idea how to set it up correctly?

        protected static synchronized String getUrlContent(String url) throws ApiException {
            if (url.equals("try")) {
                return "thanks";
            }
            if (sUserAgent == null) {
                throw new ApiException("User-Agent string must be prepared");
            }

            // Create client and set our specific user-agent string
            HttpClient client = new DefaultHttpClient();
            HttpGet request = new HttpGet(url);
            request.setHeader("User-Agent", sUserAgent);

            try {
                HttpResponse response = client.execute(request);

                // Check if server response is valid
                StatusLine status = response.getStatusLine();
                if (status.getStatusCode() != HTTP_STATUS_OK) {
                    throw new ApiException("Invalid response from server: " + status.toString());
                }

                // Pull content stream from response
                HttpEntity entity = response.getEntity();
                InputStream inputStream = entity.getContent();

                ByteArrayOutputStream content = new ByteArrayOutputStream();

                // Read response into a buffered stream
                int readBytes = 0;
                while ((readBytes = inputStream.read(sBuffer)) != -1) {
                    content.write(sBuffer, 0, readBytes);
                }

                // Return result from buffered stream
                return new String(content.toByteArray());
            } catch (IOException e) {
                throw new ApiException("Problem communicating with API", e);
            }
        }
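
    If the proxy has to be set in code rather than in the emulator settings, DefaultHttpClient accepts one via its connection-route parameters; a hedged sketch (the proxy host and port are placeholders for whatever the firewall requires):

        import org.apache.http.HttpHost;
        import org.apache.http.conn.params.ConnRoutePNames;
        import org.apache.http.impl.client.DefaultHttpClient;

        // Sketch: route every request from this client through the corporate proxy.
        DefaultHttpClient client = new DefaultHttpClient();
        HttpHost proxy = new HttpHost("proxy.mycompany.example", 8080);   // placeholder
        client.getParams().setParameter(ConnRoutePNames.DEFAULT_PROXY, proxy);

    Alternatively, the emulator itself can be started with -http-proxy http://host:port instead of relying on the AVD manager setting.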

    Read the article

  • Outgoing UDP sniffer in python?

    - by twneale
    I want to figure out whether my computer is somehow causing a UDP flood that is originating from my network. So that's my underlying problem, and what follows is simply my non-network-person attempt to hypothesize a solution using Python. I'm extrapolating from recipe 13.1 ("Passing Messages with Socket Datagrams") from the Python Cookbook (also here). Would it be possible/sensible/not insane to try somehow writing an outgoing UDP proxy in Python, so that outgoing packets could be logged before being sent on their merry way? If so, how would one go about it? Based on my quick research, perhaps I could start a server process listening on suspect UDP ports and log anything that gets sent, then forward it on, such as:

        import socket

        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("", MYPORT))

        while True:
            packet = dict(zip(('data', 'addr'), s.recvfrom(1024)))
            log.info("Received {data} from {addr}.".format(**packet))

    But what about doing this for a large number of ports simultaneously? Impractical? Are there drawbacks or other reasons not to bother with this? Is there a better way to solve this problem (please be gentle)?
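
    For watching many suspect ports at once, select() over a list of bound sockets avoids one thread or process per port. A sketch of the listen-and-log half of the proxy idea (the port list is made up; forwarding onward would be a separate sendto):

        import select
        import socket

        SUSPECT_PORTS = [5060, 7777, 9987]   # placeholder list of UDP ports to watch

        sockets = []
        for port in SUSPECT_PORTS:
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.bind(("", port))
            sockets.append(s)

        while True:
            readable, _, _ = select.select(sockets, [], [])
            for s in readable:
                data, addr = s.recvfrom(4096)
                print("port %d: %d bytes from %s" % (s.getsockname()[1], len(data), addr))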

    Read the article

  • How to find the first declaring method for a reference method

    - by Oliver Gierke
    Suppose you have a generic interface and an implementation:

        public interface MyInterface<T> {
            void foo(T param);
        }

        public class MyImplementation<T> implements MyInterface<T> {
            public void foo(T param) { }
        }

    These two types are framework types. In the next step I want to allow users to extend that interface as well as redeclare foo(T param), maybe to equip it with further annotations:

        public interface MyExtendedInterface extends MyInterface<Bar> {
            @Override
            void foo(Bar param);

            // Further declared methods
        }

    I create an AOP proxy for the extended interface and intercept especially the calls to the further-declared methods. As foo(…) is now redeclared in MyExtendedInterface, I cannot execute it by simply invoking MethodInvocation.proceed(), as the instance of MyImplementation only implements MyInterface.foo(…) and not MyExtendedInterface.foo(…). So is there a way to get access to the method that declared a method initially? Regarding this example, is there a way to find out that foo(Bar param) was declared in MyInterface originally and get access to the corresponding Method instance? I already tried to scan base class methods and match by name and parameter types, but that doesn't work out, as generics pop in and MyImplementation.class.getMethod("foo", Bar.class) obviously throws a NoSuchMethodException. I already know that MyExtendedInterface types MyInterface to Bar. So if I could create some kind of "typed view" on MyImplementation, my matching algorithm could actually work.
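
    A hedged observation about the NoSuchMethodException: because the type variable T erases to Object, the Method that MyImplementation actually declares is foo(Object), so a lookup with the erased parameter type (or a scan that ignores parameter types) does find it. A sketch using the types from the question:

        import java.lang.reflect.Method;

        // Sketch: T erases to Object, so the implementation's method really is foo(Object);
        // looking it up with the erased signature succeeds where getMethod("foo", Bar.class) fails.
        static Method originalFoo() throws NoSuchMethodException {
            return MyImplementation.class.getMethod("foo", Object.class);
        }

        // Alternative when the erased types aren't known up front: match by name and arity.
        static Method findByNameAndArity(Class<?> type, String name, int argCount) {
            for (Method m : type.getMethods()) {
                if (m.getName().equals(name) && m.getParameterTypes().length == argCount) {
                    return m;
                }
            }
            return null;
        }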

    Read the article

  • Building "isolated" and "automatically updated" caches (java.util.List) in Java.

    - by Aidos
    Hi guys, I am trying to write a framework which contains a lot of short-lived caches created from a long-living cache. These short-lived caches need to be able to return their entire contents, which is a clone of the original long-living cache. Effectively what I am trying to build is a level of transaction isolation for the short-lived caches. The user should be able to modify the contents of the short-lived cache, but changes to the long-living cache should not be propagated through (there is also a case where the changes should be pushed through, depending on the cache type). I will do my best to try and explain:

        master-cache contains:    [A,B,C,D,E,F]
        temporary-cache created with state [A,B,C,D,E,F]

        1) temporary-cache adds item G:     [A,B,C,D,E,F,G]
        2) temporary-cache removes item B:  [A,C,D,E,F,G]
           master-cache still contains:     [A,B,C,D,E,F]
        3) master-cache adds items [X,Y,Z]: [A,B,C,D,E,F,X,Y,Z]
           temporary-cache still contains:  [A,C,D,E,F,G]

    Things get even harder when the values in the items can change and shouldn't always be updated (so I can't even share the underlying object instances; I need to use clones). I have implemented the simple approach of just creating a new instance of the List using the standard Collection constructor on ArrayList, but when you get out to about 200,000 items the system just runs out of memory. I know iterating 200,000 items is excessive, but I am trying to stress my code a bit. I had thought that I might be able to somehow "proxy" the list, so the temporary-cache uses the master-cache and only stores its own changes (effectively a Memento per change), but that quickly becomes a nightmare when you want to iterate the temporary-cache or retrieve an item at a specific index. Also, given that I want some modifications to the contents of the list to come through (depending on the type of the temporary-cache, i.e. whether it is "auto-update" or not), I get completely out of my depth. Any pointers to techniques, data structures or just general concepts to research will be greatly appreciated. Cheers, Aidos

    Read the article

  • Setting WCF service for multiple client calls

    - by user348255
    Hi all, I have made a WCF service which is defined like this:

        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                         ConcurrencyMode = ConcurrencyMode.Multiple)]

    Binding is done using netTcpBinding. We support 50+ clients that call the server from time to time. Each client opens a channel using a ChannelFactory once it is loaded and uses that channel for all calls (it creates the channel and proxy only once). We have built a small load tester that imitates the clients by calling the server from 50 different threads at once (using 50 different channels). When we run this tester, after the 10th client tries to connect, all other clients fail to connect. We have set throttling to 100. My questions are:

    1. Is it correct for each client to create a channel and use it for the client's whole lifetime? Or do I need a using statement for each call to the server (create and destroy a new channel per call)?
    2. Does the service have a limit on channel connections to it, other than throttling?

    Thanks a lot, Guy.
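
    The failure right after the 10th client matches the .NET 3.0/3.5 defaults: serviceThrottling's maxConcurrentSessions defaults to 10, and netTcpBinding's maxConnections and listenBacklog also default to 10, so a throttle of 100 applied only to maxConcurrentCalls would not help. A hedged config sketch raising the session-related limits (the behavior and binding names are placeholders):

        <behaviors>
          <serviceBehaviors>
            <behavior name="ThrottledBehavior">
              <serviceThrottling maxConcurrentCalls="100"
                                 maxConcurrentSessions="100"
                                 maxConcurrentInstances="100" />
            </behavior>
          </serviceBehaviors>
        </behaviors>
        <bindings>
          <netTcpBinding>
            <binding name="TcpManyClients" maxConnections="100" listenBacklog="100" />
          </netTcpBinding>
        </bindings>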

    Read the article

  • Properly Establishing an ApplicationEndpoint in UCMA 3.0

    - by user570720
    I've been struggling to get an application endpoint working in UCMA 3.0. I am trying to run an application on a server separate from the Lync server; it uses a registered ApplicationEndpoint to monitor presence and act as a bot which can send other users messages. I used to have my code working with a UserEndpoint (which was fine for monitoring presence), but it did not have the capability to send IMs to other Lync users. After searching the web, I'm finally at the point where I'm getting this error when running my code:

        System.ArgumentException was unhandled
          Message=An ApplicationEndpoint can be registered only if proxy and Multual Tls have been specified.
          Source=Microsoft.Rtc.Collaboration
          StackTrace:
            at Microsoft.Rtc.Collaboration.ApplicationEndpoint..ctor(CollaborationPlatform platform, ApplicationEndpointSettings settings)
            at Waldo.endpointHelper.CreateApplicationEndpoint(ApplicationEndpointSettings applicationEndpointSettings) in C:\Users\l1m5\Desktop\waldoproject\trunk\WaldoSoln\waldoGrabPresence\endpointHelper.cs:line 117
            at Waldo.endpointHelper.CreateEstablishedApplicationEndpoint(String endpointFriendlyName) in C:\Users\l1m5\Desktop\waldoproject\trunk\WaldoSoln\waldoGrabPresence\endpointHelper.cs:line 228
            at Waldo.waldoGrabPresence.Run() in C:\Users\l1m5\Desktop\waldoproject\trunk\WaldoSoln\waldoGrabPresence\waldoGrabPresence.cs:line 60
            at Waldo.waldoGrabPresence.Main(String[] args) in C:\Users\l1m5\Desktop\waldoproject\trunk\WaldoSoln\waldoGrabPresence\waldoGrabPresence.cs:line 42
            at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
            at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
            at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
            at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
            at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
            at System.Threading.ThreadHelper.ThreadStart()
          InnerException:

    After some searching, I followed the instructions here: http://blogs.claritycon.com/blogs/michael_greenlee/archive/2009/03/21/installing-a-certificate-for-ucma-v2-0-applications.aspx to import a certificate onto the server that I'm trying to run the application on, but to no avail. So at this point, I think there must be something wrong with how I'm setting up the ApplicationEndpointSettings, CollaborationPlatform or ApplicationEndpoint objects. Here's how I'm doing it:

        ApplicationEndpointSettings applicationEndpointSettings =
            new ApplicationEndpointSettings(_ownerURIPrompt, _serverFQDNPrompt, _trustedPortPrompt);
        ServerPlatformSettings settings =
            new ServerPlatformSettings(null, _serverFQDNPrompt, _trustedPortPrompt, _trustedApplicationGRUU);
        _collabPlatform = new CollaborationPlatform(settings);
        _applicationEndpoint = new ApplicationEndpoint(_collabPlatform, applicationEndpointSettings);

    Does anyone see any problems with what I'm doing? Or, better yet, does anyone know of a blog that walks through establishing an application endpoint in the situation I'm in? I work really well with tutorials or samples, but have not found one that seems to accomplish what I'm trying to do. Thanks for the help!

    Read the article

  • squid and ftp connections

    - by Kstro21
    I have a squid proxy server for both HTTP and FTP connections. I'm trying to use FileZilla to open an FTP site, but it always fails with an error saying:

        Status: Connection with proxy established, performing handshake...
        Response: Proxy reply: HTTP/1.0 403 Forbidden
        Error: Proxy handshake failed: ECONNRESET - Connection reset by peer
        Error: Connection timed out
        Error: Failed to retrieve directory listing

    I sniffed the traffic: FileZilla is trying to connect to a different port, and the proxy denies it. This is a portion of the sniff result:

        CONNECT 201.150.36.227:61179 HTTP/1.1
        Host: 201.150.36.227:61179
        User-Agent: FileZilla

    Every time it is a different port, so there is no way I can allow it in squid. I also set FileZilla to use an active connection - same result; passive connection - same result again. So I'm out of bullets and I need your help; maybe a setting in FileZilla or in squid can do the job. This is the full FileZilla log:

        Status: Connecting to uhma.mx through proxy
        Status: Connecting to 172.19.216.13:3128...
        Status: Connection with proxy established, performing handshake...
        Response: Proxy reply: HTTP/1.0 200 Connection established
        Status: Connection established, waiting for welcome message...
        Response: 220 ProFTPD 1.3.3a Server (a3 FTP CUATRO) [201.150.36.227]
        Command: USER uhmamx
        Response: 331 Password required for uhmamx
        Command: PASS *******
        Response: 230 User uhmamx logged in
        Status: Connected
        Status: Retrieving directory listing...
        Command: PWD
        Response: 257 "/" is the current directory
        Command: TYPE I
        Response: 200 Type set to I
        Command: PASV
        Response: 227 Entering Passive Mode (201,150,36,227,238,251).
        Command: MLSD
        Status: Connecting to 172.19.216.13:3128...
        Status: Connection with proxy established, performing handshake...
        Response: Proxy reply: HTTP/1.0 403 Forbidden
        Error: Proxy handshake failed: ECONNRESET - Connection reset by peer
        Error: Connection timed out
        Error: Failed to retrieve directory listing
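
    The 403 on CONNECT to a random high port is consistent with squid's default policy (http_access deny CONNECT !SSL_ports), which only lets CONNECT reach port 443; FTP-over-HTTP-proxy clients like FileZilla need CONNECT to the passive data ports as well. A hedged squid.conf sketch that widens the allowed CONNECT ports (the passive range below is a placeholder; opening all high ports has security implications, so narrow it to the FTP server's actual passive range if known):

        # squid.conf (sketch): let CONNECT reach the FTP control port and the
        # server's passive-mode data ports in addition to 443.
        acl SSL_ports port 443
        acl SSL_ports port 21
        acl SSL_ports port 49152-65535     # placeholder passive range; narrow if possible
        http_access deny CONNECT !SSL_ports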

    Read the article

  • nginx: Rewrite PHP does not work

    - by Ton Hoekstra
    I have a suffix proxy installed and I'm using the following rewrite, with wildcard subdomain DNS turned on:

        location / {
            if (!-e $request_filename) {
                rewrite ^(.*)$ /index.php last;
                break;
            }
        }

    My suffix proxy has the following URL format: (subdomain and/or domain + domain extension to proxy).proxy.org/(request-uri to proxy). I have this PHP code in my index.php:

        if (preg_match('#([\w\.-]+)\.example\.com(.+)#', $_SERVER['SERVER_NAME'].$_SERVER['REQUEST_URI'], $match)) {
            header('Location: http://example.com/browse.php?u=http://'.$match[1].$match[2]);
            die;
        }

    But when a page with a .php extension is requested, I get a 404 Not Found error:

        http://www.php.net.proxy.org/docs.php - HTTP/1.1 404 Not Found
        http://www.utexas.edu.proxy.org/learn/php/ex3.php - HTTP/1.1 404 Not Found

    Everything else is working (index.php included):

        http://php.net.proxy.org/index.php - HTTP/1.1 200 OK
        http://www.php-scripts.com.proxy.org/php_diary/example2.php3 - HTTP/1.1 200 OK
        http://www.utexas.edu.proxy.org/learn/php/ex3.phps - HTTP/1.1 200 OK
        http://www.w3schools.com.proxy.org/html/default.asp - HTTP/1.1 200 OK

    Does anybody have an answer? I don't know why it's not working; on Apache it works fine. Thanks in advance.

    Update: I've removed the location block and now it's working perfectly:

        if (!-e $request_filename) {
            rewrite ^(.*)$ /index.php last;
            break;
        }
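
    For reference, the fall-through that finally worked is what newer nginx versions express with try_files; a sketch, assuming the front controller is /index.php and no other location block (such as a fastcgi handler for .php) intercepts the request first:

        location / {
            # Serve the file if it exists, otherwise hand everything to the proxy script.
            try_files $uri /index.php;
        }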

    Read the article

  • WinMo > ASMX WebException - how to get details?

    - by eidylon
    Okay, we've got an application which consists of a website hosting several ASMX web services, and a handheld application running on WinMo 6.1 which calls the web services. We developed in the office and everything works perfectly. Now we've gone to install it at the client's site; we got all the servers set up and the handhelds installed, but the handhelds are no longer able to connect to the web service. I added extra code in my error handler to specifically trap WebException exceptions and handle them differently in the logging, to put out extra information (.Status and .Response). I am getting the status, which returns 7, i.e. ProtocolError. However, when I try to read out the response stream (using WebException.Response.GetResponseStream), it returns a stream with CanRead set to False, so I am unable to get any further details of what is going wrong. So I guess there are two things I am asking for help with... a) Any help with trying to get more information out of the WebException? b) What could be causing a ProtocolError exception? Things get extra complicated by the fact that the client has a full-blown, login-enabled proxy set up on-site. This was stopping all access to the website initially, even from a browser, so we entered the login details in the network connection for HTTP on the WinMo device. Now it can get to websites fine; in fact, I can even pull up the web service and call the methods from the browser (Pocket IE). So I know the device can reach the web services over HTTP. But when the .NET app calls them, it throws ProtocolError (7). Here is my code which logs the exception and fails to read the response from the WebException:

        Public Sub LogEx(ByVal ex As Exception)
            Try
                Dim fn As String = Path.Combine(ini.CorePath, "error.log")
                Dim t = File.AppendText(fn)
                t.AutoFlush = True
                t.WriteLine(<s>===== <%= Format(GetDateTime(), "MM/dd/yyyy HH:mm:ss") %> =====<%= vbCrLf %><%= ex.Message %></s>.Value)
                t.WriteLine()
                t.WriteLine(ex.ToString)
                t.WriteLine()
                If TypeOf ex Is WebException Then
                    With CType(ex, WebException)
                        t.WriteLine("STATUS: " & .Status.ToString & " (" & Val(.Status) & ")")
                        t.WriteLine("RESPONSE:" & vbCrLf & StreamToString(.Response.GetResponseStream()))
                    End With
                End If
                t.WriteLine("=".Repeat(50))
                t.WriteLine()
                t.Close()
            Catch ix As Exception : Alert(ix) : End Try
        End Sub

        Private Function StreamToString(ByVal s As IO.Stream) As String
            If s Is Nothing Then Return "No response found."
            ' THIS IS THE CASE BEING EXECUTED
            If Not s.CanRead Then Return "Unreadable response found."
            Dim rv As String = String.Empty, bytes As Long, buffer(4096) As Byte
            Using mem As New MemoryStream()
                Do While True
                    bytes = s.Read(buffer, 0, buffer.Length)
                    mem.Write(buffer, 0, bytes)
                    If bytes = 0 Then Exit Do
                Loop
                mem.Position = 0
                ReDim buffer(mem.Length)
                mem.Read(buffer, 0, mem.Length)
                mem.Seek(0, SeekOrigin.Begin)
                rv = New StreamReader(mem).ReadToEnd()
                mem.Close()
            End Using
            Return rv.NullOf("Empty response found.")
        End Function

    Thanks in advance!
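
    Since the status is ProtocolError, the server (or the proxy) did send an HTTP error response, and the WebException's Response can usually be cast to HttpWebResponse to log at least the status line even when the body stream is unreadable. A hedged sketch in the same style as the logger above (requires Imports System.Net; a result such as 407 Proxy Authentication Required would point straight at the login-enabled proxy):

        If TypeOf ex Is WebException Then
            Dim webEx As WebException = CType(ex, WebException)
            Dim httpResp As HttpWebResponse = TryCast(webEx.Response, HttpWebResponse)
            If httpResp IsNot Nothing Then
                ' e.g. "HTTP 407 Proxy Authentication Required"
                t.WriteLine("HTTP STATUS: " & CInt(httpResp.StatusCode) & " " & httpResp.StatusDescription)
            End If
        End If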

    Read the article

  • Opening the Internet Settings Dialog and using Windows Default Network Settings via Code

    - by Rick Strahl
    Ran into a question from a client the other day who asked how to deal with Internet connection settings for running HTTP requests. In this case it is an old FoxPro app, and it's using WinInet to handle the actual HTTP connection. Another client asked a similar question about using the IE Web Browser control and configuring connection properties. Regardless of the platform or tools used to make HTTP connections, you can probably configure custom connection and proxy settings manually in your application. However, this is a repetitive process for each application and requires you to track system information in your application, which is undesirable. Often it's much easier to rely on the system-wide proxy settings that Windows provides via the Internet Settings dialog. The dialog is a Control Panel applet (inetcpl.cpl) and is the same dialog that you see when you pop up Internet Explorer's Options dialog. It controls the Windows connection properties that determine how the Windows HTTP stack connects to the Internet and how proxies are used if configured. Depending on how the HTTP client is configured, it can typically inherit and use these global settings.

    Loading the Settings Dialog Programmatically

    The settings dialog is a Control Panel applet named inetcpl.cpl, and you can use any shell execution mechanism (Run dialog, ShellExecute API, Process.Start() in .NET, etc.) to invoke it. Changes made there are immediately reflected in any applications that use the default connection settings. In .NET you can simply do this to bring up the Internet Settings dialog with the Connection tab enabled:

        Process.Start("inetcpl.cpl", ",4");

    In FoxPro you can simply use the RUN command to execute inetcpl.cpl:

        lcCmd = "inetcpl.cpl ,4"
        RUN &lcCmd

    Using the Default Connection/Proxy Settings

    When using WinInet you specify the HTTP connect type in the call to InternetOpen() like this (FoxPro code here):

        hInetConnection=;
          InternetOpen(THIS.cUserAgent,0,;
          THIS.chttpproxyname,THIS.chttpproxybypass,0)

    The second parameter of 0 specifies that the default system proxy settings should be used, i.e. the settings from the Internet Settings Connections tab. Other connection options for HTTP connections include 1 - direct (no proxies, ignore system settings) and 3 - explicit proxy specification. In most situations a connection mode setting of 0 should work. In .NET, HTTP connections are direct connections by default, so you need to explicitly specify a default proxy or proxy configuration to use. The easiest way to do this is at the application level in the config file:

        <configuration>
          <system.net>
            <defaultProxy>
              <proxy bypassonlocal="False" autoDetect="True" usesystemdefault="True" />
            </defaultProxy>
          </system.net>
        </configuration>

    You can do the same sort of thing in code, specifying the proxy explicitly via System.Net.WebProxy.GetDefaultProxy(). So when making HTTP calls to web services or using the HttpWebRequest class, you can set the proxy with:

        StoreService.Proxy = WebProxy.GetDefaultProxy();

    All of this is pretty easy to deal with and, in my opinion, is a far better choice for managing connection settings than having to track this stuff in your own application. Plus, if you use the default settings, it's highly likely that the connection settings are already properly configured, making further configuration rare.

    © Rick Strahl, West Wind Technologies, 2005-2011
    Posted in Windows  HTTP  .NET  FoxPro

    Read the article

  • In developing a SOAP client proxy, which return structure is easier to use and more sensible?

    - by cori
    I'm writing (in PHP) a client/proxy for a SOAP web service. The return types are consistently wrapped in response objects that contain the return values. In many cases this makes a lot of sense - for instance, when multiple values are being returned:

        GetDetailsResponse Object
        (
            Results Object
                (
                    [TotalResults] => 10
                    [NextPage] => 2
                )
            [Details] => Array
                (
                    [0] => Detail Object
                        (
                            [Id] => 1
                        )
                )
        )

    But some of the methods return a single scalar value, or a single object or array, wrapped in a response object:

        GetThingummyIdResponse Object
        (
            [ThingummyId] => 42
        )

    In some cases these objects might be pretty deep, so getting at properties within them requires drilling down several layers:

        $response->Details->Detail[0]->Contents->Item[5]->Id

    And if I unwrap them before passing them back, I can strip a layer out of consumers' code. I know I'm probably being a little bit of an Architecture Astronaut here, but the latter style really bugs me, so I've been working through my code to have my proxy methods just return the scalar value to the client code where there's no absolute need for a wrapper object. My question is, am I actually making things more difficult for the consumers of my code? Would I be better off just leaving the return values wrapped in response objects so that everything is consistent, or is removing unnecessary layers of indirection/abstraction worthwhile?

    Read the article

  • Best way to re-use the same django models and admin for multiple apps

    - by kepioo
    Given a reference app (called guide), how can I create additional apps that reuse the same models/admin/views as guide? The motivation behind this is to be able to control each sub-app individually.

        guide
        guideApp1 - exact same models/admin/views as guide
        guideApp2 - exact same models/admin/views as guide

    In the admin site, I should have:

        1 section for guideApp1 with all the tables defined in guide, applying to guideApp1
        1 section for guideApp2 with all the tables defined in guide, applying to guideApp2
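
    A common way to get identical but independently managed tables per app (a sketch; the model and field names are invented) is to declare the shared models as abstract in guide, subclass them in each sub-app, and register each concrete model with the admin separately:

        # guide/models.py
        from django.db import models

        class GuideEntryBase(models.Model):
            title = models.CharField(max_length=200)   # invented example field

            class Meta:
                abstract = True        # no table for the base itself

        # guideApp1/models.py (guideApp2 repeats the same pattern)
        from guide.models import GuideEntryBase

        class GuideEntry(GuideEntryBase):
            pass                       # gets its own guideApp1_guideentry table

        # guideApp1/admin.py
        from django.contrib import admin
        from guideApp1.models import GuideEntry

        admin.site.register(GuideEntry)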

    Read the article

  • facebook connect: thumbnail images broken up in FB.Connect.streamPublish pop-up prompt, and on wall

    - by Hoff
    Hi there! I'm using Facebook Connect so that users can publish the comments they leave on my site to their Facebook wall as well. It works as intended, except that in the confirmation pop-up the thumbnail image I provide is broken. Looking at the source, I can see that Facebook prepended my image URL, changing it from:

        http://www.mysite.com/path/to/my/image.jpg

    to:

        http://platform.ak.fbcdn.net/www/app_full_proxy.php?app=303377111175&v=1&size=z&cksum=41a391c9f3a6f3dde2ede9892763c943&src=http%3A%2F%2Fwww.mysite.com%2Fpath%2Fto%2Fmy%2Fimage.jpg

    The image on the Facebook user's wall has the same prepended URL and is also broken for a couple of minutes, after which it shows up correctly. But obviously, having a broken image in the confirmation window, and on your wall for a couple of minutes, is not a good experience... Has anybody experienced the same / knows how to work around this issue? Thanks a lot in advance! Martin

    PS: here's the relevant part of the JS call, if it's of any use:

        attachment = {
            'media': [{
                'type': 'image',
                'src':  'http://www.mysite.com/path/to/my/image.jpg',
                'href': 'http://www.mysite.com/the/current/page'
            }]
        };

        FB.Connect.streamPublish(user_message, attachment, action_links, target_id,
                                 user_message_prompt, fbcallback, false, actor_id);

    Read the article

  • Nginx A/B testing

    - by Alex
    Hey, I'm trying to do A/B testing and I'm using Nginx for this purpose. My Nginx config file looks like this:

        events {
            worker_connections 1024;
        }

        error_log /usr/local/experiments/apps/reddit_test/error.log notice;

        http {
            rewrite_log on;

            server {
                listen 8081;
                access_log /usr/local/experiments/apps/reddit_test/access.log combined;

                location / {
                    if ($remote_addr ~ "[02468]$") {
                        rewrite ^(.+)$ /experiment$1 last;
                    }
                    rewrite ^(.+)$ /main$1 last;
                }

                location /main {
                    internal;
                    proxy_pass http://www.reddit.com/r/lisp;
                }

                location /experiment {
                    internal;
                    proxy_pass http://www.reddit.com/r/haskell;
                }
            }
        }

    This is kind of working, but CSS and JS files won't load. Can anyone tell me what's wrong with this config file, or what would be the right way to do it? Thanks, Alex

    Read the article

  • IIRF not working with ASP.NET PostBacks?

    - by MNT
    Hi, I have the following scenario:

        Web server A: public on the internet, IIRF (current version) installed
        Web server B: on the intranet, visible to A; my ASP.NET web app is installed on it; its name is pgdbtest3

    I configure IIRF so that any request targeting the directory /MMS/ on server A is redirected to the corresponding URL http://pgdbtest3/MMS/. The ini file looks like this:

        StatusUrl /iirfStatus RemoteOk
        RedirectRule ^/MMS$ /MMS/ [I]
        ProxyPass ^/MMS/(.*)$ http://pgdbtest3/MMS/$1 [I]

    It is working fine, except that any postback causes an error (404 is returned). I have tried many solutions, including removing the action attribute from the form, but with no luck. Please help!

    Read the article
