Search Results

Search found 24387 results on 976 pages for 'ssh client'.


  • Tunneling HTTPS traffic via a PUTTY/SSL tunnel with SOCKS

    - by ripper234
    I have configured a SOCKS ssh tunnel to a remote proxy, and set my Firefox to use localhost:<port> as a SOCKS proxy. My intention is to tunnel outgoing HTTP/S connections from my machine via a specific 3rd-party server I own (on AWS). In my testing, HTTP URLs are forwarded properly (e.g. when I access http://jsonip.com/ from my computer I do get the server's IP). However, whenever I try to reach an HTTPS address, I get this error: "The proxy server is refusing connections". How do I debug/fix it? My PuTTY tunnel config is simply some random source port number with "Dynamic" checked. P.S. I'm aware I might need to manually accept SSL certificates. The reason I'm doing this is to resolve problems using Gmail as an outbound SMTP service.
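
    A quick way to take PuTTY out of the equation is to reproduce the same dynamic forward with plain OpenSSH and test both protocols through it with curl; this is only a sketch, and the hostname and local port below are placeholders, not values from the question.

        # Hypothetical host/port, for illustration only: open a dynamic (SOCKS)
        # tunnel equivalent to PuTTY's "Dynamic" forward.
        ssh -N -D 1080 ubuntu@my-aws-host.example.com

        # Plain HTTP through the tunnel:
        curl --socks5-hostname localhost:1080 http://jsonip.com/

        # HTTPS through the same tunnel; -v shows whether the SOCKS CONNECT is
        # refused or the TLS handshake itself fails.
        curl -v --socks5-hostname localhost:1080 https://www.google.com/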

    Read the article

  • How to force rsync to use destination directory as root

    - by thepurplepixel
    I have a simple script to one-way-sync files/folders within a directory:

        #!/bin/bash
        HOST='<hostname>'
        USER='<username>'
        DIR='/downloads/'
        SOURCE='/srv/torrents'
        rsync -e "ssh -l $USER" --remove-source-files -h -4 -r --stats --progress -i $SOURCE $HOST:$DIR
        find $SOURCE -type d -empty -prune -exec rmdir -p \{\} \;

    However, when this rsync operation runs, it creates a folder, torrents, in /downloads on the destination machine. How can I force rsync to put all folders & files from /srv/torrents (remote) into /downloads/ (local) instead of creating /downloads/torrents as a separate directory?
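
    A minimal sketch of the usual fix, reusing the same variables as the script above: a trailing slash on the rsync source means "the contents of this directory", so no torrents directory gets created at the destination.

        # Trailing slash on $SOURCE: copy its contents, not the directory itself.
        SOURCE='/srv/torrents/'
        rsync -e "ssh -l $USER" --remove-source-files -h -4 -r --stats --progress -i "$SOURCE" "$HOST:$DIR"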

    Read the article

  • Download dynamic file with GWT

    - by Maksim
    I have a GWT page where the user enters data (start date, end date, etc.), then this data goes to the server via an RPC call. On the server I want to generate an Excel report with POI and let the user save that file on their local machine. This is my test code to stream the file back to the client, but for some reason it does not seem to know how to stream the file to the client when I'm using RPC:

        public class ReportsServiceImpl extends RemoteServiceServlet implements ReportsService {
            public String myMethod(String s) {
                File f = new File("/excelTestFile.xls");
                String filename = f.getName();
                int length = 0;
                try {
                    HttpServletResponse resp = getThreadLocalResponse();
                    ServletOutputStream op = resp.getOutputStream();
                    ServletContext context = getServletConfig().getServletContext();
                    resp.setContentType("application/octet-stream");
                    resp.setContentLength((int) f.length());
                    resp.setHeader("Content-Disposition", "attachment; filename*=\"utf-8''" + filename + "");
                    byte[] bbuf = new byte[1024];
                    DataInputStream in = new DataInputStream(new FileInputStream(f));
                    while ((in != null) && ((length = in.read(bbuf)) != -1)) {
                        op.write(bbuf, 0, length);
                    }
                    in.close();
                    op.flush();
                    op.close();
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
                return "Server says: " + filename;
            }
        }

    I've read somewhere on the internet that you can't stream a file over RPC and that I have to use a servlet for that. Is there an example of how to use a servlet and how to call that servlet from ReportsServiceImpl? Do I really need to make a servlet, or is it possible to stream it back with my RPC?

    Read the article

  • Have Ubuntu 9.10 desktop, just got Macbook Pro. Share over Samba, NFS, other?

    - by miamisoftware
    Hi everyone. As the title says, I have and love my Ubuntu 9.10 desktop (I use it for programming). I just got a MacBook Pro (Snow Leopard), and for stuff like Documents, etc., I'm trying to figure out the easiest way to share my Ubuntu desktop with the MacBook Pro. Should I use Samba or NFS, and is it easy to configure one (or something else) for in-network access only (192.168.1.x)? It took me about 2 days to find/set up MacFUSE and Macfusion for sshfs to the Fedora web server, and I'm hoping there's something much easier for this in-network access. But if it's required or suggested that I go the ssh route, I can do that. Are there any security problems with either Samba or NFS? I don't know much about AFP (the Apple protocol) so I've not brought it up. Thanks in advance.
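
    For the NFS option, a minimal sketch restricted to the local subnet might look like the following; the share path, subnet and server address are placeholders, and Finder's "Connect to Server" with an nfs:// URL works in place of the manual mount.

        # On the Ubuntu 9.10 box: export one directory to the LAN only.
        sudo apt-get install nfs-kernel-server
        echo '/home/me/shared 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
        sudo exportfs -ra

        # On the Mac: mount it somewhere convenient. -o resvport is often needed
        # because the Linux NFS server expects connections from privileged ports.
        mkdir -p ~/ubuntu-shared
        sudo mount -t nfs -o resvport 192.168.1.10:/home/me/shared ~/ubuntu-shared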

    Read the article

  • Successful su for user by root in /var/log/auth.log

    - by grs
    I have these sorts of entries in my /var/log/auth.log:

        Apr 3 12:32:23 machine_name su[1521]: Successful su for user1 by root
        Apr 3 12:32:23 machine_name su[1654]: Successful su for user2 by root
        Apr 3 12:32:24 machine_name su[1772]: Successful su for user3 by root

    Situation:

    - All users are real accounts in /etc/passwd.
    - None of the users has its own crontab.
    - All of those users logged in to the machine some time ago via SSH or NoMachine - the time varies from a few minutes to a few hours.
    - No cron jobs are scheduled to run at that time, and anacron is removed.

    I can see similar entries for other days and other times. The common part is that the users are logged in when it appears. It does not appear during login, but some time afterwards. This machine has a similar setup to a few others, but it is the only one where I see these entries. What causes them? Thanks
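
    A sketch of how one might track down which root-owned process is invoking su; none of this comes from the question, and auditd in particular is an assumption that may need installing first.

        # Look for at jobs and cron/at fragments that run su on the users' behalf:
        atq
        grep -r "su " /etc/cron* /etc/at* 2>/dev/null

        # Record every execution of su with auditd, then inspect after the next occurrence:
        sudo auditctl -w /bin/su -p x -k su-watch
        sudo ausearch -k su-watch -i | tail -40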

    Read the article

  • Address family not supported by protocol exception

    - by srg
    I'm trying to send a couple of values from an Android application to a web service which I've set up. I'm using HTTP POST to send them, but when I run the application I get the error "request time failed: java.net.SocketException: Address family not supported by protocol". I get this while debugging with both the emulator and a device connected by wifi. I've already added the internet permission using:

        <uses-permission android:name="android.permission.INTERNET" />

    This is the code I'm using to send the values:

        void insertData(String name, String number) throws Exception {
            String url = "http://192.168.0.12:8000/testapp/default/call/run/insertdbdata/";
            HttpClient client = new DefaultHttpClient();
            HttpPost post = new HttpPost(url);
            try {
                List<NameValuePair> params = new ArrayList<NameValuePair>(2);
                params.add(new BasicNameValuePair("a", name));
                params.add(new BasicNameValuePair("b", number));
                post.setEntity(new UrlEncodedFormEntity(params));
                HttpResponse response = client.execute(post);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

    Also, I know that my web service works fine, because when I send the values from an HTML page it works:

        <form name="form1" action="http://192.168.0.12:8000/testapp/default/call/run/insertdbdata/" method="post">
            <input type="text" name="a"/>
            <input type="text" name="b"/>
            <input type="submit"/>
        </form>

    I've seen questions about similar problems but haven't really found a solution. Thanks

    Read the article

  • Remote X-windows between new RHEL5 and old Solaris 8

    - by joshxdr
    I have a very small lab network with three boxes: a modern x86-based RHEL3 box, an x86-based RHEL5 box, and a 1998-vintage SPARC Ultra5 with Solaris 8. I can use ssh -X to run a program on the RHEL5 box and view the windows on the RHEL3 box. I believe this uses xauth and magic cookies?? I have followed the X-Windows HOWTO to set up xauth on the Solaris box, but so far no dice. I would like to be able to use the X-windows server on the RHEL3 box with a client program on the Solaris box (program running on Solaris host, windows appearing at Linux host). Is there a trick to this, or have I made a mistake following the instructions for setting up xauth and magic cookie?
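
    A minimal sketch of the two usual routes, assuming sshd on the Solaris box permits X11 forwarding and that xauth lives in /usr/openwin/bin there (both assumptions, not facts from the question); hostnames and the cookie value are placeholders.

        # Route 1: let ssh handle the cookies. From the Linux box that owns the display:
        ssh -X user@solaris-ultra5
        # ...then on the Solaris side, just run the client:
        xclock &

        # Route 2: the manual magic-cookie recipe. On the Linux box, read the cookie:
        xauth list $DISPLAY
        #   e.g.  rhel-box/unix:0  MIT-MAGIC-COOKIE-1  1a2b3c...
        # On the Solaris box, add the same cookie under the TCP display name and
        # point DISPLAY back at the Linux box (its X server must listen on TCP):
        /usr/openwin/bin/xauth add rhel-box:0 MIT-MAGIC-COOKIE-1 1a2b3c...
        DISPLAY=rhel-box:0; export DISPLAY
        xclock &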

    Read the article

  • Understanding connection tracking in iptables

    - by Matt
    I'm after some clarification of the state/connection tracking in iptables. What is the difference between these rules?

        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

    Is connection tracking turned on when a packet is first matched containing -m state --state BLA, or is connection tracking always on? Can/should connection state be used for fast matching like below? E.g. suppose this is some sort of router/firewall (no NAT).

        # Default DROP policy
        iptables -P INPUT DROP
        iptables -P OUTPUT DROP
        iptables -P FORWARD DROP
        # Drop invalid
        iptables -A FORWARD -m state --state INVALID -j DROP
        # Accept established,related connections
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        # Allow ssh through, track connection
        iptables -A FORWARD -p tcp --syn --dport 22 -m state --state NEW -j ACCEPT
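
    One way to see what the conntrack subsystem is doing (it tracks connections whenever its modules are loaded, regardless of which rules reference the state) is to inspect the conntrack table directly; a small sketch, where the conntrack userspace tool is an assumption that may need installing:

        conntrack -L | head                              # list currently tracked connections
        cat /proc/net/nf_conntrack 2>/dev/null | head    # same data via procfs (newer kernels)
        cat /proc/net/ip_conntrack 2>/dev/null | head    # older kernels
        sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max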

    Read the article

  • Boost::Asio - Remove the "null" character at the end of TCP packets

    - by shump
    I'm trying to make a simple MSN client, mostly for fun but also for educational purposes. I started with some TCP packet sending and receiving using Boost Asio, as I want cross-platform support. I have managed to send a "VER" command and receive its response. However, after I send the following "CVR" command, Asio throws an "End of file" error. After some further research I found by packet sniffing that my TCP packets to the messenger server get an extra "null" character (ASCII code 00) at the end of the message. This means that my VER command gets an extra character at the end, which I don't think the messenger server likes, and it therefore shuts down the connection when I try to read the CVR response. This is how my packet looks when sniffing it (its payload):

        (Hex:)  56 45 52 20 31 20 4d 53 4e 50 31 35 20 43 56 52 30 0a 0a 00
        (Char:) VER 1 MSNP15 CVR 0...

    and this is how Adium (a chat client for OS X)'s packet looks:

        (Hex:)  56 45 52 20 31 20 4d 53 4e 50 31 35 20 43 56 52 30 0d 0a
        (Char:) VER 1 MSNP15 CVR 0..

    So my question is whether there is any way to remove the null character at the end of each packet, or if I've misunderstood something and used Asio in a wrong way. My write function (slightly edited) looks like this:

        int sendVERMessage() {
            boost::system::error_code ignored_error;
            char sendBuf[] = "VER 1 MSNP15 CVR0\r\n";
            boost::asio::write(socket, boost::asio::buffer(sendBuf),
                               boost::asio::transfer_all(), ignored_error);
            if (ignored_error) {
                cout << "Failed to send to host!" << endl;
                return 1;
            }
            cout << "VER message sent!" << endl;
            return 0;
        }

    And here's the main documentation on the MSN protocol I'm using. Hope I've been clear enough.

    Read the article

  • ASP.NET 2.0 + Firefox/Safari - UI Issues?

    - by Tejaswi Yerukalapudi
    I'm developing on a system that was originally developed five years ago. I don't have access to the complete source code of the system, but it is completely driven by XML and runs on ASP.NET 2.0. It was originally written for IE6, but since Microsoft has officially decided to dump that, we moved to IE7. Some JavaScript is added on the client side, but nothing that changes the UI has been done. (We had to integrate a credit card reader into the system.) This code is accessed primarily on tablet PCs running Windows, but I'd like to persuade my company to use the iPad. [The tablet somehow costs around $3k. I think selling a $3,000 device to a client when you have the iPad for $500 is ridiculous.] Now, my problem is that if we open it in any other browser (tested on Safari/Firefox), the UI is completely messed up, with elements out of place. Doesn't ASP.NET generate HTML that runs on any browser? My second question is whether there are any credit card readers available in the market that integrate with the iPad. I don't really care about the software part, as it's taken care of by our company; I just need it to read the card details and post them to the server. Thanks, Teja.

    Read the article

  • OS X server VPN local ip

    - by gbrandt
    Hi all, I have 10.6.2 Server on the internet. I want to VPN into it to get access. I start the VPN and it gives me an address in the range I have set, 192.168.2.100-192.168.2.105. However, the server itself does not have a local IP of 192.168.2.x, so I cannot ping it or ssh into it or anything. The machine VPNing in gets an ifconfig entry that looks like this:

        ppp0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1280
            inet 192.168.2.100 --> 70.72.xxx.xxx netmask 0xffffff00

    Where I think it should get:

        ppp0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1280
            inet 192.168.2.100 --> 192.168.2.1 netmask 0xffffff00

    I can't find anywhere to set the local VPN IP address. And I can't find a pptpd.conf file either. Any help is appreciated.
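
    Purely a hedged sketch of one thing to try, not something from the question: give the server itself an address inside the VPN range by aliasing it onto its primary interface, and check what the VPN service believes its address settings are. The interface name and addresses are placeholders.

        # On the 10.6.2 server:
        sudo ifconfig en0 alias 192.168.2.1 netmask 255.255.255.0
        # Inspect the VPN service's address settings via Server Admin's CLI:
        sudo serveradmin settings vpn | grep -i -E 'addr|range'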

    Read the article

  • Why do I get an IllegalAccessError when running my Android tests?

    - by Janusz
    I get the following stack trace when running my Android tests on the emulator:

        java.lang.NoClassDefFoundError: client.HttpHelper
            at client.Helper.<init>(Helper.java:14)
            at test.Tests.setUp(Tests.java:15)
            at android.test.AndroidTestRunner.runTest(AndroidTestRunner.java:164)
            at android.test.AndroidTestRunner.runTest(AndroidTestRunner.java:151)
            at android.test.InstrumentationTestRunner.onStart(InstrumentationTestRunner.java:425)
            at android.app.Instrumentation$InstrumentationThread.run(Instrumentation.java:1520)
        Caused by: java.lang.IllegalAccessError: cross-loader access from pre-verified class
            at dalvik.system.DexFile.defineClass(Native Method)
            at dalvik.system.DexFile.loadClass(DexFile.java:193)
            at dalvik.system.PathClassLoader.findClass(PathClassLoader.java:203)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:573)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:532)
            ... 11 more

    I run my tests from a separate project, and it seems there are some problems with loading the classes from the other project. I have run the tests before, but now they are failing. The project under test runs without problems. Line 14 of the Helper class is:

        this.httpHelper = new HttpHelper(userProfile);

    It creates an HttpHelper instance that is responsible for executing HTTP queries. I think this helper class is somehow not available anymore, but I have no clue why.

    Read the article

  • How to specify allowed exceptions in WCF's configuration file?

    - by tucaz
    Hello! I'm building a set of WCF services for internal use through all our applications. For exception handling I created a default fault class so I can return a treated message to the caller when appropriate, or a generic one when I have no clue what happened. Fault contract:

        [DataContract(Name = "DefaultFault", Namespace = "http://fnac.com.br/api/2010/03")]
        public class DefaultFault
        {
            public DefaultFault(DefaultFaultItem[] items)
            {
                if (items == null || items.Length == 0)
                {
                    throw new ArgumentNullException("items");
                }
                StringBuilder sbItems = new StringBuilder();
                for (int i = 0; i

    Specifying that my method can throw this fault so the consuming client will be aware of it:

        [OperationContract(Name = "PlaceOrder")]
        [FaultContract(typeof(DefaultFault))]
        [WebInvoke(UriTemplate = "/orders", BodyStyle = WebMessageBodyStyle.Bare,
                   ResponseFormat = WebMessageFormat.Json, RequestFormat = WebMessageFormat.Json,
                   Method = "POST")]
        string PlaceOrder(Order newOrder);

    Most of the time we will use just .NET to .NET communication with the usual bindings, and everything works fine since we are talking the same language. However, as you can see in the service contract declaration, I have a WebInvoke attribute (and a webHttp binding) in order to also talk JSON, since one of our apps will be built for iPhone and it will talk JSON. My problem is that whenever I throw a FaultException and have includeExceptionDetails="false" in the config file, the calling client gets a generic HTTP error instead of my custom message. I understand that this is the correct behavior when includeExceptionDetails is turned off, but I think I saw some configuration a long time ago to allow some exceptions/faults to pass through the service boundaries. Is there such a thing? If not, what do you suggest for my case? Thanks a LOT!

    Read the article

  • jCarousel - achieving an active state AND wrap:circular

    - by swisstony
    Hey folks. A while back I implemented the jCarousel image solution for a client that required a numbered active state. After a bit of googling I found the answer, but noticed that the preferred circular option would not work. What would happen is that once the carousel had cycled through all its (5) images, upon the return to the first, the active state would be lost, because, according to jCarousel, it was actually the 6th (the index just keeps on incrementing). I just went ahead and instead used wrap: 'both', which at least had a correctly functioning active state. However, now the client says they don't like this effect and simply want the animation to return to position 1 after the final image. This means I need to get wrap: 'circular' working somehow. Below is my current code. Can someone please solve this one, as it's a little above my head!

        function highlight(carousel, obejctli, liindex, listate) {
            jQuery('.jcarousel-control a:nth-child(' + liindex + ')').attr("class", "active");
        };

        function removehighlight(carousel, obejctli, liindex, listate) {
            jQuery('.jcarousel-control a:nth-child(' + liindex + ')').removeAttr("class", "active");
        };

        jQuery('#mycarousel').jcarousel({
            initCallback: mycarousel_initCallback,
            auto: 5,
            wrap: 'both',
            vertical: true,
            scroll: 1,
            buttonNextHTML: null,
            buttonPrevHTML: null,
            animation: 1000,
            itemVisibleInCallback: highlight,
            itemVisibleOutCallback: removehighlight
        });
        });

    Thanks in advance

    Read the article

  • Private Git repo using Smart HTTP with LDAP authentification

    - by ALOToverflow
    I've been crawling the interwebz and getting my hands dirty for the last few days, but I can't seem to make it all work together. I managed to get an HTTP repo working on Ubuntu 10.04 over Smart HTTP (pull and push over HTTP) for a single repo. This means that I do the initial setup over SSH to the server (git init --bare) and after that the clients can pull and push to it (git clone http://servername/allgitrepos/repo.git). Unfortunately it's impossible to add a new repo without SSHing to the server and adding it manually; i.e. git push http://servername/allgitrepos/repo2.git (allgitrepos is readable, writable and executable for everyone) would fail, complaining about git update-server-info (which seems to be a general error message). So far the repository is anonymous, so I would like to authenticate using LDAP and also use the LDAP creds to make the git commit. So, how can I push new repos to the server, and how can I use the LDAP creds to make the git commit? Thanks
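
    As a rough sketch (assuming SSH stays available for the one-off creation step and that git-http-backend serves /allgitrepos; the server name and account details are placeholders): create the bare repo remotely in one command, then work with it over HTTP, and note that commit authorship comes from client-side git config rather than from the HTTP/LDAP login.

        # One-shot server-side creation of a new repo, then normal HTTP usage:
        ssh servername 'git init --bare /allgitrepos/repo2.git'
        git clone http://servername/allgitrepos/repo2.git

        # Commit identity is whatever the client has configured, so align it
        # with the LDAP account by hand (or via a provisioning script):
        git config user.name  "LDAP Display Name"
        git config user.email "ldap.user@example.com"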

    Read the article

  • Solutions for exporting a remote desktop app (display and audio)

    - by Richard
    I'm looking for a solution that will allow me to export a desktop app running on a server to a client machine. The server is ideally Linux; the desktop is Windows (+ Mac for icing on the cake). The export should be encrypted, and I need to support multiple clients from one server. I only want to export an individual app, not a whole desktop, and ideally I'm looking for open source solutions. The obvious, cheapest, simplest choice is to use X tunnelled over ssh (e.g. using Xming on the desktop), but X doesn't support audio. What are the alternatives? Or is there a way to support audio using X or in parallel to X? Thanks
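
    One combination sometimes used for this (a hedged sketch, not from the question) is X11 forwarding for the display plus PulseAudio tunnelled back over the same SSH session for sound; it assumes a PulseAudio daemon on the desktop listening on TCP port 4713 with module-native-protocol-tcp loaded, and the app name and host are placeholders. xpra is another option worth a look, since it exports individual applications rather than a whole desktop and can forward audio.

        # From the desktop, forward its local PulseAudio port to the server and
        # start the app pointed at that tunnel:
        ssh -X -R 4713:localhost:4713 user@appserver \
            'PULSE_SERVER=tcp:localhost:4713 someapp'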

    Read the article

  • connecting to secure database from website host

    - by jim
    Hello all, I've got a requirement to both read and write data via a .NET web service to a SQL Server database that's on a private network. This database is currently accessed via a VPN connection by remote client software (on standard desktop machines) to get the latest product prices and to upload product stock sales. I've been tasked with finding a way to centralise this access through a web service that the clients then access, rather than them using the VPN route to connect directly to the database. My question relates to my .NET service's relationship to the SQL Server database. What are the options for connecting to a private network VPN from a domain host in order to achieve the functionality of allowing the web service to both read and write data to the database? For now, I'm not too concerned about the client connectivity and security (though I appreciate that this will have to be worked out too); I'm really just interested in discovering the options available in order to allow my .NET web service to connect to the private network in as painless and transparent a way as possible. Switching the database onto public hosting is not an option, so I have to work with the scenario as described above for now, unless there's a compelling rationale presented to do otherwise. Thanks all... jim

    Read the article

  • Why are my log in times taking so long in Linux?

    - by Jamie
    In recent weeks, login times on my Ubuntu server have started timing out, both through SSH and the local command-line console. Examination of /var/log/auth.log yields nothing interesting. How can I diagnose long login times on my Ubuntu server? I should mention, also, that no updates have been performed since the problem started, and that the /, /boot and /usr file systems are mounted read-only. [Edit] This is a stand-alone machine, so it doesn't authenticate with Active Directory, LDAP, etc. Also, the login prompt is responsive, as is the password prompt. Upon typing the password then CR, I'll time out. After four or five tries, I will be able to log in, although I'm worried this will start taking longer.
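
    A hedged diagnostic sketch for narrowing down where the delay happens; these are generic probes rather than anything from the question, and the user and host names are placeholders.

        ssh -vvv user@server                          # from a client: which step stalls?
        time su - someuser -c true                    # on the server: is a purely local login slow too?
        time getent hosts "$(hostname)"               # slow name/reverse lookups?
        grep -i 'UseDNS' /etc/ssh/sshd_config         # sshd doing reverse DNS on each login?
        df -h /var && ls -l /var/log/btmp /var/log/lastlog 2>/dev/null   # huge or unwritable login records?
        mount | grep '(ro'                            # confirm which filesystems really are read-only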

    Read the article

  • connecting to secure database on private network from website host

    - by jim
    Hello all, I've got a requirement to both read and write data via a .NET web service to a SQL Server database that's on a private network. This database is currently accessed via a VPN connection by remote client software (on standard desktop machines) to get the latest product prices and to upload product stock sales. I've been tasked with finding a way to centralise this access through a web service that the clients then access, rather than them using the VPN route to connect directly to the database. My question relates to my .NET service's relationship to the SQL Server database. What are the options for connecting to a private network VPN from a domain host in order to achieve the functionality of allowing the web service to both read and write data to the database? For now, I'm not too concerned about the client connectivity and security (though I appreciate that this will have to be worked out too); I'm really just interested in discovering the options available in order to allow my .NET web service to connect to the private network in as painless and transparent a way as possible. [edit] The web service will also be available to the retail website in order for it to look up product info as well as allocate stock transfers to the same SQL Server db; it will therefore be located on the same domain as the retail site. The option of switching the database onto public hosting is not feasible, so I have to work with the scenario as described above for now, unless there's a compelling rationale presented to do otherwise. Thanks all... jim

    Read the article

  • Determine which version of linux/unix/darwin I have

    - by John
    I have root SSH/terminal access to a Linux server. How do I determine which version of CentOS I have? Some people suggested I run the command cat /etc/redhat-release, but I got an error saying the file was not found. In fact, I'm not entirely sure I'm even using CentOS; that's just what some suggested it might be. Here's a list of commands I tried that gave me a "no file or directory" error:

        cat /etc/*release*
        cat /etc/*version*
        cat /proc/*version*
        cat /proc/*release*

    Here's a list of Linux commands that do not exist:

        lsb_release: command not found
        wget: command not found
        yum: command not found
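
    A few more generic probes that often survive even on stripped-down or oddly built systems (a sketch; none of these are from the question, and availability varies):

        uname -a                                            # kernel version and architecture
        cat /etc/issue /etc/motd 2>/dev/null                # banners often name the distro
        ls /etc/*release /etc/*version 2>/dev/null          # any release files at all?
        rpm -q centos-release redhat-release 2>/dev/null    # RPM-based systems
        dpkg -l 2>/dev/null | head                          # present only on Debian-based systems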

    Read the article

  • When is factory method better than simple factory and vice versa?

    - by Bruce
    Hi all. I'm working my way through the Head First Design Patterns book. I believe I understand the simple factory and the factory method, but I'm having trouble seeing what advantages the factory method brings over the simple factory. If an object A uses a simple factory to create its B objects, then clients can create it like this:

        A a = new A(new BFactory());

    whereas if an object uses a factory method, a client can create it like this:

        A a = new ConcreteA(); // ConcreteA contains a method for instantiating the same Bs that the BFactory above creates, with the method hardwired into the subclass of A, ConcreteA.

    So in the case of the simple factory, clients compose A with a B factory, whereas with the factory method, the client chooses the appropriate subclass of A for the types of B it wants. There really doesn't seem to be much to choose between them. Either you have to choose which BFactory you want to compose A with, or you have to choose the right subclass of A to give you the Bs. Under what circumstances is one better than the other? Thanks all!

    Read the article

  • Apt Stalls When Using HTTP Sources

    - by UltraNurd
    I was getting some (to me) inexplicable behavior from apt-get/aptitude on an admittedly crusty old webserver. While it was otherwise running fine, as soon as I tried a package upgrade, after downloading a few updates it would stall completely, then my SSH session hung (and I was unable to reconnect), thus requiring a hard restart. First, I switched to a different package source in /etc/apt/sources.list, but still got the same behavior. At this point I was assuming the NIC was dying in some weird way... but as soon as I changed the package source to use FTP instead of HTTP, everything worked fine and I was able to upgrade. For now I'm not too concerned, since I have an easy workaround, but it implies that there's something very weird with my network setup, since it seems to be protocol (or port?) specific. I didn't think any of my NAT setup would affect outbound traffic, but I could be crazy. Any ideas what I should try to look for?
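
    A hedged sketch of things worth ruling out, none of them taken from the question (mirror name and interface are placeholders); one classic culprit for "HTTP stalls mid-transfer" is a path-MTU blackhole, so that is probed explicitly alongside a plain fetch:

        curl -v -o /dev/null http://archive.ubuntu.com/ubuntu/dists/   # does a plain HTTP fetch stall too?
        ping -c 3 -M do -s 1472 archive.ubuntu.com                     # full-size, don't-fragment ping (PMTU check)
        ip link show eth0                                              # check the interface MTU
        sudo tcpdump -ni eth0 'tcp port 80'                            # watch for retransmits during apt-get update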

    Read the article

  • Pump Messages During Long Operations + C#

    - by Newbie
    Hi. I have a web service that does a huge computation and takes more than a minute. I have generated the proxy file of the web service, and from my client end I am using the DLL (of course, I generated the proxy DLL). My client-side code is:

        TimeSeries3D t = new TimeSeries3D();
        int portfolioId = 4387919;
        string[] str = new string[2];
        str[0] = "MKT_CAP";
        DateRange dr = new DateRange();
        dr.mStartDate = DateTime.Today;
        dr.mEndDate = DateTime.Today;
        Service1 sc = new Service1();
        t = sc.GetAttributesForPortfolio(portfolioId, true, str, dr);

    But since the server takes too much time to compute, after 1 minute I receive this error message:

        The CLR has been unable to transition from COM context 0x33caf30 to COM context 0x33cb0a0 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations.

    Kindly guide me on what to do. Thanks

    Read the article

  • Why do I get this error in CXF?

    - by Milan
    I want to make a dynamic web service invoker in JSF with CXF, but when I run this simple code I get an error. The code:

        JaxWsDynamicClientFactory dcf = JaxWsDynamicClientFactory.newInstance();
        Client client = dcf.createClient("http://ws.strikeiron.com/IPLookup2?wsdl");

    The error:

        No Factories configured for this Application. This happens if the faces-initialization does not work at all - make sure that you properly include all configuration settings necessary for a basic faces application and that all the necessary libs are included. Also check the logging output of your web application and your container for any exceptions! If you did that and find nothing, the mistake might be due to the fact that you use some special web-containers which do not support registering context-listeners via TLD files and a context listener is not setup in your web.xml. A typical config looks like this: org.apache.myfaces.webapp.StartupServletContextListener

        Caused by: java.lang.IllegalStateException - No Factories configured for this Application. This happens if the faces-initialization does not work at all - make sure that you properly include all configuration settings necessary for a basic faces application and that all the necessary libs are included. Also check the logging output of your web application and your container for any exceptions! If you did that and find nothing, the mistake might be due to the fact that you use some special web-containers which do not support registering context-listeners via TLD files and a context listener is not setup in your web.xml. A typical config looks like this: org.apache.myfaces.webapp.StartupServletContextListener

    Any idea how to solve the problem?

    Read the article

  • Who's setting TCP window size down to 0, Indy or Windows?

    - by François
    We have an application server which has been observed sending headers with TCP window size 0 at times when the network had congestion (at a client's site). We would like to know whether it is Indy or the underlying Windows layer that is responsible for adjusting the TCP window size down from the nominal 64K in adaptation to the available throughput, and we would like to be able to act upon it becoming 0 (nothing gets sent, users wait = no good). So, any info, links, or pointers to Indy code are welcome... Disclaimer: I'm not a network specialist. Please keep the answer understandable for the average me ;-) Note: it's Indy9/D2007 on Windows Server 2003 SP2. More gory details: The TCP zero-window cases happen on the middle tier talking to the DB server. They happen at the same moments when end users complain of slowdowns in the client application (that's what triggered the network investigation). Two major network issues causing bottlenecks have been identified. The TCP zero window happened when there was network congestion, but may or may not be caused by it. We want to know when that happens and have a way to do something (logging at least) in our code. So where to hook (in Indy?) to know when that condition occurs?
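
    Independent of the Indy-level hook, zero-window segments can be confirmed on the wire; a small sketch using the pcap filter for the TCP window field (bytes 14-15 of the TCP header). The host, port and interface are placeholders, WinDump on the Windows box accepts the same filter, and in Wireshark the equivalent display filter is tcp.analysis.zero_window.

        # Capture only segments that advertise a zero receive window:
        tcpdump -ni eth0 'host dbserver and tcp port 1433 and tcp[14:2] = 0'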

    Read the article
