Search Results

Search found 865 results on 35 pages for 'packaging and patching'.

Page 25/35 | < Previous Page | 21 22 23 24 25 26 27 28 29 30 31 32  | Next Page >

  • addins deployment

    - by user326198
    Hello everyone. We have a product that works both standalone and via ClickOnce. We created some components as add-ins (based on Microsoft System.AddIn), and we need a mechanism to update these add-ins on customers' machines in both cases, standalone and ClickOnce. For the standalone case I'm thinking we just send the customer a CD to update the add-ins, and I'm also thinking of deploying the add-ins as packages (like System.IO.Packaging) so I can read an add-in's version and update or delete it. But how do I achieve this in ClickOnce, where the user just presses Update in the application's add-in manager? How can I manage versioning and updating of these add-ins? I hope you can help me architect this add-in update structure.

    Read the article

  • Compiling the ICU SQLite extension statically linked against ICU

    - by Georg
    I want to compile the ICU SQLite extension statically linked against ICU. This is what I've tried; maybe the mistake is obvious to you:

        cd icu/source
        ./runConfigureICU Linux --enable-static --with-packaging-format=archive
        make
        cd ../../icu-sqlite
        gcc -o libSqliteIcu.so -shared icu.c -I../icu/source/common -I../icu/source/i18n -L../icu/source/lib -lsicuuc -lsicui18n -lsicudata

    Loading the result fails:

        sqlite3
        .load "libSqliteIcu.so"
        Undefined symbol: utf8_countTrailBytes

    Files: icu.c (the ICU SQLite extension) downloaded from sqlite.org, and ICU4C 4.2.1 downloaded from icu-project.org. My requirements: it runs on Linux and Windows, and there is only one file I have to distribute, libSqliteIcu.so. Any idea what else I can try? Documentation: the SQLite ICU extension's readme and ICU's readme.
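    One avenue worth trying (a sketch, not a verified fix): when a shared object is linked against static archives, ld only pulls in the archive members it thinks it needs and, by default, does not complain about symbols that stay unresolved, so the error only surfaces at .load time. Forcing the whole archives in and making unresolved symbols a link-time error exposes the problem at build time; the archives also need to be built as position-independent code:

        cd icu/source
        CFLAGS=-fPIC CXXFLAGS=-fPIC ./runConfigureICU Linux --enable-static --with-packaging-format=archive
        make
        cd ../../icu-sqlite
        gcc -shared -fPIC -o libSqliteIcu.so icu.c \
            -I../icu/source/common -I../icu/source/i18n -L../icu/source/lib \
            -Wl,--no-undefined \
            -Wl,--whole-archive -lsicuuc -lsicui18n -lsicudata -Wl,--no-whole-archive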

    Read the article

  • nunit2 NAnt task always returns exit code 0 (TeamCity 5.0)

    - by Jonathan
    Hello, I just cannot for the life of me get my NAnt build file to terminate upon a test failure and return a failing exit code (thus preventing the packaging and artifact steps from running). This is the unit-test part of the NAnt file:

        <target name="unittest" depends="build">
            <nunit2 verbose="true" haltonfailure="false" failonerror="true" failonfailureatend="true">
                <formatter type="Xml" />
                <test assemblyname="Code\AppMonApiTests\bin\Release\AppMonApiTests.dll" />
            </nunit2>
        </target>

    Regardless of what combination of true/false I set the haltonfailure, failonerror and failonfailureatend attributes to, the result is always this:

        [11:15:09]: Some tests has failed in C:\Build\TeamCity\buildAgent\work\ba5b94566a814a34\Code\AppMonApiTests\bin\Release\AppMonApiTests.dll, tests run terminated.
        [11:15:09]: NUnit Launcher exited with code: 1
        [11:15:09]: Exit code 0 will be returned.

    Please help, as I don't want to be publishing binaries where the unit tests have failed! TeamCity 5.0 build 10669. AppMonApiTests.dll references nunit.framework.dll v2.5.3.9345. NUnit isn't installed on the build server or GAC'd. Using NAnt 0.85 and NAntContrib 0.85. Thanks, Jonathan
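    A common workaround (a sketch, assuming a copy of the NUnit console runner in the tree; the paths are examples) is to bypass <nunit2> and invoke nunit-console.exe through NAnt's <exec> task, which propagates the runner's non-zero exit code and fails the build:

        <target name="unittest" depends="build">
            <exec program="tools\nunit\nunit-console.exe" failonerror="true">
                <arg value="Code\AppMonApiTests\bin\Release\AppMonApiTests.dll" />
                <arg value="/xml:TestResult.xml" />
            </exec>
        </target>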

    Read the article

  • CentOS security for lazy admins

    - by Robby75
    I'm running CentOS 5.5 (basic LAMP with Parallels Power Panel and Plesk) and have thus far neglected security (because it's not my full-time job, there is always something more important on my todo list). My server does not contain any secret data and no lives depend on it. Basically, what I want is to make sure it does not become part of a botnet; that is "good enough" security in my case. Anyway, I don't want to become a full-time paranoid admin (constantly watching and patching everything because of some obscure problem), and I also don't care about most security problems like DoS attacks or problems that only exist under some arcane settings. I'm in search of a happy medium: for example, a list of known important problems in the default installation of CentOS 5.5, and/or a list of security problems that have actually been exploited, not the typical endless list of buffer overflows that are "maybe" a problem in some special case. The problem I have with the usually recommended approaches (joining mailing lists, etc.) is that the really important problems (something for which an exploit exists, that is exploitable in a common setup, and where the attacker can do something really useful, i.e. not a DoS) are completely and utterly swamped by millions of tiny security alerts that surely are important for high-security servers, but not for me. Thanks for all suggestions!
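    For the "lazy but not negligent" baseline, the security plugin for yum narrows updates down to the ones that actually fix vulnerabilities (a sketch; note that CentOS base repositories have historically shipped without the security metadata this plugin relies on, so it may report nothing even when updates exist):

        # CentOS/RHEL 5.x: install the plugin once
        yum install yum-security
        # list only updates flagged as security fixes
        yum --security check-update
        # apply only those
        yum --security update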

    Read the article

  • Unable to read .rtf file in VS.NET 2008 Setup

    - by constant learner
    Hello all. I have created a simple Windows application in .NET 2008 and I'm packaging it into a setup file using .NET Setup and Deployment. I am also customizing it to include a License Agreement UI, and I'm pointing the License Agreement window at the license.rtf file that is included in the Application Folder. After a successful build, when I run the setup file I can see the License Agreement window, but I cannot see the content of the file. Any ideas what the issue is? Regards, CL

    Read the article

  • How do I specify the COM+ server when registering a VB6 COM+ application without using clireg?

    - by user85759
    I've found lots of documentation on how to install COM+ components with WiX or an MSI exported from dcomcnfg, but the problem with these approaches is I can't see where to specify the COM+ server. Currently we register the components with clireg and the -s switch, which allows us to specify the COM+ server like so:

        clireg32.exe BLEH.VBR -s COMSERVER -t BLEH.TLB -d

    This is messy to say the least, and I've been trying to get this into some automated form of installation that doesn't involve calling a batch file full of clireg32 calls. Currently WiX is the backbone of our packaging automation, so a solution with WiX would be awesome. Thanks.

    Read the article

  • How to publish multiple jar files to Maven on a clean install

    - by Abhijit Hukkeri
    I have used the maven-assembly-plugin to create multiple jars from one jar. Now the problem is that I have to publish these jars to the local repo, just like other Maven jars publish themselves when they are built with mvn clean install. How will I be able to do this? Here is my POM file:

        <project>
          <parent>
            <groupId>parent.common.bundles</groupId>
            <version>1.0</version>
            <artifactId>child-bundle</artifactId>
          </parent>
          <modelVersion>4.0.0</modelVersion>
          <groupId>common.dataobject</groupId>
          <artifactId>common-dataobject</artifactId>
          <packaging>jar</packaging>
          <name>common-dataobject</name>
          <version>1.0</version>
          <dependencies>
          </dependencies>
          <build>
            <plugins>
              <plugin>
                <groupId>org.jibx</groupId>
                <artifactId>maven-jibx-plugin</artifactId>
                <version>1.2.1</version>
                <configuration>
                  <directory>src/main/resources/jibx_mapping</directory>
                  <includes>
                    <includes>binding.xml</includes>
                  </includes>
                  <verbose>false</verbose>
                </configuration>
                <executions>
                  <execution>
                    <goals>
                      <goal>bind</goal>
                    </goals>
                  </execution>
                </executions>
              </plugin>
              <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <executions>
                  <execution>
                    <id>make-business-assembly</id>
                    <phase>package</phase>
                    <goals>
                      <goal>single</goal>
                    </goals>
                    <configuration>
                      <appendAssemblyId>false</appendAssemblyId>
                      <finalName>flight-dto</finalName>
                      <descriptors>
                        <descriptor>src/main/assembly/car-assembly.xml</descriptor>
                      </descriptors>
                      <attach>true</attach>
                    </configuration>
                  </execution>
                  <execution>
                    <id>make-gui-assembly</id>
                    <phase>package</phase>
                    <goals>
                      <goal>single</goal>
                    </goals>
                    <configuration>
                      <appendAssemblyId>false</appendAssemblyId>
                      <finalName>app_gui</finalName>
                      <descriptors>
                        <descriptor>src/main/assembly/bike-assembly.xml</descriptor>
                      </descriptors>
                      <attach>true</attach>
                    </configuration>
                  </execution>
                </executions>
              </plugin>
            </plugins>
          </build>
        </project>

    Here is my assembly file:

        <assembly>
          <id>app_business</id>
          <formats>
            <format>jar</format>
          </formats>
          <baseDirectory>target</baseDirectory>
          <includeBaseDirectory>false</includeBaseDirectory>
          <fileSets>
            <fileSet>
              <directory>${project.build.outputDirectory}</directory>
              <outputDirectory></outputDirectory>
              <includes>
                <include>com/dataobjects/**</include>
              </includes>
            </fileSet>
          </fileSets>
        </assembly>
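    Two observations, offered as a sketch rather than a verified fix: with <attach>true</attach> the assembly plugin already registers each extra jar for installation, but <appendAssemblyId>false</appendAssemblyId> strips the classifier that would keep the attached artifacts distinct, so they collide with the main jar. Leaving appendAssemblyId at its default (true) lets mvn clean install install each assembly under its assembly id as a classifier. Alternatively, a jar can be pushed into the local repo explicitly (the classifier name here is an example):

        mvn install:install-file -Dfile=target/flight-dto.jar \
            -DgroupId=common.dataobject -DartifactId=common-dataobject \
            -Dversion=1.0 -Dpackaging=jar -Dclassifier=business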

    Read the article

  • .NET WPF Application: Loading a resourced .XPS document

    - by contactmatt
    I'm trying to load a .xps document into a DocumentViewer object in my WPF application. Everything works fine, except when I try loading a resourced .xps document. I can load the .xps document fine using an absolute path, but when I try loading a resourced document it throws a DirectoryNotFoundException. Here's an example of the code that loads the document:

        using System.Windows.Xps.Packaging;

        private void Window_Loaded(object sender, RoutedEventArgs e)
        {
            // Absolute path works:
            //var xpsDocument = new XpsDocument(@"C:\Users\..\Visual Studio 2008\Projects\MyProject\MyProject\Docs\MyDocument.xps", FileAccess.Read);

            // Resource path doesn't work:
            var xpsDocument = new XpsDocument(@"\MyProject;component/Docs/MyDocument.xps", FileAccess.Read);
            DocumentViewer.Document = xpsDocument.GetFixedDocumentSequence();
        }

    When the DirectoryNotFoundException is thrown, it says "Could not find a part of the path: 'C:\MyProject;component\Docs\MyDocument.xps'". It appears to treat the string as a literal path on the computer, rather than resolving the .xps that is stored as a resource within the application.

    Read the article

  • Setting "Run WWW service in IIS 5.0 isolation mode" does not persist in IIS 6

    - by Saul Dolgin
    Our IIS server was recently patched with the latest Microsoft security updates, and since then I am unable to enable the "Run WWW service in IIS 5.0 isolation mode" setting. This setting was enabled prior to patching and somehow changed during the updates. I have tried both the IIS Manager console and the adsutil.vbs approach to change it. Either way, after resetting IIS for the change to take effect, when I go to verify that the isolation mode setting is enabled (true), I find that it reverts back to being disabled (false). Now... the patches have already been rolled back; however, the setting still does not persist when I enable it. While I am trying to research the patches that were applied to see if there is a known issue (or perhaps a change in this setting's behavior), I was hoping someone else might have come across the same problem. Any help towards a workaround would be greatly appreciated!

        >cscript adsutil.vbs set W3SVC/IIs5IsolationModeEnabled TRUE
        IIs5IsolationModeEnabled        : (BOOLEAN) True

        >iisreset
        Attempting stop...
        Internet services successfully stopped
        Attempting start...
        Internet services successfully restarted

        >cscript adsutil.vbs get W3SVC/IIs5IsolationModeEnabled
        IIs5IsolationModeEnabled        : (BOOLEAN) False

    Read the article

  • iptables, blocking large numbers of IP Addresses

    - by Twirrim
    I'm looking to block IP addresses in a relatively automated fashion if they look to be 'screen scraping' content from websites that we host. In the past this was achieved by some ingenious Perl scripts and OpenBSD's pf. pf is great in that you can feed it nice tables of IP addresses and it will efficiently handle blocking based on them. However, for various reasons (before my time) the decision was made to switch to CentOS. iptables doesn't natively provide the ability to block large numbers of addresses (I'm told it wasn't unusual to be blocking 5000+), and I'm a bit cautious about adding that many individual rules to an iptables chain. ipt_recent would be awesome for doing this, plus it provides a lot of flexibility for just severely slowing down access, but there is a bug in the CentOS kernel that is stopping me from using it (reported, but awaiting a fix). Using ipset would entail compiling a more up-to-date version of iptables than comes with CentOS, which, whilst I'm perfectly capable of doing it, I'd rather not do from a patching, security and consistency perspective. Other than those two, it looks like nfblock is a reasonable alternative. Is anyone aware of other ways of achieving this? Are my concerns about several thousand IP addresses in iptables as individual rules unfounded?
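    For reference, this is what the set-based flow looks like where ipset is available (a sketch; the set name and input file are examples, and older ipset releases spell the commands "ipset -N scrapers iphash" / "ipset -A scrapers <ip>" and match with "-m set --set scrapers src"):

        # one hash-based set holds all the addresses
        ipset create scrapers hash:ip
        while read ip; do ipset add scrapers "$ip"; done < blocked.txt
        # a single iptables rule then covers the whole set
        iptables -I INPUT -m set --match-set scrapers src -j DROP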

    Read the article

  • AMD Fusion GPU passthrough to KVM or Xen

    - by BigChief
    Has anyone successfully gotten passthrough working with the GPU portion of AMD's Fusion APUs (the E-350 is my target) on top of a Linux hypervisor? That is, I want to dedicate the GPU to one VM only, excluding all other VMs as well as the host. I know PCI passthrough can work with patches / kernel rebuilds for Xen and KVM. However, since the GPU is on the same chip, I don't know if the host OS will see it as PCI. I know there are a number of tangential issues here, such as: poor Fusion drivers in Linux at the moment; unsuccessful patching efforts seem common; VT-d / IOMMU is required and (from my reading) is supported on the APU, but the motherboard may not offer it; and KVM doesn't appear to support primary graphics cards, only secondary graphics cards (described here). However, I'd like to hear from anyone who has messed with this, even failed attempts. Fedora + KVM is my preferred virtualization platform, but I'm willing to change that if it makes a difference. EDIT: The goal is to do this for a Windows 7 guest (I know it's asking a lot). Regardless, just assume this is HVM, not PV.
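    Two quick host-side checks settle the "is it PCI, and is the IOMMU on" part of the question (a sketch; integrated GPUs, APUs included, are enumerated on the PCI bus, so the first command should list it):

        # does the host see the Fusion GPU as an ordinary PCI device?
        lspci | grep -i vga
        # is AMD's IOMMU active? (needs amd_iommu=on on the kernel command
        # line and the feature enabled in the BIOS)
        dmesg | grep -i -e iommu -e amd-vi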

    Read the article

  • Is there a way to find out when a self-signed certificate for an Adobe AIR application will expire?

    - by tyler
    Hi, I have to release my Adobe AIR application, but the build process was set up by a different developer. (He made a self-signed cert and wrote a batch file that calls adt to package the application.) Adobe mentions that such self-signed certificates are valid for 5 years. Now I have no idea when that certificate will expire, as I don't know when it was created. Also, will my installed application stop working on expiry, or will only new installations fail? Thanks.
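    The creation and expiry dates can be read straight out of the certificate file. Assuming the usual setup where adt signs with a PKCS#12 keystore (the file name and password below are examples), openssl prints the validity window:

        openssl pkcs12 -in signing-cert.p12 -nokeys -clcerts -passin pass:secret \
            | openssl x509 -noout -startdate -enddate

    As far as expiry behavior goes, my understanding is that an expired certificate stops you from signing new packages; applications that are already installed are not removed or disabled by the expiry.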

    Read the article

  • When does software become "proprietary"?

    - by wefwgeweg
    Say a company is using open source libraries or programs and packaging them into a proprietary solution. Or perhaps the engineers have copy-pasted certain sections of those open source libraries and compiled them into a very useful "proprietary" software suite. What legal trouble will this company face, if any? Are you allowed to do this? I mean, the customer doesn't see the source code, only runs the binary files on their computer. For example, if I find an excellent NLP library in Python and decide to use it in a program that I am selling for $4000 USD (I write like 10 lines of code and let the library do the work), could I get into trouble? Would I need to write the NLP library myself from scratch for it to be considered "proprietary"? Danke

    Read the article

  • Kernel Memory Leak in Ubuntu 9.10?

    - by kayahr
    After some days of work (using suspend-to-RAM during the night) I notice I lose more and more available memory. Even when I close all applications the situation doesn't improve. I even went down to the command line and closed ALL running processes except the init process and the bash I'm working in. I unmounted all the RAM disks Ubuntu is using, and I even unloaded all the modules that could be unloaded. But still "free" tells me that 1 GB of RAM is used (without buffers/cache). In "top" there is no visible process that occupies all this memory. The only way to free the memory is to restart the machine. How can I find out where I'm losing all this memory? Is there a known "suspect" that can cause a problem like this? I'm using Ubuntu 9.10 64-bit on a Dell Latitude E6500 (4 GB RAM) with the latest closed-source nvidia driver and GNOME with Compiz. The applications I use most of the time are Firefox and Eclipse. Any hints how I can find the problem? I'm not a kernel hacker, so if the solution is patching the kernel or something like that, then I might be out of the game...
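    Memory that neither "free" nor "top" attributes to a process is often sitting in kernel slab caches, which are accounted for separately (a sketch of where to look; dropping caches needs root):

        # slab usage broken out from the rest of /proc/meminfo
        grep -E 'Slab|SReclaimable|SUnreclaim' /proc/meminfo
        # the biggest slab consumers, one-shot output
        slabtop -o | head -20
        # release clean page/dentry/inode caches (they are rebuilt on demand)
        sync; echo 3 > /proc/sys/vm/drop_caches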

    Read the article

  • Xcode: Using a custom framework

    - by Robert
    The error I'm getting: in /Users/robert/Documents/funWithFrameworks/build/Debug-iphonesimulator/funWithFrameworks.framework/funWithFrameworks, can't link with a main executable. Cliff notes: trying to include the framework, it doesn't want to link. More detail: I'm developing for a mobile device... hint, hint... using Xcode, and I'm trying to make my own custom framework which I can include from another application. So far, I've done the following: created a new project (an iPhone OS window-based app); gone to Target Info and, under Packaging, changed the wrapper extension from "app" to "framework"; added a Copy Headers build phase (Action - New Build Phase - Copy Headers) and changed the role of the headers to 'public'; and, from my application, added the framework to the Frameworks group.
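    The error message itself is a clue worth checking (a sketch): renaming the wrapper from "app" to "framework" changes the bundle's extension but not the build product's Mach-O type, so the target still produces a main executable, which the linker refuses to link against. The file type is easy to inspect:

        # "Mach-O executable" here (rather than a dynamic library) explains
        # the "can't link with a main executable" error
        file funWithFrameworks.framework/funWithFrameworks
        otool -hv funWithFrameworks.framework/funWithFrameworks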

    Read the article

  • Eclipse users: Do you use Aptana too?

    - by Glenn
    This San Mateo development company makes a freely downloadable, convenient packaging of many Eclipse plugins called Aptana. I was recently in an environment where Aptana came pre-installed. Not only is it a good IDE for RoR, it also does a somewhat decent job (sans debugging) for PHP, Python, HTML, CSS, and JavaScript. According to their own web site, their IDE also supports Adobe AIR and the iPhone. If you are currently using Eclipse, do you also use Aptana? What, if any, are the drawbacks to using Aptana?

    Read the article

  • Is Ksplice production ready?

    - by faultyserver
    I would be interested to hear the Server Fault community's experiences with Ksplice in production. Quick blurb from Wikipedia: "Ksplice is a free and open source extension of the Linux kernel which allows system administrators to apply security patches to a running kernel without having to reboot the operating system" and "Ksplice can, without restarting the kernel, apply any source code patch that only needs to modify the kernel code. Unlike other hot update systems, Ksplice takes as input only a unified diff and the original kernel source code, and it updates the running kernel correctly, with no further human assistance required. Additionally, taking advantage of Ksplice does not require any preparation before the system is originally booted (the running kernel does not need to have been specially compiled, for example). In order to generate an update, Ksplice must determine what code within the kernel has been changed by the source code patch." So a few questions: How has the stability been? Any odd issues with its 'rebootless live patching' of the kernel? Kernel panics or horror stories? I have been running it on a few test systems, and so far it has been working as advertised, but I am interested in what other sysadmins' experiences with Ksplice have been before going 'all in' and deploying it on our production servers. So, anybody using Ksplice in production? Update: hmm, not seeing any real activity on this question after a couple of hours (besides some kind upvotes and favs). Maybe to spark some activity I'll also ask a few more questions and see if we can get this discussion going... "If you are aware of Ksplice, is there a reason you are not using it?" "Do you feel it's still too bleeding edge, unproven or untested?" "Does Ksplice not fit well within your current patch-management system?" "Do you hate having systems with long (and secure) uptimes?" ;-)

    Read the article

  • Arch Linux: How to handle patches which only you will use?

    - by user12932
    I'm using FreeRDP together with xmonad, and it has been giving me a lot of trouble. The super key (or "Windows key") is my mod key in xmonad, and it has been interfering with my FreeRDP usage rather annoyingly. Whenever I switched workspaces (or did anything else in xmonad involving the super key), Windows (controlled by the FreeRDP instance in focus) registered a keypress as well. This event, combined with the loss of focus, got the super key stuck in Windows indefinitely: pressing the keys d and r would first show my desktop, then open the Run dialog (as if I were pressing the Windows key constantly). I've tried several versions of FreeRDP, but all exhibited this annoying behavior. So I resorted to patching FreeRDP myself to just ignore the left super key on my keyboard. I love free software for a lot of reasons (especially the ability to alter things like this myself), but I still find it annoying to patch and rebuild FreeRDP on every version (and dependency) change. How do you deal with situations like this? Is there even a "right way" to resolve this issue?
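    The Arch-native way to keep a local patch alive across upgrades is to carry it in a rebuilt package rather than in a hand-patched tree: copy the package's PKGBUILD from ABS (or the AUR), drop the patch in next to it, and rebuild with makepkg when upstream moves. A sketch of just the patch-related lines (file names and the URL are placeholders; on makepkg versions without a prepare() function, apply the patch at the top of build() instead):

        source=("$pkgname-$pkgver.tar.gz::https://example.org/$pkgname-$pkgver.tar.gz"
                "ignore-left-super.patch")

        prepare() {
          cd "$srcdir/$pkgname-$pkgver"
          patch -p1 < "$srcdir/ignore-left-super.patch"
        }

    makepkg -si then builds and installs the patched package; the rebuild has to be repeated on each new upstream release, but nothing else changes.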

    Read the article

  • Cannot find maven dependency, mysterious jar files

    - by natasha
    Hi, I am trying to build a simple war file which has a few JSPs. However, I am running into an odd issue: for some reason, during packaging Maven is pulling 4 jar files into WEB-INF/lib. I have trimmed all the fat from the POM file and have grepped for any references to these jars without success; I cannot figure out where Maven is pulling them from. I tried 'mvn dependency:build-classpath' and the classpath is empty. Please help; these jars are corrupt and I cannot deploy the war file because of them. Thanks, natasha
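    The war plugin copies every compile- and runtime-scoped dependency into WEB-INF/lib, including ones inherited from a parent POM, which grepping the project's own pom.xml won't reveal. Two commands make the full picture visible (a sketch of the diagnosis, not a fix):

        # every resolved dependency, with the path through which it arrived
        mvn dependency:tree -Dverbose
        # the POM as Maven actually sees it after parent/profile merging
        mvn help:effective-pom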

    Read the article

  • Standard way of distributing source code?

    - by penyuan
    I am relatively new to programming and have built a few working C++ command-line programs with Xcode in Mac OS X (no dependencies on Mac-only libraries or APIs). My question is: what is the standard way of packaging and distributing the source code (and possibly compiled binaries)? Almost all Linux programs seem to be distributed such that a user simply needs to run ./configure && make && make install from the source directory. Thank you.
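    That ./configure && make && make install convention comes from the GNU autotools, and the tarball users download is typically produced by the build system itself. A sketch of the round-trip, assuming a project that already has a configure.ac and Makefile.am written ("myapp-1.0" is a placeholder):

        autoreconf --install      # generates ./configure from configure.ac
        ./configure && make       # sanity-check the build locally
        make distcheck            # builds myapp-1.0.tar.gz and verifies it
                                  # configures, builds and installs cleanly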

    Read the article

  • .NET Application using a native DLL (build management)

    - by moogs
    I have a .NET app that depends on a native DLL. The .NET app is set to AnyCPU. In the post-build step, I plan to copy the correct native DLL from some directory (x86 or AMD64) and place it in the target path. However, this doesn't work: on a 64-bit machine, the environment variable PROCESSOR_ARCHITECTURE is "x86" inside Visual Studio. My alternative right now is to create a small tool that outputs the processor architecture, to be used by the post-build step. Is there a better alternative? (Side note: when deploying/packaging the app, the right native DLL is copied for the right platform. But this means we have two separate release folders for x86 and AMD64, which is OK since this is for a device driver. The app is a utility tool for the driver.)
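    The "x86" reading comes from Visual Studio itself being a 32-bit process: under WOW64, a 32-bit process sees PROCESSOR_ARCHITECTURE=x86 and the real architecture in PROCESSOR_ARCHITEW6432. A post-build sketch using that (the DLL name and folder layout are examples):

        rem a 32-bit process on x64 reports the real architecture in
        rem PROCESSOR_ARCHITEW6432; fall back to PROCESSOR_ARCHITECTURE
        if "%PROCESSOR_ARCHITEW6432%"=="AMD64" (set ARCH=AMD64) else (set ARCH=%PROCESSOR_ARCHITECTURE%)
        copy "$(ProjectDir)native\%ARCH%\mynative.dll" "$(TargetDir)"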

    Read the article

  • RHEL 6 vs latest vanilla kernel differences?

    - by Yanko Hernández Álvarez
    What are the differences between the RHEL 6 kernel and the latest kernel.org one? I know RHEL is based on 2.6.32 with some features backported from newer kernels, and that it also has features that are not yet part of the latest vanilla kernel. Is there any comparison of the features of both kernels, so I can tell how advanced the RHEL 6 kernel is vs. the latest vanilla, and vice versa? It doesn't have to be the latest kernel at all, but the more recent the vanilla version, the better. What I want to know is: What features do I lose/gain if I swap the RHEL kernel for the latest kernel.org one? What features are less mature/developed in the latest vanilla kernel than in RHEL's (and vice versa)? (I guess KVM virtualization is one of them, but I'm not so sure.) What things (libraries / programs / etc.) don't interact as well with the latest vanilla kernel as with RHEL's? On a related note: is there ANY way to stay as up to date (kernel-wise) as possible (using RHEL 6) without losing too much in the process? (Any way except doing the patching myself; I don't have the necessary expertise.) Any repo I don't know of? Any alternative? Update: the SRPM doesn't include the patches (see comments), so that route is not possible. Clarification: I'm interested in how "old" the RHEL kernel gets as time goes by, and in knowing when the latest upstream kernel includes all the improvements in the RHEL version.
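    There is no single feature matrix, but the closest first-hand record is the package changelog: Red Hat logs every backported patch and CVE fix there, which shows how far the 2.6.32 base has drifted from vanilla (a sketch; run on the RHEL box):

        # every backport/fix Red Hat has applied, newest first
        rpm -q --changelog kernel | less
        # base version plus Red Hat patch level, e.g. 2.6.32-71.el6.x86_64
        uname -r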

    Read the article

  • Windows VPN for remote site connection drawbacks

    - by Damo
    I'm looking for some thoughts on a particular way of setting up an estate of machines. We have a requirement to install machines into unmanned, remote locations. These machines will log in automatically and perform tasks controlled from a central server. In order to manage patching, AV, updates, etc., I want these machines joined to a dedicated domain for this estate. Some of the locations will only have 3G connectivity (via other hardware); others will be located on customer premises in internal networks. The central server (ours) and the domain controller will be on a public WAN. I see two ways of facilitating this: 1) install a router at each location and have a site-to-site VPN between the remote device and the data centre where the servers are located; 2) have the remote machine dial up and authenticate via a Windows VPN connection to the DC via RAS. Option one is more costly to set up and has a higher operational cost, but it offers better diagnostics if the remote PC goes down. Option two works well but is solely dependent on the VPN connection being made before any communication with the remote machine is possible. In a simple test, I got a Windows 7 machine to dial a VPN prior to domain authentication, then automatically log in to the machine using domain credentials. If the VPN connection drops, it redials. I can also create a scheduled task to reconnect every hour in case of other issues. I'd like to know: why (if at all) is operating a remote network of devices located in various out-of-band locations in this way a bad idea? Consider 300-400 remote machines, all at different sites. I'd rather have 400 VPN connections to a 2008 server than 400 routers, but I'd like to know other opinions on this.

    Read the article

  • CLOSE_WAIT sockets burst - perhaps because of iptables settings?

    - by Fabrizio Giudici
    I have an Ubuntu 12.04 server virtual box where basically the installed software and configuration are the defaults, plus the installation of a Jetty 6 server which serves a few websites. To keep things simple I didn't install Apache httpd, and instead used iptables to expose Jetty (which runs on port 8080) on port 80. These are the results of /sbin/iptables -t nat -L:

        Chain PREROUTING (policy ACCEPT)
        target     prot opt source       destination
        REDIRECT   tcp  --  anywhere     localhost                      tcp dpt:http redir ports 8080
        REDIRECT   tcp  --  anywhere     Ubuntu-1104-natty-64-minimal   tcp dpt:http redir ports 8080

        Chain INPUT (policy ACCEPT)
        target     prot opt source       destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source       destination
        REDIRECT   tcp  --  anywhere     localhost                      tcp dpt:http redir ports 8080
        REDIRECT   tcp  --  anywhere     Ubuntu-1104-natty-64-minimal   tcp dpt:http redir ports 8080

        Chain POSTROUTING (policy ACCEPT)
        target     prot opt source       destination

    I must confess I have a shallow comprehension of how iptables works, in particular of the different kinds of chains. This setup works, but sometimes I get an explosion of sockets that stay permanently in the CLOSE_WAIT state. I know what this state means, but since I didn't write the code that manages the servlets (Jetty handles them), I can't fix the problem by patching my code. Eventually the number of CLOSE_WAIT sockets builds up and makes the server unresponsive, so I have to restart Jetty. I've looked around for similar problems with CLOSE_WAIT and only found cases related to the programmer's own code, or problems with Tomcat, not Jetty. I was wondering whether they could be related to a partially broken iptables configuration (the alternative is a bug in Jetty 6, but I first want to exclude other possible causes). Thanks.
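    Before blaming iptables, it helps to confirm who owns the stuck sockets: CLOSE_WAIT means the remote end closed the connection and the local application never called close(), so the owning process is the prime suspect (a sketch of the diagnosis):

        # show sockets stuck in CLOSE_WAIT together with the owning process
        ss -tanp state close-wait
        # or the equivalent count via netstat
        netstat -tanp | awk '$6 == "CLOSE_WAIT"' | wc -l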

    Read the article

< Previous Page | 21 22 23 24 25 26 27 28 29 30 31 32  | Next Page >