Search Results

Search found 1977 results on 80 pages for 'concurrent modification'.


  • Problems with LDAP auth in Apache, works only for one group

    - by tore-
    Hi, I'm currently publishing some Subversion repos through Apache:

        <Location /dev/>
            DAV svn
            SVNPath /opt/svn/repos/dev/
            AuthType Basic
            AuthName "Subversion repo authentication"
            AuthBasicProvider ldap
            AuthzLDAPAuthoritative On
            AuthLDAPBindDN "CN=readonlyaccount,OU=Objects,DC=invalid,DC=now"
            AuthLDAPBindPassword readonlyaccountspassword
            AuthLDAPURL "ldap://invalid.domain:389/OU=Objects,DC=invalid,DC=domain?sAMAccountName?sub?(objectClass=*)"
            Require ldap-group cn=dev,ou=SVN,DC=invalid,DC=domain
        </Location>

    This setup works great, but now we want to give an LDAP group read-only access to our repo, so my Apache config looks like this:

        <Location /dev/>
            DAV svn
            SVNPath /opt/svn/repos/dev/
            AuthType Basic
            AuthName "Subversion repo authentication"
            AuthBasicProvider ldap
            AuthzLDAPAuthoritative On
            AuthLDAPBindDN "CN=readonlyaccount,OU=Objects,DC=invalid,DC=now"
            AuthLDAPBindPassword readonlyaccountspassword
            AuthLDAPURL "ldap://invalid.domain:389/OU=Objects,DC=invalid,DC=domain?sAMAccountName?sub?(objectClass=*)"
            <Limit OPTIONS PROPFIND GET REPORT>
                Require ldap-group cn=dev-ro,ou=SVN,dc=invalid,dc=domain
            </Limit>
            <LimitExcept OPTIONS PROPFIND GET REPORT>
                Require ldap-group cn=dev-rw,ou=SVN,dc=invalid,dc=domain
            </LimitExcept>
        </Location>

    All of my user accounts are under OU=Objects,DC=invalid,DC=domain and all groups related to Subversion are under ou=SVN,dc=invalid,dc=domain. The problem: after this modification, only users in the dev-ro LDAP group are able to authenticate. I know that authentication against LDAP works, since my Apache logs show my usernames:

        10.1.1.126 - tore [...] "GET /dev/ HTTP/1.1" 200 339 "-" "Mozilla/5.0 (...)"
        10.1.1.126 - - [...] "GET /dev/ HTTP/1.1" 401 501 "-" "Mozilla/4.0 (...)"
        10.1.1.126 - readonly [...] "GET /dev/ HTTP/1.1" 401 501 "-" "Mozilla/4.0 (...)"

    The 1st line is a user in group dev-rw, the 2nd line is an unauthenticated user, and the 3rd line is the unauthenticated user after authenticating as a user in group dev-ro. So I think I've messed up my Apache config. Any advice?
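
    Not a confirmed diagnosis, but one thing worth ruling out is that each Limit block only admits a single group. As far as I recall, Apache 2.2 treats multiple Require lines in the same context as alternatives, so a purely illustrative sketch that grants read access to members of either group, based on the config above, would look like this:

        <Limit OPTIONS PROPFIND GET REPORT>
            # either group may read; multiple Require lines act as OR in 2.2
            Require ldap-group cn=dev-ro,ou=SVN,dc=invalid,dc=domain
            Require ldap-group cn=dev-rw,ou=SVN,dc=invalid,dc=domain
        </Limit>
        <LimitExcept OPTIONS PROPFIND GET REPORT>
            Require ldap-group cn=dev-rw,ou=SVN,dc=invalid,dc=domain
        </LimitExcept>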

    Read the article

  • Ways to go about optimizing website performance WordPress, Amazon EC2 Apache and RDS MySQL

    - by fuzzybee
    I have 6 WordPress websites running on a single EC2 instance. All of the websites connect to databases in the same RDS instance. Earlier today, traffic to the largest website peaked and the RDS instance became the bottleneck - CPU utilization was at 100% for over an hour. It affected all of my websites, as it took them all forever to load. In order to prevent such an issue from happening again, which of the following will matter most, so that I know where to invest time and effort first? (I will work on all of them later; I just need to prioritise now.)

        1. Improve caching for all websites
        2. Fine-tune the database server
        3. Fine-tune my Apache server

    What will be the effect on user experience for my websites? Some quick searches show that I should limit the number of concurrent connections to my web server, but wouldn't that prevent users from accessing my websites? More background: my largest website has 140k visits and 660k page views a month; the other 5 websites should add up to much less than that. I'm using a large EC2 instance as the web server and a medium RDS instance as the database server. What I've already done: use the W3 Total Cache plugin for caching on most of the websites, especially the largest one (there is barely anything else in terms of caching I could do for the largest website). Am I using my resources wastefully, or is there simply not enough resource for my websites - or rather, how do I answer that question myself?
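
    On the worry about limiting concurrent connections: with Apache's prefork MPM, MaxClients caps the number of simultaneous worker processes, and requests beyond that queue on the listen backlog rather than being rejected outright, so users see slower responses instead of errors. A hedged sketch with placeholder values (not tuned recommendations for this setup):

        # httpd.conf (prefork MPM) -- illustrative values only
        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients          150   # cap on concurrent Apache workers
            MaxRequestsPerChild 1000  # recycle workers to limit memory growth
        </IfModule>
        KeepAlive On
        KeepAliveTimeout 3            # short keep-alive frees workers sooner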

    Read the article

  • Tool to modify properties/metadata of a PDF? i.e. Change "Title", "Author"? Sony Reader showing som

    - by Chris W. Rea
    I own a Sony Reader PRS-600 ebook reader. I bought a ton of Manning Publications ebooks (DRM-free) recently. Many of the books are PDFs since not all the ones I wanted are available in epub format. The problem: Some of the PDF books I purchased have incorrect or missing metadata. Making things worse, the Sony Reader only displays the "Title" from the PDF metadata when displaying book titles in the reader's collection of books! The Reader doesn't display the filename. So, even though I have a PDF informatively named "Windows PowerShell In Action.pdf", it shows up as "untitled" in the Reader. Imagine how useful the Reader's list of book titles becomes when many are just "untitled" or "unnamed document" ! Yes, it is maddening. So – short of expecting the publisher to fix the files or Sony to add a filename-based list instead, I'm looking for a way to fix the PDF metadata. I can view the metadata with Adobe Reader, but it doesn't permit modification of the properties. Leading to: Question: Is there a tool – free, or cheap – and either for PC or Mac, that can modify the properties / metadata of a DRM-free PDF document? I want to correct "Title" and "Author" fields, specifically.
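
    One tool often suggested for this (a hedged pointer, not something I've verified against the PRS-600's metadata parsing) is the free exiftool, which can rewrite the Title and Author entries of a DRM-free PDF in place; the author value below is just a placeholder:

        exiftool -Title="Windows PowerShell In Action" -Author="Author Name" "Windows PowerShell In Action.pdf"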

    Read the article

  • Win 2008 R2 terminal server and redirected printer queue security

    - by Ian
    I have a case where I need a non-privileged account to be able to make a modification to a redirected printer. I know it's not advisable, but we're not giving users access - the changes will be made in code. So, following the docs (http://technet.microsoft.com/en-us/library/ee524015(WS.10).aspx), I modified the default security for new printer queues. This doesn't work, though, as Windows doesn't seem to apply the privileges you configure in the printer admin tool to redirected printer queues. As a test I added a non-privileged test user to the default security tab in the printer admin tool (Control Panel - Admin Tools - Printer Admin). I assigned it all privileges (it's a test) and logged the user into the terminal server. The redirected printers duly appeared as usual. However, if I open the printer properties - security tab, the user appears in the list of accounts/groups, but the options I selected (all privileges) are not set. Instead the user's special permissions box is marked, and when I click on 'advanced options' and view them, there is nothing marked. So, something is clearing these options... the question is why, and how can I convince it not to? Ian

    Read the article

  • Performance Test and TCP tuning

    - by Mithir
    We are in the process of performance testing an application which receives TCP requests and converts them to SOAP requests (WCF, httpBinding) which other services work on. The server is Windows Server 2008 R2. The TCP requests are received by a TcpListener instance (.NET C#). There are 3 HTTP-bound WCF services running on the same server. We have built a performance test client whose goal is to simulate multiple concurrent requests (each request has to be different and recognizable by the application). We built a test running 150 requests at the same time (from 150 different threads), and we noticed straight away that some requests get the TCP connection slowly, but once they get it, they act fast. A single request writes twice on the same connection: the request and an application ack. Although a single request + ack can take about 150 ms, the 150-request test takes about 7 seconds. The problem: when we try to run this test from 2 different computers we lose requests. Some client requests get "no connection was made because the target machine actively refused it". So I got here and became convinced it was because of the backlog. I changed the TcpListener parameters and made the registry AFD backlog changes written here, but it still didn't work, so I applied all of the suggested TCP tuning plus some netsh commands which were recommended, but still no change - we still get that error. Is there anything else I need to know? Are there any other solutions?
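
    For reference, the listener-side backlog in .NET is set through the TcpListener.Start(int) overload. A minimal C# sketch (not the original application's code) of a listener that accepts with an explicit backlog and hands connections off immediately, so bursts queue at the OS instead of being refused:

        using System;
        using System.Net;
        using System.Net.Sockets;
        using System.Threading;

        class BacklogSketch
        {
            static void Main()
            {
                TcpListener listener = new TcpListener(IPAddress.Any, 9000);
                listener.Start(512);  // explicit accept backlog; OS/AFD settings still clamp it

                while (true)
                {
                    TcpClient client = listener.AcceptTcpClient();
                    ThreadPool.QueueUserWorkItem(Handle, client);  // never block the accept loop
                }
            }

            static void Handle(object state)
            {
                using (TcpClient client = (TcpClient)state)
                {
                    // hypothetical worker: read the request, write the application ack, etc.
                }
            }
        }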

    Read the article

  • Managing multiple Apache proxies simultaneously (mod_proxy_balancer)

    - by Hank
    The frontend of my web application is currently formed by two Apache reverse proxies, using mod_proxy_balancer to distribute traffic over a number of backend application servers. Both frontend reverse proxies, running on separate hosts, are accessible from the internet. DNS round robin distributes traffic over both. In the future, the number of reverse proxies is likely to grow, since the web application is very bandwidth-heavy. My question is: how do I keep the state of both reverse balancers/proxies in sync? For example, for maintenance purposes, I might want to reduce the load on one of the backend appservers. Currently I can do that by accessing the Balancer-Manager web form on each proxy and changing the distribution rules. But I have to do that on each proxy manually and make sure I enter the same thing on both. Is it possible to "link" multiple instances of mod_proxy_balancer? Or is there a tool out there that connects to a number of instances and updates all of them with the same information? Update: the tool should retrieve the runtime status and make runtime changes, just like the existing Balancer-Manager, only for a number of proxies - not just for one. Modification of configuration files is not what I'm interested in (as there are plenty of tools for that).
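
    Purely as a sketch of one possible approach: since the Balancer-Manager is just a web form, the same change can be replayed against every proxy with a small script. The parameter names below (b, w, lf, dw) mirror the Apache 2.2 balancer-manager form, but they vary between versions and newer releases also require a nonce scraped from the page, so treat this as an outline to verify rather than a working tool:

        #!/bin/sh
        # Push the same worker change to every front-end proxy (illustrative only).
        PROXIES="proxy1.example.com proxy2.example.com"   # assumed hostnames
        for p in $PROXIES; do
            curl -s "http://$p/balancer-manager?b=mycluster&w=http://appserver1:8080&lf=1&dw=Disable" > /dev/null
        done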

    Read the article

  • JSP Content Issue in Tomcat

    - by gautam vegeta
    There is one application where I work where manual builds are still used, i.e. manually moving the servlet classes and JSP files from Dev to QA and finally to Prod. This is the method used in this application and it can't be changed, for some weird reasons. BTW, this is not the problem. We recently did a manual build where we transferred JSP files from QA to Prod, and we noticed that the JSP content served did not correspond to the updated JSPs but was the same as the content of the JSP files which were present on the server prior to the deployment. We did not restart Tomcat, since JSP files are normally picked up automatically when their contents change. This problem persisted even 6 hours after deployment, allowing for any delay caused by differing time standards. To fix it we had to go into every JSP file individually, type something, save it, delete that change and save it again. Then it worked perfectly. But the JSP file content before and after was never changed; we just did this to change the modification date. If we think of it as a timestamp problem, how can this be possible, given that the old JSP files which were present on the server prior to deployment were at least a month old and the ones being deployed were definitely newer than that? Why did this happen? This did not happen when we did the same type of deployments earlier. How can we prevent this from happening in the future?
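
    For what it's worth, a hedged alternative to editing every file by hand is to bump the modification time of all deployed JSPs in one go, so Tomcat's background compiler treats them as newer than its cached servlets, or simply to clear the work directory (paths are assumptions to adapt):

        # touch every JSP in the deployed webapp
        find /path/to/tomcat/webapps/yourapp -name '*.jsp' -exec touch {} +

        # or force recompilation by removing the generated servlets (Tomcat recreates them)
        rm -rf /path/to/tomcat/work/Catalina/localhost/yourapp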

    Read the article

  • How do I make rsync also check ctime?

    - by Benoît
    rsync detects file modification by comparing size and mtime. However, if for any reason the mtime is unchanged, rsync won't detect the change, although it's possible to spot it by looking at the ctime. Of course, I can tell rsync to compare the whole file contents, but that's very, very expensive. Is there a way to make rsync smarter, for example by checking that mtime+size are the same AND that the ctime isn't newer than the mtime (on both source and destination)? Or should I open a feature request? Here's an example.

    Create 2 files with the same content and atime/mtime:

        benoit@debian:~$ mkdir d1 && cd d1
        benoit@debian:~/d1$ echo Hello > a
        benoit@debian:~/d1$ cp -a a b

    Rsync them to another (non-existing) directory:

        benoit@debian:~/d1$ cd ..
        benoit@debian:~$ rsync -av d1/ d2
        sending incremental file list
        created directory d2
        ./
        a
        b
        sent 164 bytes received 53 bytes 434.00 bytes/sec
        total size is 12 speedup is 0.06

    OK, everything is synced:

        benoit@debian:~$ grep . d*/*
        d1/a:Hello
        d1/b:Hello
        d2/a:Hello
        d2/b:Hello

    Update file 'b' (same size), then reset its atime/mtime:

        benoit@debian:~$ echo World > d1/b
        benoit@debian:~$ touch -r d1/a d1/b

    Attempt to rsync again:

        benoit@debian:~$ rsync -av d1/ d2
        sending incremental file list
        sent 63 bytes received 12 bytes 150.00 bytes/sec
        total size is 12 speedup is 0.16

    Nope, rsync missed the change:

        benoit@debian:~$ grep . d*/*
        d1/a:Hello
        d1/b:World
        d2/a:Hello
        d2/b:Hello

    Telling rsync to compare the file content:

        benoit@debian:~$ rsync -acv d1/ d2
        sending incremental file list
        b
        sent 144 bytes received 31 bytes 350.00 bytes/sec
        total size is 12 speedup is 0.07

    gives the correct result:

        benoit@debian:~$ grep . d*/*
        d1/a:Hello
        d1/b:World
        d2/a:Hello
        d2/b:World
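
    As a workaround sketch (not a built-in rsync feature): GNU find can print each file's ctime and mtime, so files whose ctime is newer than their mtime can be fed to rsync explicitly. This errs on the side of re-sending too much, breaks on filenames containing whitespace, and is only meant as an illustration:

        cd d1
        find . -type f -printf '%C@ %T@ %P\n' | awk '$1 > $2 {print $3}' > /tmp/ctime-suspects
        rsync -av --files-from=/tmp/ctime-suspects . ../d2/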

    Read the article

  • Ant build classpath jar generates "error in opening zip file"

    - by Uberpuppy
    I have a project built in Eclipse with dependencies on 3rd-party jars. I'm trying to generate a suitable build file for Ant, using Eclipse's built-in "export Ant buildfile" feature as a starting block. When I run the build target I get the following error:

        [javac] error: error reading /base/repo/FabTrace/lib/apache/geronimo/specs/geronimo-j2ee-management_1.0_spec/1.0/geronimo-j2ee-management_1.0_spec-1.0.jar; error in opening zip file

    (NB: the error always references the first jar listed in the classpath.) The whole build file (auto-generated by Eclipse) looks like this:

        <project basedir="." default="build" name="FabTrace">
          <property environment="env"/>
          <property name="ECLIPSE_HOME" value="/opt/apps/eclipse"/>
          <property name="debuglevel" value="source,lines,vars"/>
          <property name="target" value="1.5"/>
          <property name="source" value="1.5"/>
          <path id="JUnit 4.libraryclasspath">
            <pathelement location="${ECLIPSE_HOME}/plugins/org.junit4_4.5.0.v20090824/junit.jar"/>
            <pathelement location="${ECLIPSE_HOME}/plugins/org.hamcrest.core_1.1.0.v20090501071000.jar"/>
          </path>
          <path id="FabTrace.classpath">
            <pathelement location="bin"/>
            <pathelement location="lib/apache/geronimo/specs/geronimo-j2ee-management_1.0_spec/1.0/geronimo-j2ee-management_1.0_spec-1.0.jar"/>
            <pathelement location="lib/apache/geronimo/specs/geronimo-jms_1.1_spec/1.0/geronimo-jms_1.1_spec-1.0.jar"/>
            <pathelement location="lib/commons-collections/commons-collections/3.2/commons-collections-3.2.jar"/>
            <pathelement location="lib/commons-io/commons-io/1.4/commons-io-1.4.jar"/>
            <pathelement location="lib/commons-lang/commons-lang/2.1/commons-lang-2.1.jar"/>
            <pathelement location="lib/commons-logging/commons-logging/1.1/commons-logging-1.1.jar"/>
            <pathelement location="lib/commons-logging/commons-logging-api/1.1/commons-logging-api-1.1.jar"/>
            <pathelement location="lib/javax/activation/activation/1.1/activation-1.1.jar"/>
            <pathelement location="lib/javax/jms/jms/1.1/jms-1.1.jar"/>
            <pathelement location="lib/javax/mail/mail/1.4/mail-1.4.jar"/>
            <pathelement location="lib/javax/xml/bind/jaxb-api/2.1/jaxb-api-2.1.jar"/>
            <pathelement location="lib/javax/xml/stream/stax-api/1.0-2/stax-api-1.0-2.jar"/>
            <pathelement location="lib/junit/junit/4.4/junit-4.4.jar"/>
            <pathelement location="lib/log4j/log4j/1.2.15/log4j-1.2.15.jar"/>
            <pathelement location="lib/apache/camel/camel-jms-2.0-M1.jar"/>
            <pathelement location="lib/spring/spring-2.5.6.jar"/>
            <pathelement location="lib/apache/camel/camel-bundle-2.0-M1.jar"/>
            <pathelement location="lib/backport-util-concurrent/backport-util-concurrent-3.1.jar"/>
            <pathelement location="lib/commons-pool/commons-pool-1.4.jar"/>
            <pathelement location="lib/apache/camel/camel-activemq-1.1.0.jar"/>
            <pathelement location="lib/apache/activemq/activemq-camel-5.2.0.jar"/>
            <pathelement location="lib/jencks/jencks-2.2-all.jar"/>
            <pathelement location="lib/jencks/jencks-amqpool-2.2.jar"/>
            <pathelement location="lib/activemq/apache-activemq-5.3.1/activemq-all-5.3.1.jar"/>
            <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/optional/xbean-spring-3.6.jar"/>
            <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/activemq-core-5.3.1.jar"/>
            <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/camel-jetty-2.2.0.jar"/>
            <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/web/jetty-6.1.9.jar"/>
            <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/web/jetty-util-6.1.9.jar"/>
            <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/web/jetty-xbean-6.1.9.jar"/>
            <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/optional/activemq-optional-5.3.1.jar"/>
            <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/web/geronimo-servlet_2.5_spec-1.2.jar"/>
            <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/optional/spring-beans-2.5.6.jar"/>
            <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/optional/spring-context-2.5.6.jar"/>
            <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/optional/spring-core-2.5.6.jar"/>
            <path refid="JUnit 4.libraryclasspath"/>
          </path>
          <target name="init">
            <mkdir dir="bin"/>
            <copy includeemptydirs="false" todir="bin">
              <fileset dir="src/main/java">
                <exclude name="**/*.launch"/>
                <exclude name="**/*.java"/>
              </fileset>
            </copy>
            <copy includeemptydirs="false" todir="bin">
              <fileset dir="src/test/java">
                <exclude name="**/*.launch"/>
                <exclude name="**/*.java"/>
              </fileset>
            </copy>
            <copy includeemptydirs="false" todir="bin">
              <fileset dir="config">
                <exclude name="**/*.launch"/>
                <exclude name="**/*.java"/>
              </fileset>
            </copy>
          </target>
          <target name="clean">
            <delete dir="bin"/>
          </target>
          <target depends="clean" name="cleanall"/>
          <target depends="build-subprojects,build-project" name="build"/>
          <target name="build-subprojects"/>
          <target depends="init" name="build-project">
            <echo message="${ant.project.name}: ${ant.file}"/>
            <javac debug="true" debuglevel="${debuglevel}" destdir="bin" source="${source}" target="${target}">
              <src path="src/main/java"/>
              <classpath refid="FabTrace.classpath"/>
            </javac>
            <javac debug="true" debuglevel="${debuglevel}" destdir="bin" source="${source}" target="${target}">
              <src path="src/test/java"/>
              <classpath refid="FabTrace.classpath"/>
            </javac>
            <javac debug="true" debuglevel="${debuglevel}" destdir="bin" source="${source}" target="${target}">
              <src path="config"/>
              <classpath refid="FabTrace.classpath"/>
            </javac>
          </target>
        </project>

    I know there's Eclipse-specific stuff in here, but I get the same results with or without it. I've done the usual Google search and trawled around without success. I can confirm that all the jars really do exist. I've also tried from the command line and as sudo - again, same results. Any help would be greatly appreciated. Cheers
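
    "error in opening zip file" usually means that particular jar is truncated, corrupt, or not actually a zip archive (for example an HTML error page saved in its place by a repository proxy), so a quick check worth trying is to test every jar on the classpath; this sketch assumes no spaces in the paths, which holds for the list above:

        for jar in $(find lib -name '*.jar'); do
            unzip -tqq "$jar" > /dev/null 2>&1 || echo "cannot read: $jar"
        done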

    Read the article

  • Can Safari 5.1 for Mac OS display favicons for bookmarks in the Bookmarks Bar?

    - by Greg R.
    When bookmarking a web site, most contemporary browsers will display the site's favicon next to the bookmark, both in the bookmark view and in the bookmark toolbar. This is a useful feature. In the bookmark toolbar you can edit the name of the bookmark to be blank, effectively leaving the favicon there as an easily identifiable "button" from which to launch the bookmark. This allows you to make more effective use of the space in the bookmark toolbar. I use this approach effectively in Firefox, Chrome, and IE; for example, a portion of my Bookmarks Toolbar in Firefox shows a row of bare favicons. However, in Safari, no favicon is ever displayed for bookmarks. In the full bookmark view only a generic globe icon is displayed. In the Bookmarks Bar in Safari, no icon at all is displayed, which means the habit of removing the bookmark name and leaving the favicon is useless. In the same configuration (synced between browsers via Xmarks) in Safari, there is just a blank space where the favicons should be. The bookmark is there - if you hover over it, the blank space changes color to indicate the presence of a bookmark, and a tooltip with the URL pops up after about two seconds - but it's really quite unusable. So, the question: is there an extension, plug-in, or modification of some sort that will enable the display of favicons for bookmarks in Safari (OS X Lion 10.7.3, Safari version 5.1.3)?

    Read the article

  • Can I tell if crashplan has backed up a particular file in a particular state?

    - by Chris Cogdon
    I would like to be able to tell, programmatically, if CrashPlan has backed-up a particular file, including the current updates to that file. I.e., that the current contents of a file are backed up. It's relatively easy to tell when CrashPlan last backed up a file: its file name appears in /usr/local/crashplan/log/backup_files.log.0, and with some accuracy, I could compare the backup time with the last modification time to the file, but that method appears to be somewhat dubious. A couple of methods I could think of, but I don't know how: Compare the current file to CrashPlan's metadata about that file. This needs knowledge about the format of CrashPlan's "cache" files as well as the hashing system used. This might be achievable through the CLI, but the CLI is just a portal into the GUI, and I need something that's scriptable. Restore the file to a temporary directory, and compare it. Unfortunately, there is no CLI to do restores; the GUI is the only way. I'll describe what I'm trying to achieve. It would be nice to know how to do the above, even if there are alternative methods for the following: I'm using CrashPlan for continuous backups to my PostgreSQL database, using WAL archives. In the current configuration, the archive command copies the files to an archive directory, which is backed up by CrashPlan. Every so often I manually confirm (or just trust) a group of WALs are backed up, and remove them from the archive directory, and occasionally do a restore through the GUI to ensure I can retrieve current and "deleted" WALs. The xlog directory is backed-up, too, so I have a good chance of doing a near-full restore even if a particular xlog hasn't been archived by PostgreSQL yet. I'd like to be able to automate this process, which necessitates either confirming the backup status and recency, or automating a restore for comparison purposes. (As a bonus, if the method is trustworthy, I could turn the "archive_command" from "copy to archive directory" into "confirm CrashPlan has backed up the current version", and do away with the archive directory completely). (And, yes, I'm doing regular pg_dumpall's, in addition to the above.)

    Read the article

  • Is there an IE8 setting or policy to make it work like IE7 with respect to persistent connections?

    - by Stephen Pace
    I am working with a commercial application running on XP using IIS 5.1. Periodically the application is returning an IIS error "There are too many people accessing the Web site at this time." This is caused by Microsoft artificially limiting the number of connections (10) under IIS 5.1 under Windows XP, but in this case, there is really only one user (albeit a few tabs open at a time). Microsoft suggests you can reduce the problem by turning off HTTP Keep-Alives for that particular web site: http://support.microsoft.com/kb/262635 If you use IIS 5.0 on Windows 2000 Professional or IIS 5.1 on Microsoft Windows XP Professional, disable HTTP keep-alives in the properties of the Web site. When you do this, a limit of 10 concurrent connections still exists, but IIS does not maintain connections for inactive users. I may do that; however, I'm worried about performance degradation. However, I also notice that IE8 appears to handle this differently than IE7. By default, IE6 and IE7 use 2 persistent connections while IE8 uses 6. Perhaps in this case IE8 itself is generating multiple connections in an attempt to be faster, but those additional connections are overwhelming the artificially limited IIS 5.1 on XP? Assuming that is the case, is there an Internet Explorer option, registry setting, or policy I can set to force IE8 to behave like IE7 with respect to persistent connections? I would not set this for all users, but for the small number of users that used this application, it might solve their intermittent problem until the application can be rehosted on Windows Server 2008. Thanks.
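
    The per-server connection count can be pinned through the documented Internet Settings registry values; whether IE8 honours them in every case is something to verify against the exact build, so the following .reg sketch is a candidate to test for the affected users rather than a guaranteed fix:

        Windows Registry Editor Version 5.00

        ; Sketch: limit IE to 2 connections per server (IE7-like behaviour) for the current user
        [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
        "MaxConnectionsPerServer"=dword:00000002
        "MaxConnectionsPer1_0Server"=dword:00000002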

    Read the article

  • How to serve pages through multiple frameworks/template engines efficiently

    - by Leftium
    I would like to render a file that has both PHP tags and Web2py tags mixed together. To do this, I would like the web server to pass the file through Web2py first, then PHP. I found a method to call PHP from Web2py via Python (based on this method for running PHP on top of Django), but this approach loses the benefits of any server optimizations from mod_php or FastCGI, such as caching and multi-threaded operation: a new process is created for each PHP request, which is very slow. Is there a better way to efficiently render pages with both Web2py (Python) and PHP tags in the same file? Note that I am not looking for methods of serving PHP-only and Web2py-only files from the same server/domain. I prefer solutions for Apache 2 or Cherokee, but I'm open to using other web servers. Background info: I prefer to develop in Web2py, but we have a pre-existing system written in PHP. I would like to augment the PHP system with some of Web2py's features, such as its Auth authentication/user management and the T() internationalization object. It would also make it much easier to port the PHP project to Web2py if it could be done piecemeal. Since the PHP project consists of many files, it would greatly help if they did not need modification.

    Read the article

  • Excel 2007 - "The macro may not be available in this workbook" Error

    - by Psycho Bob
    We use an Excel sheet that has been protected to prevent modification of it from end users. All in all they are only able to edit certain tabs to add information that will then be used to generate information on other tabs using equations and such. On the tab with the equations, a button is present called "Prep for Internal Hard Copy Print." This button runs a macro that selects the information on the tab, unprotects it, then sends a print job to the user's default printer that contains the unprotected content. Normally this works like a champ. This time around, however, the macro is throwing the following error: Cannot run the macro "FILENAME.xlsx'!MacroName'. The macro may not be available in this workbook or all macros may be disabled. As far as I can tell, the macros are still present within the workbook. This sheet is normally a .xlsm though the user saved it with a different filename as a .xlsx. Also, the macros appear only as MacroName in the .xlsm file and not "FILENAME.xlsx'!MacroName' as it does in the .xlsx. Finally, when I open the .xlsm it asks if I want to enable the macro content while the .xlsx does not prompt for this. Can anyone tell me what's going on with this sheet or know of a way that I can get the macros working in the .xlsx without having to start over with a different sheet?

    Read the article

  • Keyboard issue when using kitty+puttycyg but not when using putty or cygwin alone

    - by kamaradclimber
    I would like to use a single, unified way of working in a console on my Windows setup. Previously I used PuTTY for remote access to Linux servers and Cygwin to have Unix-like tools on Windows. Then I discovered kitty, which is a patched PuTTY, and added the puttycyg patch. It provides the same way to connect to remote and local consoles. However, there is a strange behavior when using vim on the local console (via the puttycyg patch): keys display A/B/C/D and replace the current character with that letter. In insert mode it replaces the character; in normal mode no modification is made to the document, even though the character is displayed as replaced. For instance, when I type "fixed bug with product deleted" I get "fixed bbug wiwith prprodudueleteted". I have read a lot of questions about this type of issue and googled it, but there is no answer that works for me. The issue is present only for the kitty + puttycyg setup: Cygwin alone works perfectly (and PuTTY alone works fine for access to Linux servers). Any help would be appreciated!
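
    Purely a hedged guess, since this symptom (if the A/B/C/D come from the arrow keys) is classically vim falling back to vi-compatible mode or a terminal type whose escape sequences don't match what the terminal sends, rather than a confirmed kitty/puttycyg bug - two things commonly suggested to try:

        " in ~/.vimrc inside the Cygwin environment
        set nocompatible

        # in the shell started through kitty+puttycyg, before launching vim
        export TERM=xterm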

    Read the article

  • How to test server throughput

    - by embwbam
    I've always used apache benchmark to try to get a rough idea of how many requests/second my server can handle. I read that it was good, and it seemed to work well. Enter node.js, which is fully event-based, so it never blocks. If I run apache benchmark on a simple hello world server it can handle 2500 requests per second or so. However, if I put a timeout in the hello world function, so that it responds after 2 seconds, apache benchmark reports a dramatically reduced throughput: about 50/s. I'm running 100 concurrent connections with ab. If I increase the concurrency, it goes up. This makes sense, because apache benchmark is basically sending out requests in batches of 100, which come back every 2 seconds. 100 requests / 2 seconds = 50 requests / second If I increase the concurrency to about 400 or 500, it starts to crash. I don't think I've hit node.js's limit, I think I'm hitting a wall in my operating system on the number of open file descriptors or sockets or something. Any way I can get a good guess about how many requests my server can handle? I want to make sure the test computer isn't the one causing the problem.
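
    For reference, a minimal sketch of the kind of delayed hello-world server described above, plus the usual knob for the per-process open-file limit that high-concurrency ab runs tend to hit on Linux (the 2-second delay and the numbers are illustrative, not tuning advice):

        // delayed-hello.js -- minimal sketch of the server described above
        var http = require('http');
        http.createServer(function (req, res) {
          setTimeout(function () {                 // simulate 2s of "work" without blocking
            res.writeHead(200, {'Content-Type': 'text/plain'});
            res.end('hello world\n');
          }, 2000);
        }).listen(8000);

        # raise the open-file limit in the shell running node and/or ab, then re-test
        ulimit -n 8192
        ab -n 2000 -c 500 http://localhost:8000/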

    Read the article

  • Are there any open source reseller packages?

    - by Tom Wright
    My department has just been given the right/responsibility to manage our own VPS, the idea being that there will be less bureaucracy for the many small web projects we run. Since each project will be managed by a different team, I was planning to approach this with a shared hosting model. Are there any free pieces of software that would help automate the provisioning of resources each time a team requests a new project? Most of the projects have identical requirements - basically LAMP - so it is these resources that I would want provisioned (and de-provisioned, if that is a word) automatically. Ideally, there would also be a way to hook it into our LDAP authentication backend, though I could probably make this sort of modification myself if necessary. Since we won't be charging our "clients", however, we won't need the ability to generate invoices, handle payments, etc.

    EDIT: Sample workflow (see the sketch below):

        1. Login authenticated against LDAP
        2. Username checked against admin group (not on central LDAP)
        3. Click 'new project' and enter project name
        4. User created on VPS with project name as username
        5. Apache virtual host created and subdomain (using project name) allocated
        6. FTP & MySQL users created
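
    If nothing off the shelf fits, the provisioning step itself is small enough to script. A hedged sketch of the workflow above - the domain, paths and passwords are placeholders, the LDAP check is left out, and it assumes Debian/Ubuntu-style Apache tooling:

        #!/bin/bash
        # provision.sh <project>  -- illustrative only
        set -e
        PROJECT="$1"
        DOMAIN="example.org"   # assumed base domain

        useradd -m "$PROJECT"                         # system/FTP user named after the project
        mkdir -p "/var/www/$PROJECT/public_html"
        chown -R "$PROJECT:$PROJECT" "/var/www/$PROJECT"

        printf '<VirtualHost *:80>\n  ServerName %s.%s\n  DocumentRoot /var/www/%s/public_html\n</VirtualHost>\n' \
            "$PROJECT" "$DOMAIN" "$PROJECT" > "/etc/apache2/sites-available/$PROJECT"
        a2ensite "$PROJECT" && apache2ctl graceful

        mysql -e "CREATE DATABASE \`$PROJECT\`; CREATE USER '$PROJECT'@'localhost' IDENTIFIED BY 'changeme'; GRANT ALL ON \`$PROJECT\`.* TO '$PROJECT'@'localhost';"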

    Read the article

  • LogMeIn style remote access to NAS drive

    - by Mere Development
    I've been asked to set up some remote access to a NAS drive. The NAS drive will sit on a VLAN inside a network that uses a Cisco 891 IS router as its gateway. The charity has no SSL-VPN licenses for the Cisco. At present there are no open ports or services on the Cisco itself, and ideally we would like to keep it that way for a while, hence the request for a LogMeIn-style service that's initiated from inside. We need multiple-user access, about 10 max. Using LogMeIn on a machine connected to the NAS would only provide screen sharing, I believe, and no concurrent connections (I could be wrong?). The end users need to be able to read and write files on the NAS from Macs and PCs around the globe. Read-only access from mobile devices would be a bonus but not absolutely necessary. This is for a charity, non-commercial, but they are willing to spend if necessary. Cisco config knowledge is at a minimum, so if I can avoid upsetting that delicate device I'll be happy :) Anyone have any clever ideas? I can provide more information on request. Thanks, Ben

    Read the article

  • How to move or delete files from a folder containing 2 million files on an NTFS drive?

    - by Beau
    The issue is that any modification to the directory locks up Explorer indefinitely, though Samba access to other directories still works. I've tried moving files locally and over Samba. Even enumerating the directory to get the list of files locks up the computer indefinitely. I tried using Python's win32file.FindFilesIterator to iterate over the files, but that also hangs. My idea was to move each file to a different directory (in a directory above the one we're dealing with) based on its timestamp, so that we'd have at most a thousand or so files in each directory... but since I can't even enumerate the files, that's been a non-starter. If I have to give up and just nuke the directory I'm willing to do that, but a standard delete also hangs indefinitely. I have set these two parameters to increase speed and they also did not help:

        R:\>fsutil behavior query disablelastaccess
        disablelastaccess = 1
        R:\>fsutil behavior query disable8dot3
        disable8dot3 = 1

    These are all sequentially named images, so they would have run into the known 8.3-filename issue whereby computing short names for many similarly named files in one directory can take a very long time. From what I understand this data is stored in the file system even after disable8dot3 is enabled, so it may still be contributing to the problem. Any ideas?
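
    Two approaches often used for directories this size, hedged since I haven't timed them against 2 million files: deleting from a plain cmd prompt (which avoids Explorer's enumeration entirely), or mirroring an empty directory over the target with robocopy:

        rem from an elevated cmd prompt, no Explorer involved
        del /f /q R:\hugefolder\*.*
        rmdir /s /q R:\hugefolder

        rem alternative: mirror an empty folder over it, then remove both
        mkdir C:\empty
        robocopy C:\empty R:\hugefolder /MIR /R:0 /W:0
        rmdir C:\empty
        rmdir R:\hugefolder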

    Read the article

  • Software/hardware to build video streaming server?

    - by Sasha Yanovets
    I am looking for a video streaming server solution, something like an online TV server, with the ability to make live broadcasts on the internet. What software could you recommend for that? What kind of hardware should it run on - is there anything special required? I am looking for a solution that could be scaled up to at least 1000 simultaneous users online with good video resolution. I think it is good to have a general answer on what direction to choose, but here are more details on my specific case: we are starting almost from scratch. We have some video content that we've produced, but it is not delivered over the internet yet. We are not tied to any particular vendor for now. We want to stream 24 hours a day: three 8-hour blocks, with the content changing every day. We want the ability to make regular live broadcasts. I guess we will need several streaming quality options (low ~56 kb/s, mid ~273 kb/s). Some terms are just foreign to me (like play-truncation rate), so if you could point out what parameters we should be aware of, that would be great. The uplink to the internet is still to be determined. We plan to start with something and scale up along the way. If you already have some kind of media streaming server, just describe its configuration here (hardware, OS, software) and the peak number of concurrent users it serves. I think that could help people approaching this task.

    Read the article

  • how to go about scaling a web-application ?

    - by phoenix24
    This is from someone who's been primarily a web-application developer and doesn't know much about scaling/scalability techniques. I'll start by stating that my application is written in Python, using Django - a fairly standard setup. I currently use Apache 2.2 as my web server and MySQL as my database server, both running on the same VPS. Up until now it was basically a prototype with merely 15-30 concurrent users at any given time, so I had no issues, but since we'll be adding more users we'll have severe performance issues. So my question is: how do I go about scaling my web application? My plan is as follows:

        1. Right now I have just one VPS running Apache + MySQL.
        2. Next, I plan to add another VPS to run only MySQL, so I'll have one web server and one DB server.
        3. Next, I'll add memcached on the web server for caching data, taking some load off MySQL (see the Django cache sketch below).
        4. Next, another web server for serving all the static content.
        5. Next, a VPS for load balancing (nginx/varnish), behind which would sit my two web servers and then the DB server.

    Does that sound like a workable strategy? Please guide me here.
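
    As a reference point for the memcached step: Django's cache layer only needs a settings entry plus, optionally, the per-site cache middleware. A sketch in Django 1.3+ style (older releases use the single CACHE_BACKEND string instead), assuming a memcached client library such as python-memcached is installed:

        # settings.py -- illustrative values only
        CACHES = {
            'default': {
                'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
                'LOCATION': '127.0.0.1:11211',
            }
        }

        # optional whole-page caching; Update* must come first, Fetch* last
        MIDDLEWARE_CLASSES = (
            'django.middleware.cache.UpdateCacheMiddleware',
            'django.middleware.common.CommonMiddleware',
            'django.middleware.cache.FetchFromCacheMiddleware',
        )
        CACHE_MIDDLEWARE_SECONDS = 60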

    Read the article

  • Recommendations for handling Directory Harvesting spam on Exchange 2003

    - by Aaron Alton
    Our Exchange server is getting slammed with anywhere between 450,000 and 700,000 spam messages per day. We receive about 1,700 legitimate messages in the same time frame. Roughly 75% of the spam is directory harvesting. We currently have GFI MailEssentials installed. To its credit, it's doing a very good job, but the sheer volume of spam that we're receiving, and the number of connections that our Exchange server is making, is preventing legitimate email from being delivered in a timely manner. GFI is set up to check for directory harvesting at the SMTP level, which I presume intercepts the mail before it hits the Exchange services or goes through SMSE. This "module" is ordered at the top of the list, so (hopefully) dealing with the harvesting is consuming a minimal amount of server resources and bandwidth. My question is: is there anything I can do to prevent our Exchange server's connection pool from being eaten up by these spam hosts? We had to limit the number of concurrent connections being made by Exchange, because it was consuming all of our bandwidth. Thanks, in advance.
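
    One measure commonly paired with recipient filtering on Windows Server 2003 is SMTP tar pitting, which delays the address-verification responses harvesters depend on; it only has an effect if recipient filtering is enabled, and the value below is a sketch to check against the relevant KB article for the installed service pack rather than a recommendation:

        Windows Registry Editor Version 5.00

        ; Sketch: delay SMTP responses to invalid-recipient probes by 5 seconds
        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SMTPSVC\Parameters]
        "TarpitTime"=dword:00000005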

    Read the article

  • Nginx + php-fpm - recv() error

    - by Ilya Biryukov
    I get the following error in the nginx log: [error] 17734#0: *6643 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: [cut], server: [cut], request: "GET /venues HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "[cut]". I have a dedicated box with 8 GB RAM and a quad-core chip - a good server. Nginx, php-fpm and MySQL are all the latest versions, running under Ubuntu 10.04. I only get this when I stress test the server with siege. If I increase the number of concurrent connections to 100, up to 20% of all requests fail. Furthermore, I don't get this on pages that have no MySQL queries, and only a few failures on pages with a moderate number of queries - but I'm not sure whether that has anything to do with it. I have a feeling this is something to do with PHP, but I can't figure it out. Any suggestions on where to even start looking? Update: the PHP error log is silent - no record of anything going wrong.
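
    "recv() failed (104: Connection reset by peer)" under load is often php-fpm closing the connection on nginx - a pool with no free children, a worker being recycled mid-request, or a request hitting a timeout - so one hedged place to look is the pool configuration and the php-fpm log. The values below are placeholders rather than tuning advice, and the paths vary by distribution:

        ; /etc/php5/fpm/pool.d/www.conf -- illustrative values only
        pm = dynamic
        pm.max_children = 50          ; concurrent PHP workers available to nginx
        pm.start_servers = 10
        pm.min_spare_servers = 5
        pm.max_spare_servers = 15
        pm.max_requests = 500         ; recycle workers to contain memory leaks
        request_terminate_timeout = 30s
        ; then watch the php-fpm log for "max_children reached" warnings while running siege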

    Read the article

  • Requests per second slower when using nginx for load balancing

    - by Ed Eliot
    I've set up nginx as a load balancer that reverse proxies requests to 2 Apache servers. I've benchmarked the setup with ab and am getting approx 35 requests per second, with requests distributed between the 2 backend servers (not using ip_hash). What is confusing me is that if I query either of the backend servers directly via ab, I get around 50 requests per second. I've experimented with a number of different values in ab, the most common being 1000 requests with 100 concurrent connections. Any idea why traffic distributed across 2 servers would result in fewer requests per second than hitting either directly? Additional info: I've experimented with worker_processes values of between 1 and 8, worker_connections between 1024 and 8092, and have also tried keepalive 0 and 65. My main conf currently looks like this:

        user www-data;
        worker_processes 1;

        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        worker_rlimit_nofile 8192;

        events {
            worker_connections 2048;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            sendfile on;
            keepalive_timeout 0;
            tcp_nodelay on;
            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    I've got one virtual host (in sites-available) that proxies everything under / to the 2 backends across a local network.

    Read the article

  • Road Warrior VPN Setup

    - by wobblycogs
    I apologise up front for the rather open-ended nature of this question, but I've got well out of my depth and could really do with some pointers. I need to set up a road warrior VPN solution which will allow our customers to securely access a number of services we provide for them. Customer machines will be running a variety of Windows versions from XP onwards, with a variety of patch levels. Typically they will connect from the clients' main offices, but not always. It is safe to assume that all clients will be behind NATs, but we may occasionally see a connection that isn't NAT'ed. The typical connection situation is therefore: Customer Laptop -- Router (NAT) -- Internet -- VPN Server + Firewall -- Server (Win 2008 R2, non-routable IP). There will initially be a dozen or so people who could connect, but that will grow quickly to around 100. It's unlikely that we'll see that many concurrent connections, though; I imagine our total VPN throughput would be <50 Mbps peak. What are my options for setting this up? I've been trying to set up a system like this using a MikroTik router for a few days but have struggled to get it working correctly, particularly with NAT'ed clients. I've had a quick look at OpenVPN and liked what I saw, but I think it's unlikely our customers' IT departments would allow the client to be installed. Finally, I've looked at the Cisco ASA range, but I'm on a fairly tight budget, so that is less preferable, though it looks like it would work pretty much out of the box. My fall-back position is to connect the server directly and use the provided VPN + firewall facilities, but that is far from ideal as the number of servers is likely to grow over time.

    Read the article
