Search Results

Search found 2788 results on 112 pages for 'ben martin'.

Page 28/112 | < Previous Page | 24 25 26 27 28 29 30 31 32 33 34 35  | Next Page >

  • Setting CATALINA_OPTS for Tomcat6 on Windows doesn't work

    - by Ben
    (I've copied this from Stack Overflow, after someone suggested I post the question here.) I'm trying to set up Tomcat6 to work with JMX on Windows Vista 64. To do that I need to pass the parameters below to Tomcat6.

    What I do in the command prompt (that doesn't work):

        set CATALINA_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9898 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
        tomcat6.exe

    What I do that does work (but causes other problems):

        java -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9898 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -jar bootstrap.jar

    It seems as if Tomcat is just ignoring the CATALINA_OPTS environment variable. Am I doing something wrong? I've also tried editing catalina.bat and defining CATALINA_OPTS there, with no success (tried adding the parameters to JAVA_OPTS too, no success either). Thanks in advance!
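    If tomcat6.exe here is the Windows service wrapper (procrun), it takes its JVM options from the service configuration rather than from CATALINA_OPTS; the variable is only read by catalina.bat/startup.bat. A minimal sketch of both routes follows; the service name "Tomcat6" is an assumption.

        rem Picked up by catalina.bat when launching from the scripts: %CATALINA_HOME%\bin\setenv.bat
        set "CATALINA_OPTS=-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9898 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"

        rem For the Windows service: update its JVM options via procrun (tomcat6w.exe exposes the same settings in a GUI)
        tomcat6.exe //US//Tomcat6 ++JvmOptions="-Dcom.sun.management.jmxremote;-Dcom.sun.management.jmxremote.port=9898;-Dcom.sun.management.jmxremote.ssl=false;-Dcom.sun.management.jmxremote.authenticate=false"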

    Read the article

  • PHP Configuration file won’t load IIS7 x64

    - by Martin Murphy
    Using FastCGI, I can't get PHP on IIS7 to read the php.ini file. See my phpinfo output below:

        System: Windows NT WIN-PAFTBLXQWYW 6.0 build 6001
        Build Date: Mar 5 2009 19:43:24
        Configure Command: cscript /nologo configure.js "--enable-snapshot-build" "--enable-debug-pack" "--with-snapshot-template=d:\php-sdk\snap_5_2\vc6\x86\template" "--with-php-build=d:\php-sdk\snap_5_2\vc6\x86\php_build" "--disable-zts" "--disable-isapi" "--disable-nsapi" "--with-pdo-oci=D:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8=D:\php-sdk\oracle\instantclient10\sdk,shared" "--enable-htscanner=shared"
        Server API: CGI/FastCGI
        Virtual Directory Support: disabled
        Configuration File (php.ini) Path: C:\Windows
        Loaded Configuration File: (none)
        Scan this dir for additional .ini files: (none)

    My php.ini file resides in both c:\php and c:\windows, and I've made sure it has read permissions in both places for Network Service. I've added various registry settings and environment variables and followed multiple tutorials found on the web, rebooting after each, and no dice as of yet. My original install was done with the MS "Web Platform Installer", but I have since rebooted. Any ideas on where to go from here would be most welcome.
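    One thing worth checking is whether php-cgi.exe is actually being handed a PHPRC that points at the intended php.ini. A sketch of the two usual ways to set it, assuming php-cgi.exe lives in C:\php (both the path and the choice of C:\php for PHPRC are assumptions):

        rem System-wide hint for where php.ini lives (new FastCGI processes pick it up after a recycle)
        setx PHPRC "C:\php" /M

        rem Or set it only for the FastCGI application registered in IIS7
        %windir%\system32\inetsrv\appcmd set config -section:system.webServer/fastCgi ^
            /+"[fullPath='C:\php\php-cgi.exe'].environmentVariables.[name='PHPRC',value='C:\php']"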

    Read the article

  • Apache2 cgi's crash on odbc db access (but run fine from shell)

    - by Martin
    Problem overview (details below): I'm having an apache2 + ruby integration problem when trying to connect to an ODBC data source. The main problem boils down to the fact that scripts that run fine from an interactive shell crash ruby on the database connect line when run as a cgi from apache2. Ruby cgi's that don't try to access the ODBC datasource work fine. And (again) ruby scripts that connect to a database with ODBC do fine when executed from the command line (or cron). This behavior is identical when I use perl instead of ruby. So, the issue seems to be with the environment provided for ruby (perl) by apache2, but I can't figure out what is wrong or what to do about it. Does anyone have any suggestions on how to get these cgi scripts to work properly? I've tried many different things to get this to work, and I'm happy to provide more detail of any aspect if that will help.

    Details:

        Mac OS X Server 10.5.8, Xserve 2 x 2.66 Dual-Core Intel Xeon (12 GB)
        Apache 2.2.13
        ruby 1.8.6 (2008-08-11 patchlevel 287) [universal-darwin9.0]
        ruby-odbc 0.9997, dbd-odbc (0.2.5), dbi (0.4.3), mod_ruby 1.3.0
        Perl 5.8.8, DBI 1.609, DBD::ODBC 1.23
        odbc driver: DataDirect SequeLink v5.5 (/Library/ODBC/SequeLink.bundle/Contents/MacOS/ivslk20.dylib)
        odbc datasource: FileMaker Server 10 (v10.0.2.206)

    1) A minimal version of a script (anonymized) that will crash in apache but run successfully from a shell:

        #!/usr/bin/ruby
        require 'cgi'
        require 'odbc'
        cgi = CGI.new("html3")
        aConnection = ODBC::connect('DBFile', "username", 'password')
        aQuery = aConnection.prepare("SELECT zzz_kP_ID FROM DBTable WHERE zzz_kP_ID = 81044")
        aQuery.execute
        aRecord = aQuery.fetch_hash.inspect
        aQuery.drop
        aConnection.disconnect
        # aRecord = '{"zzz_kP_ID"=>81044.0}'
        cgi.out{ cgi.html{ cgi.body{ "<pre>Primary Key: #{aRecord}</pre>" } } }

    2) Example of running this from a shell:

        gamma% ./minimal.rb
        (offline mode: enter name=value pairs on standard input)
        Content-Type: text/html
        Content-Length: 134

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><HTML><BODY><pre>Primary Key: {"zzz_kP_ID"=>81044.0}</pre></font></BODY></HTML>%
        gamma%

    3) Typical crash log lines:

        Dec 22 14:02:38 gamma ReportCrash[79237]: Formulating crash report for process perl[79236]
        Dec 22 14:02:38 gamma ReportCrash[79237]: Saved crashreport to /Library/Logs/CrashReporter/perl_2009-12-22-140237_HTCF.crash using uid: 0 gid: 0, euid: 0 egid: 0
        Dec 22 14:03:13 gamma ReportCrash[79256]: Formulating crash report for process perl[79253]
        Dec 22 14:03:13 gamma ReportCrash[79256]: Saved crashreport to /Library/Logs/CrashReporter/perl_2009-12-22-140311_HTCF.crash using uid: 0 gid: 0, euid: 0 egid: 0
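    Since the scripts work from a shell but crash under Apache, a likely difference is the environment (ODBC configuration paths and dynamic-library paths) that Apache gives the CGI. A sketch of exporting those in the Apache config, assuming mod_env is loaded and that the paths below match the actual SequeLink/ODBC install:

        # httpd.conf sketch: give CGIs the same ODBC environment the interactive shell has
        SetEnv ODBCINI /Library/ODBC/odbc.ini
        SetEnv ODBCSYSINI /Library/ODBC
        SetEnv DYLD_LIBRARY_PATH /Library/ODBC/SequeLink.bundle/Contents/MacOS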

    Read the article

  • Combine multiple unix commands into one output

    - by Ben McCormack
    I need to search our mail logs for a specific e-mail address. We keep a current file named maillog as well as a week's worth of .bz2 files in the same folder. Currently, I'm running the following commands to search for the address:

        grep [email protected] maillog
        bzgrep [email protected] *.bz2

    Is there a way to combine the grep and bzgrep commands into a single output? That way, I could pipe the combined results to a single e-mail or a single file.
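    One way to do this is to group the two commands so their combined output goes through a single pipe or redirection; a sketch, keeping the redacted address placeholder from the question (the mail recipient below is also a placeholder):

        { grep [email protected] maillog; bzgrep [email protected] *.bz2; } | mail -s "maillog search" admin@example.org
        # or collect both into one file instead
        { grep [email protected] maillog; bzgrep [email protected] *.bz2; } > results.txt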

    Read the article

  • Chunking large rsync transfers?

    - by Gabe Martin-Dempesy
    We use rsync to update a mirror of our primary file server to an off-site colocated backup server. One of the issues we currently have is that our file server has 1TB of mostly smaller files (in the 10-100kb range), and when we're transferring this much data, we often end up with the connection being dropped several hours into the transfer. Rsync doesn't have a resume/retry feature that simply reconnects to the server to pick up where it left off -- you need to go through the file comparison process again, which ends up being very lengthy with the number of files we have. The recommended way to get around this is to split the large rsync transfer into a series of smaller transfers. I've figured the best way to do this is by first letter of the top-level directory names, which doesn't give us a perfectly even distribution, but is good enough. I'd like to confirm whether my methodology for doing this is sane, or if there's a simpler way to accomplish the goal.

    To do this, I iterate through A-Z, a-z, 0-9 to pick a one-character $prefix. Initially I was thinking of just running:

        rsync -av --delete --delete-excluded --exclude "*.mp3" "src/$prefix*" dest/

    (--exclude "*.mp3" is just an example, as we have a more lengthy exclude list for removing things like temporary files.) The problem with this is that any top-level directories in dest/ that are no longer present on src will not get picked up by --delete. To get around this, I'm instead trying the following:

        rsync \
            --filter 'S /$prefix*' \
            --filter 'R /$prefix*' \
            --filter 'H /*' \
            --filter 'P /*' \
            -av --delete --delete-excluded --exclude "*.mp3" src/ dest/

    I'm using show and hide over include and exclude, because otherwise --delete-excluded will delete anything that doesn't match $prefix. Is this the most effective way of splitting the rsync into smaller chunks? Is there a more effective tool, or a flag that I've missed, that might make this simpler?
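    Wrapped in a loop over the prefixes, the filter-based variant from the question might look like the following sketch (S/R show and allow deletion of entries matching the prefix, H/P hide and protect everything else):

        #!/bin/bash
        for prefix in {A..Z} {a..z} {0..9}; do
            rsync \
                --filter "S /${prefix}*" \
                --filter "R /${prefix}*" \
                --filter 'H /*' \
                --filter 'P /*' \
                -av --delete --delete-excluded --exclude '*.mp3' src/ dest/
        done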

    Read the article

  • Task Sequence boots to logon screen instead of task sequence mode

    - by Ben M.
    I'm running a task sequence, and so that users don't accidentally interfere, I have the task sequence reboot to the currently installed operating system, which, as I understand it, is supposed to boot into a sort of single-user mode where all that shows is the task sequence progress. However, this does not happen: it boots up like normal, comes to the logon screen, and the task sequence runs in the background. How can I adjust this behavior to get the desired result?

    Read the article

  • Do I need a dedicated server for load balancing?

    - by Ben
    I'm completely new to the concept of load balancing, so I hope this isn't a stupid question; I've been searching around and I'm having a hard time understanding this. To my understanding, in order to load balance I need a separate machine with an IP address I can direct all traffic to. I initially thought I needed to rent three dedicated servers: one for load balancing and the other two as backend servers. Would a dedicated server be too much for a load balancer, or do hosting companies have special types of machines for that purpose? Then I read somewhere else that I can install load-balancing software on both of the two servers and configure it in a way that doesn't require me to rent another machine/dedicated server for load balancing. So I'm a bit confused about how to actually implement a load balancer, and whether or not I need a dedicated server for the sole purpose of acting as a load-balancing machine. Also, I was recommended HAProxy, so I'll be heading in that direction for load balancing.
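    For scale, HAProxy itself is light; it can run on a small VPS or even on one of the two web servers rather than needing a third full-sized machine (though a separate box avoids a single web server being both balancer and backend). A minimal sketch of an haproxy.cfg, with hypothetical backend addresses:

        defaults
            mode http
            timeout connect 5s
            timeout client  30s
            timeout server  30s

        frontend http-in
            bind *:80
            default_backend web

        backend web
            balance roundrobin
            server web1 10.0.0.11:80 check
            server web2 10.0.0.12:80 check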

    Read the article

  • PostgreSQL timezone does not match system timezone

    - by Martin C.
    I have several PostgreSQL 9.2 installations where the timezone used by PostgreSQL is GMT, despite the entire system being "Europe/Vienna". I double-checked that postgresql.conf does not contain a timezone setting, so according to the documentation it should fall back to the system's timezone. However:

        # su -s /bin/bash postgres -c "psql mydb"
        mydb=# show timezone;
         TimeZone
        ----------
         GMT
        (1 row)

        mydb=# select now();
                      now
        -------------------------------
         2013-11-12 08:14:21.697622+00
        (1 row)

    Any hints where the GMT timezone could come from? The system user does not have TZ set, and /etc/timezone and /etc/timeinfo seem to be configured correctly.

        # cat /etc/timezone
        Europe/Vienna
        # date
        Tue Nov 12 09:15:42 CET 2013

    Any hints are appreciated, thanks in advance!
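    If nothing else sets it, the server ends up on GMT; setting the zone explicitly sidesteps the question of where the fallback comes from. A sketch, assuming the cluster's stock postgresql.conf location:

        # postgresql.conf
        timezone = 'Europe/Vienna'
        log_timezone = 'Europe/Vienna'

    Then reload (pg_ctl reload, or SELECT pg_reload_conf(); as superuser) and re-check with SHOW timezone; in a new session.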

    Read the article

  • SharePoint 2010 Search Error 0x800703fa

    - by Ben
    We have migrated from SharePoint 2007 to 2010. Everything appears to be working correctly except for an intermittent error with search. Occasionally search results will crash for all of our sites, and when we look up the correlation ID we get the following error:

        Exception when fetching results: System.ServiceModel.FaultException`1[System.ServiceModel.ExceptionDetail]:
        Illegal operation attempted on a registry key that has been marked for deletion. (Exception from HRESULT: 0x800703FA)
        (Fault Detail is equal to An ExceptionDetail, likely created by IncludeExceptionDetailInFaults=true, whose value is:
        System.Runtime.InteropServices.COMException: Illegal operation attempted on a registry key that has been marked for deletion. (Exception from HRESULT: 0x800703FA)
           at System.Runtime.InteropServices.Marshal.ThrowExceptionForHRInternal(Int32 errorCode, IntPtr errorInfo)
           at Microsoft.Office.Server.Search.Query.KeywordQueryInternal.Execute()
           at Microsoft.Office.Server.Search.Query.QueryInternal.Execute(QueryProperties properties)
           at Microsoft.Office.Server.Search.Administration.SearchServiceApplication.Execute(QueryProperties properties)
           at SyncInvokeExecute(Object , Object[] , Object[] )
           at System.ServiceModel.Dispatcher.SyncMethodInvoker.Invoke(Object instance, Object[] inputs, Object[]& outputs)
           at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc& rpc)
           at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage5(MessageRpc& rpc)

    We reset IIS and the problem resolves itself for a while. Has anyone come across a permanent fix for this?

    Read the article

  • New 3TB HDD, can see full 2.7TB in Linux and Windows, but shows up as 801.6GB in BIOS

    - by Ben Lee
    I recently purchased a Seagate Barracuda 3TB drive (ST3000DM001). After installing it, my BIOS recognized it but reported the size as 801.6GB. I went ahead and booted into Linux anyway (Ubuntu 11.10 64-bit), which saw it as 2.7TB. Following some online instructions (don't have the link handy, unfortunately), it looked like converting the drive to GPT was recommended, so I used gparted to do that and then formatted it to NTFS, also with gparted. (I'm using NTFS because my machine is dual-boot and I want access to the drive in Windows too.) I rebooted into Windows (Windows 7 64-bit), and Windows also sees the drive with 2.7TB free. Everything seems to be working fine. The only issue is that my BIOS is still reporting the drive as 801.6GB. My motherboard is an ASRock 770 Extreme3 and the BIOS is the latest version. Since everything seems to be working with the new drive anyway, I'm hoping that the BIOS reporting the wrong size is not an actual problem. But honestly, I don't really know. Is anyone out there more familiar with this, and could it potentially cause any problems in the future? Is there any way to get the BIOS to report the correct size?
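    The 801.6GB figure is roughly what is left after the capacity wraps past the 2.2TB (32-bit LBA) boundary, which suggests the BIOS display is the limitation rather than the drive or the GPT setup. A quick sanity check from Linux that the OS and partition table see the full size, with the device name as an assumption:

        # confirm what the kernel and partition table report, independent of the BIOS figure
        sudo parted /dev/sdb print          # should show roughly 3001GB and "Partition Table: gpt"
        sudo blockdev --getsize64 /dev/sdb  # raw size in bytes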

    Read the article

  • Endian Destination NAT

    - by Ben Swinburne
    I have installed Endian Community Firewall 2.3 and am clearly misunderstanding/doing something wrong with it. I'm trying to create some destination NAT rules to allow incoming connections to various services within the network.

        Router - RED I/F - x.x.x.x
        Router - GREEN I/F - 192.168.11.253
        ECF - RED I/F - 192.168.11.254/24
        ECF - GREEN I/F - 192.168.12.254/24
        Target server - 192.168.12.1

    Please ignore the haphazard choice of subnets and addresses; I'm trying to quickly drop Endian into an existing network before a complete rework in 6-12 months. Everything works except destination NAT, so outgoing connections are fine, the routes between the two subnets are OK, etc. I want to create various incoming NATs, but let's take, for the sake of argument, SMTP port 25 from the Internet to the target server 192.168.12.1. I've tried almost every combination of options in the Destination NAT section to achieve this and clearly am doing something wrong. I suspect my confusion must be somewhere in the Access From and/or Target section. The rest seems OK:

        Filter Policy = Allow
        Service = SMTP
        Protocol = TCP
        Port = 25
        Translate to type = IP
        DNAT Policy = NAT
        Insert IP = 192.168.12.1
        Port Range = 25
        Enabled = Checked
        Position = First

    I can't work out what I'm doing wrong, or am I doing it right and it's just not working!? Any help would be greatly appreciated.
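    Expressed as the raw netfilter rules the Endian rule should boil down to, the intent is roughly the sketch below (the interface name is hypothetical); comparing this against the output of iptables -t nat -L -n -v on the box shows whether the rule is being generated at all and whether its packet counters ever move:

        # DNAT incoming SMTP arriving on the RED interface to the internal server
        iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 25 -j DNAT --to-destination 192.168.12.1:25
        # and allow the forwarded traffic through
        iptables -A FORWARD -p tcp -d 192.168.12.1 --dport 25 -j ACCEPT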

    Read the article

  • MacBookPro running Windows 7: accidental trackpad input is driving me mad

    - by Ben Hammond
    I am running Windows 7 Professional 64-bit on a 2010 MacBook Pro using Boot Camp 3.1, with an external trackball. When I am typing, I keep accidentally brushing the trackpad and overtyping randomly selected pieces of text, which is driving me mad. I have tried to install TrackPad++, but I could not get the TrackPad++ control panel to recognise that the driver software was installed. I tried TrackPad Magic, but although it gives me a system tray icon telling me it is working, it does not appear to disable the trackpad. A quick Google implies that there should be an option in the Boot Camp Control Panel, "Ignore accidental input while typing", but I can't see one in my Boot Camp Control Panel. Am I looking in the wrong place? Is this feature 32-bit only? Is there anything else I should try?

    Read the article

  • How can Bonjour be setup to function over a VPN connection using Mac OS X — Mountain Lion Server?

    - by Ben Coppock
    I purchased Mountain Lion Server for our office thinking that Bonjour would automatically enable any computers connected via VPN to see all computers and applications (such as Bento) running on the office network. The hope was that those of us working at home would feel just like we were in the office, with all network services working transparently over the VPN connection. However, I see that Bonjour (aka mDNS) is not enabled to work over the VPN by default. Can I configure Mountain Lion Server to automatically pass Bonjour traffic over the VPN? Is there any reason not to do this?

    Read the article

  • freebsd-update reports an upgraded jail as not upgraded

    - by Martin Torhage
    I've set up a "Service Jail" in FreeBSD 8.0 according to the FreeBSD Handbook (http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/jails-application.html). After upgrading the host to the latest patch level and then performing a jail upgrade, freebsd-update fetch still reports that there are files in need of an update in the jail. Is this expected? If so, how do I know whether a jail is up to date?

    This is what I've done in more detail. After the initial setup of the jail, freebsd-update fetch reported that there were no updates available, in either the host system or the jail. This was expected. A while later, freebsd-update fetch reported that the following files were in need of an update, both on the host and in the jail:

        /usr/lib/libssl.a
        /usr/lib/libssl_p.a
        /usr/lib/libzpool.a
        /usr/lib32/libssl.a
        /usr/lib32/libssl_p.a
        /usr/lib32/libzpool.a

    I updated the host and followed the upgrade guide for the jail (http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/jails-application.html#JAILS-SERVICE-JAILS-UPGRADING). freebsd-update fetch now reports that there are no updates available on the host, but the following is the output of freebsd-update fetch in the jail:

        [root@bb /]# freebsd-update fetch
        Looking up update.FreeBSD.org mirrors... 3 mirrors found.
        Fetching metadata signature for 8.0-RELEASE from update5.FreeBSD.org... done.
        Fetching metadata index... done.
        Inspecting system... done.
        Preparing to download files... done.

        The following files are affected by updates, but no changes have
        been downloaded because the files have been modified locally:
        /var/db/mergemaster.mtree

        The following files will be updated as part of updating to 8.0-RELEASE-p2:
        /usr/lib/libssl.a
        /usr/lib/libssl_p.a
        /usr/lib/libzpool.a
        /usr/lib32/libssl.a
        /usr/lib32/libssl_p.a
        /usr/lib32/libzpool.a

    Shouldn't freebsd-update know that the jail is up to date, or have I failed to upgrade it? How am I supposed to know if a jail is up to date if freebsd-update can't tell? I'm sure I ran make cleandir twice before make buildworld. TIA
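    Since the jail shares the host's kernel but has its own userland, freebsd-update has to be pointed at the jail's root explicitly; running it from the host with -b is one way to do (and verify) that. A sketch, with the jail path as an assumption:

        # on the host: fetch and install userland updates into the jail's root
        freebsd-update -b /usr/jails/bb fetch install
        # afterwards, re-run "freebsd-update fetch" inside the jail to confirm nothing is still pending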

    Read the article

  • High load average, low CPU and IO (Centos 5.7)

    - by Ben
    A Drupal 7 site with CiviCRM, after running smoothly for a year on a 1&1 VPS, suddenly became unresponsive. Pages now eventually load, but can take more than a minute. Looking at resource use in Virtuozzo, the load average carries a warning and has remained above 1. While I understand this isn't particularly high, it is a change from when the site was working. Here is a typical snapshot of top:

        top - 03:10:32 up 3:21, 1 user, load average: 1.16, 1.22, 1.30
        Tasks: 43 total, 1 running, 42 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.1%us, 0.1%sy, 0.1%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:  2097152k total, 1015112k used, 1082040k free, 0k buffers
        Swap: 0k total, 0k used, 0k free, 0k cached

    The CPU idle level never seems to go much below 70%, wa is virtually always at 0, and there appears to be plenty of free memory. Here is some vmstat output, again showing wa at 0, plenty of free memory, and an idle CPU:

        procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
         r  b   swpd    free  buff  cache   si   so    bi    bo   in     cs us sy id wa st
         2  0      0 1100872     0      0    0    0  2783 23672    0    538  1  0  99 0  0
         1  0      0 1100872     0      0    0    0     0    16    0 101754  0  0 100 0  0
         0  0      0 1100872     0      0    0    0     0    17    0 103133  0  0 100 0  0
         0  0      0 1100872     0      0    0    0     0     1    0 102080  0  0 100 0  0
         1  0      0 1100872     0      0    0    0     0     6    0  99881  0  0 100 0  0
         0  0      0 1100872     0      0    0    0     0     1    0 105187  0  0 100 0  0

    I've spoken to 1&1 but they don't have any ideas as to what could be causing the high load average; instead they suggested an upgrade :) I've looked for processes that might be causing this, examined MySQL's SHOW PROCESSLIST, and restarted the container with no result. Does anyone have more troubleshooting suggestions or insights?
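    On Linux the load average counts tasks in uninterruptible sleep (state D) as well as runnable ones, so a load above 1 with an idle CPU and zero iowait often points at something stuck waiting in the kernel rather than at CPU pressure. A couple of hedged checks:

        # tasks in uninterruptible sleep count toward load but use no CPU
        ps -eo state,pid,cmd | awk '$1 == "D"'
        # watch the run-queue (r) and blocked (b) columns over time
        vmstat 5 5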

    Read the article

  • Migrating Email to new hosting.

    - by Ben C
    I've made a site for a charity, and now have to move hosting for them. They have 5 or so email addresses on their current hosting account, which will of course need to move too. What's the best way to migrate their email addresses to the new server without too much hassle for them? They use POP3, so should I just create the account on the new server and then get them to update their settings? That won't remove their old emails from Outlook Express, will it?

    Read the article

  • Can't connect to local MySQL server through socket

    - by Martin
    I was trying to tune the performance of a running MySQL server by running this command:

        mysqld_safe --key_buffer_size=64M --table_cache=256 --sort_buffer_size=4M --read_buffer_size=1M &

    After this I'm unable to connect to MySQL from the server where MySQL is running. I get this error:

        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)

    Luckily, I can still connect to MySQL remotely, so all my webservers still have access to it and are running without any problems. Because of this, though, I don't want to try restarting the MySQL server, since that will probably mess everything up. Now, I know that mysqld_safe starts the MySQL server, and since the server was already running I guess it's some kind of problem with two MySQL servers running and listening on the same port. Is there some way to solve this problem without restarting the initial MySQL server?

    UPDATE: This is what ps xa | grep "mysql" says:

        11672 ?        S      0:00 /bin/sh /usr/bin/mysqld_safe
        11780 ?        Sl   175:04 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306
        11781 ?        S      0:00 logger -t mysqld -p daemon.error
        12432 pts/0    R+     0:00 grep mysql
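    Until the socket question is sorted out, the local client can be pointed at TCP instead of the unix socket (the mysql client only uses the socket when the host is literally "localhost"); a sketch:

        # force a TCP connection to the already-running server
        mysql -h 127.0.0.1 -P 3306 -u root -p
        # or tell the client exactly which socket file to use
        mysql --socket=/var/run/mysqld/mysqld.sock -u root -p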

    Read the article

  • Any ideas out there as to how the data can be recovered from an SSD?

    - by ben
    A friend had some form of catastrophic failure on an HP Mini 1000, leaving it unbootable. Of course there was data that wasn't backed up. I've removed the SSD and hooked it up to a ZIF 40 enclosure, but cannot seem to get the drive to be recognized in Windows 7. In Disk Management it displays as present but uninitialized, and attempting to initialize it produces a Virtual Disk Manager error: "The device is not ready". There is scant information on MIE (the custom OS), so I'm not even sure what kind of filesystem I'm dealing with. In any case, if the filesystem is indeed some flavor other than FAT or NTFS, is this error consistent with that? Are there any creative ideas out there as to how the data can be recovered?

    Update: Thanks for all the suggestions! I hadn't even considered running a live CD. Unfortunately, no luck with Ubuntu (live CD) or explore2fs. The ZIF connection seems OK (color-coded green LED for a proper connection, orange for not). The drive can't be initialized and therefore can't be formatted, so I guess there may be some real damage. It probably needs to go to a specialist. Thanks again for the feedback, much appreciated.
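    Whatever the filesystem turns out to be, it is safer to image the drive once and work on the copy; GNU ddrescue keeps retrying around bad areas and records progress in a map file. A sketch, with the device name as an assumption (check dmesg or Disk Utility for the real one):

        # image the raw device before any further poking
        sudo ddrescue -d -r3 /dev/sdb ssd.img ssd.map
        # then try identifying the contents of the image
        file ssd.img
        # testdisk/photorec can also be pointed at the image if the partition table is gone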

    Read the article

  • Windows somehow got convinced that a file is a folder. How do I change it back?

    - by Ben Collins
    I recently moved an external drive from my usb-enabled router to my desktop machine, and ran into some permissions-related issues. A number of files were giving me errors when I tried to take ownership or set permissions, and in all my fiddling on a particular file, it somehow got switched to a folder. Anyone have any idea how this might have happened, and how to flip it back? Here's a screenshot:

    Read the article

  • TicTac Photo and Windows 7

    - by Ben
    Hello, my wife has been creating a TicTac Photo album. I had to upgrade to Windows 7 as I'd had enough of Vista, so I backed up the TicTac Photo file and the photos to an external hard disk and performed a fresh install of Win7. Now here is the problem: TicTac Photo says it can't find the photos in the album. The locations were as follows:

        Vista: C:\Users\Kelly\Pictures
        Win 7: C:\Users\Kelly\My Pictures

    When I try to create a Pictures folder under Kelly, it pops up a message about merging the two folders and simply moves the pictures to the My Pictures folder. Does anyone know a way to make a folder called Pictures so I can eliminate the file path problem and then try again with TicTac Photo support to get them to fix my file? My wife is going to kill me, as it's our wedding album and she has spent upwards of 30 hours designing it, and me upgrading to Win7 means it's all my fault. She does not understand file paths etc. I'm going to try to open the album file in a text editor and see if I can see anything, but thought I would ask here as well. Any help appreciated.
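    In Windows 7 "My Pictures" in Explorer is normally just a display name for an on-disk folder actually called Pictures, so it is worth checking the real names first (dir C:\Users\Kelly from a command prompt). If the album file really does point at a path that no longer exists, a junction can make the old path resolve without moving the photos; a sketch from an elevated command prompt, with the paths as assumptions:

        rem make the path stored in the album resolve to wherever the photos actually live now
        mklink /J "C:\Users\Kelly\Pictures" "C:\Users\Kelly\My Pictures"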

    Read the article

  • Diagnosing laptop charging issues

    - by Ben
    My HP dv6000 laptop will not run on battery power. It will only run while plugged into the wall. I have already replaced the battery, but it still will not operate on battery power. The problem began slowly. The original battery's life got shorter and shorter, until eventually it would only last ~45 seconds before going into hibernation. Finally, the battery would not work at all, so I threw it out and just used it without a battery. I bought a replacement recently.... but the problem persists. When I connect the new battery to the system, the "charging" light turns on for about 30 seconds, then goes dark. I'm assuming there's something wrong with the internal charging circuitry? If this is the case, how would I go about fixing this, short of buying a new motherboard? If anyone could give me some assistance in confirming/rejecting my diagnosis I'd be very appreciative! :)

    Read the article

  • Need to increase nginx throughput to an upstream unix socket -- linux kernel tuning?

    - by Ben Lee
    I am running an nginx server that acts as a proxy to an upstream unix socket, like this:

        upstream app_server {
            server unix:/tmp/app.sock fail_timeout=0;
        }

        server {
            listen ###.###.###.###;
            server_name whatever.server;
            root /web/root;
            try_files $uri @app;

            location @app {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_pass http://app_server;
            }
        }

    Some app server processes, in turn, pull requests off /tmp/app.sock as they become available. The particular app server in use here is Unicorn, but I don't think that's relevant to this question.

    The issue is, it just seems that past a certain amount of load, nginx can't get requests through the socket at a fast enough rate. It doesn't matter how many app server processes I set up, and it doesn't even matter what the app is (I tried it with a dummy app with just a single endpoint that returned an empty page with status 404). The bottleneck seems to be the socket, not the app. I'm getting a flood of these messages in the nginx error log:

        connect() to unix:/tmp/app.sock failed (11: Resource temporarily unavailable) while connecting to upstream

    Many requests result in status code 502, and those that don't take a long time to complete. The nginx write queue stat hovers around 1000. Anyway, I feel like I'm missing something obvious here, because this particular configuration of nginx and app server is pretty common, especially with Unicorn (it's the recommended method in fact). Are there any Linux kernel options that need to be set, or something in nginx? Any ideas about how to increase the throughput to the upstream socket? Something that I'm clearly doing wrong?

    Additional information on the environment:

        $ uname -a
        Linux app1 3.2.0-24-generic #39-Ubuntu SMP Mon May 21 16:52:17 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
        $ ruby -v
        ruby 1.9.3p194 (2012-04-20 revision 35410) [x86_64-linux]
        $ unicorn -v
        unicorn v4.3.1
        $ nginx -V
        nginx version: nginx/1.2.1
        built by gcc 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)
        TLS SNI support enabled

    Current kernel tweaks:

        net.core.rmem_default = 65536
        net.core.wmem_default = 65536
        net.core.rmem_max = 16777216
        net.core.wmem_max = 16777216
        net.ipv4.tcp_rmem = 4096 87380 16777216
        net.ipv4.tcp_wmem = 4096 65536 16777216
        net.ipv4.tcp_mem = 16777216 16777216 16777216
        net.ipv4.tcp_window_scaling = 1
        net.ipv4.route.flush = 1
        net.ipv4.tcp_no_metrics_save = 1
        net.ipv4.tcp_moderate_rcvbuf = 1
        net.core.somaxconn = 8192
        net.netfilter.nf_conntrack_max = 131072
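    With somaxconn already raised, the other place the backlog of a unix socket gets capped is in the listener itself: unicorn's default listen backlog is 1024, and the "Resource temporarily unavailable" connect errors are what nginx sees when that backlog is full. Raising it (and keeping enough workers to drain it) is a common tweak for exactly this symptom; a sketch for the unicorn config, as an assumption about the setup:

        # config/unicorn.rb (sketch)
        listen "/tmp/app.sock", :backlog => 2048
        worker_processes 8   # sized to the app's CPU/IO profile, not a recommendation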

    Read the article

  • Launch an arbitrary application with a specific icon

    - by Camilo Martin
    I'm thinking about customizing my Windows 7's application icons, so that they all follow a specific theme (Token-like icons). This would involve using Resource Hacker (or similar) to patch each .exe. Not only this would be tiresome, but also it would require doing it again at each update of each application (nevermind that some could break just because it was tampered with). Instead, is there any way to launch an application with a specific icon? Ideally it would be something from the command-line (so I can make a shortcut), like this: launchwithicon.exe --app C:\myapp.exe --icon C:\myicon.ico Note that while it is possible to do something similar by setting the taskbar to "always combine, hide labels", I do not like this approach and instead am looking for something that works without combining the taskbar icons.
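    Short of patching each .exe, a plain shortcut can carry its own icon, and launching through the shortcut shows that icon for the launcher itself (the running window and grouped taskbar entry still use the exe's embedded icon, so this only partly covers the goal). A sketch that creates such a shortcut from PowerShell, with hypothetical paths:

        $shell = New-Object -ComObject WScript.Shell
        $lnk = $shell.CreateShortcut("$env:USERPROFILE\Desktop\myapp.lnk")
        $lnk.TargetPath   = "C:\myapp.exe"
        $lnk.IconLocation = "C:\myicon.ico,0"
        $lnk.Save()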

    Read the article

  • Can't Install Windows 2008 R2 on Lenovo Laptop With SSD

    - by Ben
    I am trying to install Windows Server 2008 R2 on a new Lenovo X201 or T410 laptop. Setup halts with the following pop-up: A required CD/DVD device driver is missing. If you have a driver floppy disk, CD, DVD, or USB flash drive, please insert it now. Note: If the windows installation media is in the CD/DVD drive you can safely remove it for this step. The CD drive is obviously working, as it's booting from CD to get to this point. The only thing I can think is that it is to do with the fact they have SSD disks in - but that's just a guess. (Edit - One extra thing that may or may not be relevant: it's the 64 bit version of Server 2008 R2)

    Read the article
