Search Results

Search found 28784 results on 1152 pages for 'start'.

  • Passenger error: No such file or directory - config/environment.rb

    - by JJD
    I installed Redmine on Mac OS X Server 10.6.8 according to this installation description. So far everything works fine: when I start WEBrick, the server serves the Redmine pages. The gems and Redmine are installed under the user "redmine". After that I aimed to configure apache2 with Passenger as described here. As suggested by the description, I also installed the Passenger Pane, which stores its virtual host configuration files in /private/etc/apache2/passenger_pane_vhosts. This is what I came up with after a lot of manual trial and error. At least now I can reach a Passenger error page.

        # redmine.vhost.conf
        <VirtualHost *:80>
            ServerName host
            ServerAlias localhost
            DocumentRoot "/Users/redmine/Sites/redmine"
            # RackEnv production
            # RackBaseURI /
            RailsEnv production
            RailsBaseURI /
            # PassengerUser www-data
            # PassengerGroup www-data
            <Directory "/Users/redmine/Sites/redmine">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    However, the Passenger module still runs into the following error:

        Error message: No such file or directory - config/environment.rb

    The /var/log/apache2/error_log of the web server states the following:

        [warn] NameVirtualHost *:80 has no VirtualHosts
        [notice] Apache/2.2.21 (Unix) Phusion_Passenger/3.0.12 configured -- resuming normal operations
        [ pid=21824 thr=2151905620 file=utils.rb:176 time=2012-06-01 18:22:07.126 ]: *** Exception Errno::ENOENT in PhusionPassenger::ClassicRails::ApplicationSpawner (No such file or directory - config/environment.rb) (process 21824, thread #<Thread:0x0000010086f2a8>)

    I experimented with the user-switching functionality of Passenger as described in the documentation - as you can tell from my configuration file. Though, I was not successful.
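
    For what it's worth, Passenger resolves config/environment.rb relative to the directory above DocumentRoot, and for a classic Rails app it expects DocumentRoot to point at the app's public/ folder. A quick sanity check along those lines - the paths below assume the app root really is /Users/redmine/Sites/redmine:

        # Does the file Passenger is complaining about actually exist?
        ls -l /Users/redmine/Sites/redmine/config/environment.rb
        # Passenger expects DocumentRoot to be the app's public/ subdirectory:
        #   DocumentRoot "/Users/redmine/Sites/redmine/public"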

  • Internal disk not correctly recognised by Windows 7

    - by david
    I'm having problems configuring a disk in a brand new, clean Windows 7 install. Here are some system specifics:

        disk:  Western Digital VelociRaptor WD6000HLHX
        mobo:  Gigabyte Z77X-UD3H
        BIOS:  SATA mode set to AHCI (not RAID), with the disk connected to SATA0 (6 Gb/s hi-speed SATA)
        OS:    Windows 7 Enterprise SP1 x64

    The disk is recognized by the BIOS and correctly identified (name & size OK). The disk is also recognized by Windows on a hardware level, but it won't show up in Explorer. Windows reports the device is working correctly. Windows Disk Management shows the drive, but says it's uninitialized and has no partitions (which is incorrect). If I try to initialize the drive, Windows throws an error saying that it "cannot find the file specified". (Which file???) Before connecting the drive to the new machine, I partitioned and formatted the disk under Windows XP SP2, giving it 2 partitions (MBR, not GPT) and copying over a boatload of data. Obviously none of this appears under Windows 7. Removing the disk from the new machine and placing it back in the Windows XP machine shows the disk and all data are intact and functional. I'd like to have Windows 7 recognize the disk without having to lose the data and start over. Is this possible? If so, how would I do that? I checked this post, but even though the problem seems identical, the information didn't help. Any help appreciated. Thanks!
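
    One non-destructive way to see what Windows 7 actually thinks is on the disk is a diskpart inspection - a sketch; "disk 1" is an assumption, check the output of list disk first:

        diskpart
        DISKPART> list disk
        DISKPART> select disk 1
        DISKPART> detail disk
        DISKPART> list partition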

  • Apache reaching MaxClients and locking the server

    - by Rodrigo Sieiro
    Hi. I currently have an Apache2 server running with mpm-prefork and mod_php on an OpenVZ VPS with 512M real / 1024M burstable RAM (no swap). After running some tests, I found that the maximum process size Apache gets is 23M, so I've set MaxClients to 25 (23M x 25 = 575 MB, OK for me). I decided to run some load tests on my server, and the results left me puzzled. I'm using ab on my desktop machine, requesting the main page from a WordPress blog. When I run ab with 24 concurrent connections, everything seems fine. Sure, CPU goes up, free RAM goes down, and the result is about 2-3s response time per request. But if I run ab with 25 concurrent connections (my server limit), Apache just hangs after a couple of seconds. It starts processing the requests, then it stops responding, CPU goes back to 100% idle and ab times out. The Apache log says it reached MaxClients. When this happens, Apache keeps itself locked up with 25 running processes (they're all in "W" if I check server status), and only after the Timeout interval do the processes start to die and the server starts responding again (in my case it's set to 45). My question: is that expected behaviour? Why does Apache just die when it reaches MaxClients? If it works with 24 connections, shouldn't it work with 25, just maybe taking more time to respond to each request and queueing up the rest? It sounds kind of strange to me that any kid running ab can single-handedly kill a webserver just by setting the concurrent connections to the server's MaxClients.
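
    For reference, here is the sizing math above expressed as an mpm-prefork block - a sketch assuming ~23 MB resident per child, not a drop-in config:

        # Sizing sketch: 25 x ~23 MB = ~575 MB, within the 1024 MB burst limit
        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            ServerLimit          25
            MaxClients           25
            # Recycle children periodically so leaked memory is returned
            MaxRequestsPerChild 500
        </IfModule>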

  • cyrus-sasl-lib issue on CentOS 5.3 (while installing GUI)

    - by sxanness
    I am attempting to install GNOME on a CentOS 5.3 server install so that I can speed up the process that I am working on. I ran a yum groupinstall for the X Window System and GNOME, but I keep getting the following errors:

        Package cyrus-sasl-plain needs cyrus-sasl-lib = 2.1.22-4, this is not available.
        Package cyrus-sasl needs cyrus-sasl-lib = 2.1.22-4, this is not available.
        Package cyrus-sasl-plain needs cyrus-sasl-lib = 2.1.22-4, this is not available.
        Package cyrus-sasl needs cyrus-sasl-lib = 2.1.22-4, this is not available.
        Complete!

    The first thing I checked was what version of cyrus-sasl-lib I had installed:

        Installed Packages
        cyrus-sasl-lib.i386      2.1.22-4        installed
        cyrus-sasl-lib.x86_64    2.1.22-4        installed
        Available Packages
        cyrus-sasl-lib.x86_64    2.1.22-5.el5    base
        cyrus-sasl-lib.i386      2.1.22-5.el5    base

    Anyone know how I can get around this and install the stuff I need so that I can start a GUI on this machine? Thanks in advance.
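
    Since the installed cyrus-sasl-lib (2.1.22-4) is behind what the base repo offers (2.1.22-5.el5), one plausible way out is to bring the whole cyrus-sasl family up to date first and then retry the group install - a sketch, assuming the base repository is reachable:

        yum update "cyrus-sasl*"
        yum groupinstall "X Window System" "GNOME Desktop Environment"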

  • COM+/Desktop Heap errors in IIS affecting sites at random?

    - by tresstylez
    We have a Win2K3 server that is hosting 30+ sites. Each site is configured to have its own unique application pool - so that we can manually recycle specific sites if needed and not kill sessions for the others. From what I've read, the consequence of this type of setup is that each application pool worker process gets allocated a desktop heap (normally 512 KB), which limits the number of app pools we can serve. http://blogs.msdn.com/b/david.wang/archive/2006/01/25/security-considerations-of-usesharedwpdesktop-on-iis6.aspx PROBLEM: What we're seeing is that occasionally COM+ errors get triggered, presumably by hitting our 512 KB desktop heap limit - and certain sites become unresponsive (or throw errors) until we manually recycle that specific app pool. I know that I can increase the desktop heap limit to 1024 KB and make other tweaks/tunes, but I've been tasked with finding out what exactly causes one site's heap to max out as opposed to another. It seems that when we start seeing COM+ errors, the sites it affects are random - small sites or big sites (more heavily used). Is it based on process ID? Traffic? Any pointers on understanding this a little more would be excellent. Thanks! jg
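
    For inspection: the desktop heap sizes live in the SharedSection portion of a registry value, and the third number is the size (in KB) of each non-interactive desktop - the kind IIS worker processes get. A read-only check looks like this (the 512 -> 1024 change mentioned above would be made to that third field):

        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems" /v Windows
        rem Look for ...SharedSection=1024,3072,512... in the output;
        rem the third value is the non-interactive desktop heap in KB.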

  • Is there a tool that can test what SSL/TLS cipher suites a particular website offers?

    - by Jeremy Powell
    Is there a tool that can test what SSL/TLS cipher suites a particular website offers? I've tried openssl, but if you examine the output:

        $ echo -n | openssl s_client -connect www.google.com:443
        CONNECTED(00000003)
        depth=1 /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
        verify error:num=20:unable to get local issuer certificate
        verify return:0
        ---
        Certificate chain
         0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
           i:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
         1 s:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
           i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
        ---
        Server certificate
        -----BEGIN CERTIFICATE-----
        MIIDITCCAoqgAwIBAgIQL9+89q6RUm0PmqPfQDQ+mjANBgkqhkiG9w0BAQUFADBM
        MQswCQYDVQQGEwJaQTElMCMGA1UEChMcVGhhd3RlIENvbnN1bHRpbmcgKFB0eSkg
        THRkLjEWMBQGA1UEAxMNVGhhd3RlIFNHQyBDQTAeFw0wOTEyMTgwMDAwMDBaFw0x
        MTEyMTgyMzU5NTlaMGgxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlh
        MRYwFAYDVQQHFA1Nb3VudGFpbiBWaWV3MRMwEQYDVQQKFApHb29nbGUgSW5jMRcw
        FQYDVQQDFA53d3cuZ29vZ2xlLmNvbTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkC
        gYEA6PmGD5D6htffvXImttdEAoN4c9kCKO+IRTn7EOh8rqk41XXGOOsKFQebg+jN
        gtXj9xVoRaELGYW84u+E593y17iYwqG7tcFR39SDAqc9BkJb4SLD3muFXxzW2k6L
        05vuuWciKh0R73mkszeK9P4Y/bz5RiNQl/Os/CRGK1w7t0UCAwEAAaOB5zCB5DAM
        BgNVHRMBAf8EAjAAMDYGA1UdHwQvMC0wK6ApoCeGJWh0dHA6Ly9jcmwudGhhd3Rl
        LmNvbS9UaGF3dGVTR0NDQS5jcmwwKAYDVR0lBCEwHwYIKwYBBQUHAwEGCCsGAQUF
        BwMCBglghkgBhvhCBAEwcgYIKwYBBQUHAQEEZjBkMCIGCCsGAQUFBzABhhZodHRw
        Oi8vb2NzcC50aGF3dGUuY29tMD4GCCsGAQUFBzAChjJodHRwOi8vd3d3LnRoYXd0
        ZS5jb20vcmVwb3NpdG9yeS9UaGF3dGVfU0dDX0NBLmNydDANBgkqhkiG9w0BAQUF
        AAOBgQCfQ89bxFApsb/isJr/aiEdLRLDLE5a+RLizrmCUi3nHX4adpaQedEkUjh5
        u2ONgJd8IyAPkU0Wueru9G2Jysa9zCRo1kNbzipYvzwY4OA8Ys+WAi0oR1A04Se6
        z5nRUP8pJcA2NhUzUnC+MY+f6H/nEQyNv4SgQhqAibAxWEEHXw==
        -----END CERTIFICATE-----
        subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
        issuer=/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 1777 bytes and written 316 bytes
        ---
        New, TLSv1/SSLv3, Cipher is AES256-SHA
        Server public key is 1024 bit
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol  : TLSv1
            Cipher    : AES256-SHA
            Session-ID: 748E2B5FEFF9EA065DA2F04A06FBF456502F3E64DF1B4FF054F54817C473270C
            Session-ID-ctx:
            Master-Key: C4284AE7D76421F782A822B3780FA9677A726A25E1258160CA30D346D65C5F4049DA3D10A41F3FA4816DD9606197FAE5
            Key-Arg   : None
            Start Time: 1266259321
            Timeout   : 300 (sec)
        Verify return code: 20 (unable to get local issuer certificate)
        ---

    it just shows that the cipher suite is something with AES256-SHA. I know I could grep through the hex dump of the conversation, but I was hoping for something a little more elegant. I would prefer Linux tools, but Windows (or others) would be fine. This question is motivated by the security testing I do for PCI and general penetration testing. Update: GregS points out below that the SSL server picks from the client's cipher suites. So it seems I would need to test all cipher suites one at a time. I think I can hack something together, but is there a tool that does specifically this?
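
    A brute-force loop over openssl's own cipher list is the usual hack for exactly this; dedicated scanners such as sslscan automate the same idea. A minimal sketch, keeping www.google.com as the example target from the question:

        for c in $(openssl ciphers 'ALL:COMPLEMENTOFALL' | tr ':' ' '); do
          # s_client exits non-zero when the handshake with that single cipher fails
          if echo -n | openssl s_client -connect www.google.com:443 -cipher "$c" >/dev/null 2>&1; then
            echo "supported: $c"
          fi
        done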

  • Delay before download starts when serving files using nginx

    - by glumbo
    I am currently using nginx to serve downloads off my website. Users sometimes need to wait about 5 seconds before their download starts after clicking a download link. I'm not sure if I need to start using RAID 10 (I'm currently using RAID 50) or if this is a problem with my nginx configuration. I am also on a 1 Gbit line, but downloads sometimes go as low as 10 kB/s. My server: dual Xeon 5620 CPUs, 12x 2TB drives, 8 GB RAM. This is my nginx.conf:

        #user nobody;
        worker_processes 12;
        worker_rlimit_nofile 10240;
        worker_rlimit_sigpending 32768;
        error_log logs/error.log crit;
        #pid logs/nginx.pid;

        events {
            worker_connections 2048;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            access_log off;
            limit_conn_log_level info;
            log_format xfs '$arg_id|$arg_usr|$remote_addr|$body_bytes_sent|$status';
            #sendfile on;
            #tcp_nopush on;
            reset_timedout_connection on;
            server_tokens off;
            autoindex off;
            keepalive_timeout 0;
            #keepalive_timeout 65;
            limit_zone one $binary_remote_addr 10m;
            perl_modules perl;
            perl_require download.pm;
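
    One thing that stands out in the config above is that sendfile is commented out, so every download is copied through user space. A tuning sketch for large static files - the directive names are standard nginx ones, but the values are starting-point guesses, not a drop-in:

        sendfile on;
        tcp_nopush on;
        # Larger output buffers help big sequential reads off the RAID
        output_buffers 2 512k;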

  • Tomcat 7 on Ubuntu 12.04 startup issues

    - by Nico Huysamen
    I am having trouble getting Tomcat 7 to start up on my new VPS. I am really scratching my head, since I have done this often, so I'm thinking it might be the VPS. I just got a new VPS from CINFU. After a clean install of Ubuntu 12.04 32-bit, I installed openjdk-6-jdk and updated JAVA_HOME to point to /usr/lib/jvm/java-1.6.0-openjdk-i386 and JRE_HOME to /usr/lib/jvm/java-1.6.0-openjdk-i386/jre. But when I try to run ./catalina.sh run, it simply outputs:

        Using CATALINA_BASE:   /opt/tomcat/apache-tomcat-7.0.29
        Using CATALINA_HOME:   /opt/tomcat/apache-tomcat-7.0.29
        Using CATALINA_TMPDIR: /opt/tomcat/apache-tomcat-7.0.29/temp
        Using JRE_HOME:        /usr/lib/jvm/java-1.6.0-openjdk-i386
        Using CLASSPATH:       /opt/tomcat/apache-tomcat-7.0.29/bin/bootstrap.jar:/opt/tomcat/apache-tomcat-7.0.29/bin/tomcat-juli.jar

    and stops. It just hangs there doing nothing. If I run ./startup.sh && tail -f ../logs/catalina.out, it gets to:

        Aug 24, 2012 8:38:36 PM org.apache.coyote.AbstractProtocol init
        INFO: Initializing ProtocolHandler ["http-bio-8080"]
        Aug 24, 2012 8:38:36 PM org.apache.coyote.AbstractProtocol init
        INFO: Initializing ProtocolHandler ["ajp-bio-8009"]
        Aug 24, 2012 8:38:36 PM org.apache.catalina.startup.Catalina load
        INFO: Initialization processed in 495 ms
        Aug 24, 2012 8:38:36 PM org.apache.catalina.core.StandardService startInternal
        INFO: Starting service Catalina
        Aug 24, 2012 8:38:36 PM org.apache.catalina.core.StandardEngine startInternal
        INFO: Starting Servlet Engine: Apache Tomcat/7.0.29

    but I am unable to access anything. The request just hangs. I have also tried a few other things, like explicitly exporting the paths in catalina.sh and running ./startup.sh rather than catalina.sh, but the furthest I have gotten is that it finishes deploying all the WARs (the default ones that come with Tomcat, like host-manager etc.), but then it hangs:

        Aug 24, 2012 8:47:30 PM org.apache.coyote.AbstractProtocol init
        INFO: Initializing ProtocolHandler ["http-bio-8080"]

    and does nothing. Anyone have any pointers that might help? As I said, I must really be missing something stupid, since this has worked on all other VPSs that we have. UPDATE: I figured out that the problem is actually the fact that they use OpenVZ virtualization and that there are known compatibility problems with Java.
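
    A hang at exactly this point is often the JVM blocking on /dev/random while seeding SecureRandom for session IDs - a common symptom on entropy-starved VPSes, OpenVZ included. A workaround worth trying (a sketch; CATALINA_OPTS is the standard hook for extra JVM flags):

        export CATALINA_OPTS="-Djava.security.egd=file:/dev/./urandom"
        ./catalina.sh run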

  • SharePoint DB issue after DB move to SQL 08

    - by JohnyV
    Recently we moved our SharePoint 2007 DB from a SQL 2000 server to a 2008 x64 SQL server. All seems well; however, there is a problem where the SQL server stops running and the service has to be restarted. The errors mention insufficient internal memory etc. I have tried to start the DB using -g384, which was the default in SQL 2000, but 256 is the default for 2008, I believe. This has not rectified the issue. I was advised that perhaps the issue may be rectified by upgrading to WSS 3.0 SP2; however, when I tried to install this I got another error post-SP2 update and had to revert to a VM snapshot. The error after the service pack was:

        Server error: http://go.microsoft.com/fwlink?LinkID=96177

    So I guess I have a few questions: how can I fix the first issue, and the second issue? I have checked out many forums and posts and have tried a few things and still get no joy. Any assistance would be great. UPDATE: I have fixed the "Server error: http://go.microsoft.com/fwlink?LinkID=96177" issue - I needed to run the WSS SP2 as well as the Office Servers SP2, then the config wizard; then the MOSS configuration worked. The errors I am getting in SQL are:

        SQL Server was unable to run a new system task, either because there is insufficient memory or the number of configured sessions exceeds the maximum allowed in the server. Verify that the server has adequate memory. Use sp_configure with option 'user connections' to check the maximum number of user connections allowed. Use sys.dm_exec_sessions to check the current number of sessions, including user processes.

        A read operation on a large object failed while sending data to the client. A common cause for this is if the application is running in READ UNCOMMITTED isolation level. The connection will be terminated.

        There is insufficient system memory in resource pool 'internal' to run this query.

    These errors are generated by a user that was created as a service account for SharePoint.
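
    The first error message itself names two checks; in T-SQL they look like this (a sketch - run from SSMS against the new 2008 instance):

        -- Maximum number of user connections allowed (0 = unlimited)
        EXEC sp_configure 'user connections';
        -- Current number of sessions, including user processes
        SELECT COUNT(*) AS current_sessions FROM sys.dm_exec_sessions;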

  • How do I prevent Pidgin from loading XMPP chat room history on join?

    - by Mr. Jefferson
    In Pidgin, when I join a chat room, it loads the chat room history. iChat on the Mac has a preference in the Accounts section to set a variable amount of history to load, or disable loading history entirely. How do I do the same thing in Pidgin? Is there a preference somewhere that I've missed? The object is to have the chat room start fresh each day, so I'd also be fine with disabling chat room history entirely on the server if that's possible. But I didn't see that option either when I looked in Server Admin on the server. I found this list of XMPP room types, and it looks like creating a Temporary Room might be the best way to do this, but I don't want to have to create the room manually every morning. Right now I've got Pidgin set to auto-join the room when I log in; I want it to do that without loading history. EDIT: The XMPP multi-user chat spec referenced above also contains a section on managing history. And I got this to work by pulling up the XMPP Console plugin in Pidgin, copying the <presence /> stanza it sent when I joined the room, closing the room, pasting the stanza into the console, adding the <history /> element and sending it. When I opened the room again, I had no history. But it all came back the next time! So: how do I get Pidgin to send the <history /> stanza by default?
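
    For reference, the join-time stanza that the MUC spec (XEP-0045) describes for suppressing history looks roughly like this - the room JID and nick are placeholders, and maxstanzas='0' is the "no history" request the poster added by hand in the XMPP Console:

        <presence to='room@conference.example.com/mynick'>
          <x xmlns='http://jabber.org/protocol/muc'>
            <history maxstanzas='0'/>
          </x>
        </presence>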

  • WDS: updating RAID drivers in an existing WIM image

    - by Tim
    Here is my current setup: WDS installed on Server 2008 R2 for the new driver store and multicast features; a Windows Server 2003 32-bit Standard image built to support previous DL360 models; and a new HP DL360 G6 which has a new RAID controller in it. I need to add the driver for the RAID controller into my Server 2003 32-bit Standard install image, but I can't seem to figure out the correct method to do so. So far I've tried the following: mounting the image, placing the drivers into the sysprep drivers folder, adding the PCI device codes into the sysprep.inf file and committing the changes to the image; pushing the image to a DL360 G4, ensuring the driver is in the correct locations and re-sysprepping the image; and hoping that the new driver store feature would magically work with 2003 (a guy can dream, can't he?). Is there some standard method that I can use to update this image with the new drivers, or do I need to start from scratch with an entirely new build? Thanks in advance.
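
    One thing that may be biting here: on XP/2003, mass-storage (RAID) drivers are not picked up from the normal PnP driver path at mini-setup time - they have to be declared in the [SysprepMassStorage] section of sysprep.inf before sysprep reseals the image. A sketch of the relevant sysprep.inf pieces (the paths and hardware ID are placeholders for the G6 controller's actual values):

        [Unattended]
        OemPnPDriversPath=Drivers\Storage;Drivers\NIC

        [SysprepMassStorage]
        ; "sysprep -bmsd" pre-populates this section from the inbox INFs;
        ; the new controller's hardware ID is then appended by hand, e.g.:
        ; PCI\VEN_xxxx&DEV_xxxx = C:\Drivers\Storage\yourraid.inf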

  • Setting up multiple servers for one domain

    - by Joseph Torraca
    So I am starting up a new website and I was wondering how to set up 5 servers to host the site. I have already purchased 5 Apple Xserves; one will be used as a test server and the other 4 will be for the live site. I have read some websites on the internet, and they all reference using one server, installing software onto it, and having that server do the load balancing. I have also read that you could use a hardware, rack-mounted system and plug the servers into that; the load balancer would then distribute the load. So I have a few questions about each:

    1) How do you set up the software version and have the other servers as "slaves", with one "master" to direct traffic?
    2) Which of the two options above is more reliable and better suited for a startup that doesn't have many users per month, yet (hopefully)?
    3) Is there a theoretical max limit of servers that can be connected to a software load balancing system? Note: obviously this will change from software to software, but in terms of the server being able to handle it?
    4) In your own opinion, what are you using for your sites? Have you had any problems setting up that system or operating it once it's running? Are there any things you would stay away from if you had to start over?
    5) I also purchased an Apple RAID system, so if you are familiar with it, is there any way to connect it to multiple Xserves so they all serve the same data? I'm a little confused on this, so thanks for all your help and being patient with me.

    Note: Take it easy on me, I am learning this as I go along, so I may have used terms incorrectly or explained things that don't really make sense. Sorry. P.S. If you need me to supply the specs on the servers to determine which system makes the most sense, I can post them for you.

  • Files Corrupted on System Restore

    - by Yar
    I restored OS X 10.6.2 today (it was 10.6.3 and not booting) by copying the system over from a backup. The data directories were not touched. I am seeing some files as 0 bytes and getting permission-denied errors when copying, even when using sudo cp or the Finder itself. Some programs, by contrast, take the files at face value and see no permission problems (such as zip), but they see the files as zero bytes, which would be game over for recovery.

        cp: .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: could not copy extended attributes to /eraseme/blah/.git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: Operation not permitted

    I have tried sudo chown, sudo chmod -R 777 and sudo chflags -R nouchg, which do not change the end result. Strangely, this is only affecting my .git directories (perhaps because they start with a period, but renaming them - which works - does not change anything). What else can I do to take ownership of these files? Edit: This question comes from Stack Overflow because I originally thought it was a Git problem. It's definitely not (just) Git. Anyway, this is to help put some of the comments in context.
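
    Since chown/chmod/chflags are exhausted and the error specifically mentions extended attributes, inspecting ACLs and xattrs is the next obvious probe - a sketch using the macOS-specific flags, with the path taken from the error above:

        # -e shows ACL entries, -@ shows extended attribute names
        ls -le@ .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca
        # List the extended attributes themselves
        xattr -l .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca
        # macOS chmod can strip all ACLs recursively
        sudo chmod -RN .git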

  • When I restart my virtual environment it does not re-bind to the IP address

    - by RoboTamer
    The IP no longer responds to a remote ping. By restart I mean:

        lxc-stop -n vm3
        lxc-start -n vm3 -f /etc/lxc/vm3.conf -d

    /etc/network/interfaces:

        auto lo
        iface lo inet loopback
            up route add -net 127.0.0.0 netmask 255.0.0.0 dev lo
            down route add -net 127.0.0.0 netmask 255.0.0.0 dev lo

        # device: eth0
        auto eth0
        iface eth0 inet manual

        auto br0
        iface br0 inet static
            address 192.22.189.58
            netmask 255.255.255.248
            gateway 192.22.189.57
            broadcast 192.22.189.63
            bridge_ports eth0
            bridge_fd 0
            bridge_hello 2
            bridge_maxage 12
            bridge_stp off
            post-up ip route add 192.22.189.59 dev br0
            post-up ip route add 192.22.189.60 dev br0
            post-up ip route add 192.22.189.61 dev br0
            post-up ip route add 192.22.189.62 dev br0

    /etc/lxc/vm3.conf:

        lxc.utsname = vm3
        lxc.rootfs = /var/lib/lxc/vm3/rootfs
        lxc.tty = 4
        #lxc.pts = 1024                          # pseudo tty instance for strict isolation
        lxc.network.type = veth
        lxc.network.flags = up
        lxc.network.link = br0
        lxc.network.name = eth0
        lxc.network.mtu = 1500
        #lxc.cgroup.cpuset.cpus = 0              # security parameter
        lxc.cgroup.devices.deny = a              # Deny all access to devices
        lxc.cgroup.devices.allow = c 1:3 rwm     # dev/null
        lxc.cgroup.devices.allow = c 1:5 rwm     # dev/zero
        lxc.cgroup.devices.allow = c 5:1 rwm     # dev/console
        lxc.cgroup.devices.allow = c 5:0 rwm     # dev/tty
        lxc.cgroup.devices.allow = c 4:0 rwm     # dev/tty0
        lxc.cgroup.devices.allow = c 4:1 rwm     # dev/tty1
        lxc.cgroup.devices.allow = c 4:2 rwm     # dev/tty2
        lxc.cgroup.devices.allow = c 1:9 rwm     # dev/urandom
        lxc.cgroup.devices.allow = c 1:8 rwm     # dev/random
        lxc.cgroup.devices.allow = c 136:* rwm   # dev/pts/*
        lxc.cgroup.devices.allow = c 5:2 rwm     # dev/pts/ptmx
        lxc.cgroup.devices.allow = c 254:0 rwm   # rtc

        # mount points
        lxc.mount.entry=proc /var/lib/lxc/vm3/rootfs/proc proc nodev,noexec,nosuid 0 0
        lxc.mount.entry=devpts /var/lib/lxc/vm3/rootfs/dev/pts devpts defaults 0 0
        lxc.mount.entry=sysfs /var/lib/lxc/vm3/rootfs/sys sysfs defaults 0 0
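
    A guess worth testing: each lxc-start creates a fresh veth pair, usually with a new random MAC, while the upstream gateway may keep caching the old MAC for that IP until its ARP entry expires. Two quick probes after lxc-start - a sketch, assuming 192.22.189.59 is vm3's address:

        # From inside vm3: announce its IP/MAC to the upstream gateway
        arping -U -I eth0 -c 3 192.22.189.59
        # On the host: confirm the route for the container is still present
        ip route show | grep 192.22.189.59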

  • I get a 502 bad gateway ONLY with a specific combination of domain/root folders - NGINX

    - by Patrick De Amorim
    I have a VPS running NGINX and virtual hosts, with a configuration such as this. Domains directing to it: lolpics.no, smscloud.no, idmag.no. Root folders: /home/vds/www/lolpics, /home/vds/www/smscloud, /home/vds/www/idmag. SMSCloud.no is the site that keeps getting 502 errors, but if I make the domain direct to any of the other folders, the site works; or if I make any other domain name direct to the /home/vds/www/smscloud folder, it works. Only smscloud.no with /home/vds/www/smscloud breaks. I tried putting this between the http{} in my nginx.conf, and it didn't help:

        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;

    EDIT: Well, that was slightly silly. If anyone from Google stumbles on this, here's how I fixed it - I just added this to the http{}:

        fastcgi_buffer_size 16k;
        fastcgi_buffers 16 16k;

    So that the start of my http block is:

        http {
            include /etc/nginx/mime.types;
            proxy_buffer_size 128k;
            proxy_buffers 4 256k;
            proxy_busy_buffers_size 256k;
            fastcgi_buffer_size 16k;
            fastcgi_buffers 16 16k;

  • Monitor Exchange Email Address and run scripts

    - by WernerCD
    Okay... not sure how "out there" this thought is... Right now, to send a pager message (aka text message), a user logs into our AS400, logs into the program, enters a user name and message, and hits F10 to send. With a little looking, it seems that you can run remote commands on the AS400 via FTP. So I'm working on building a script (batch or otherwise) that, given two parameters (user, message), will FTP into the AS400 and run a remote command:

        c:\>ftp server
        user: admin
        password: *****
        ftp> quote rcmd SNDPGRMSG TOPGR(JDOE) MSG('This is a Test')
        ftp> quit

    So... what I want to do is:

    1) set up an email account on our Exchange server
    2) monitor the account for incoming mail
    3) upon receipt of incoming mail, parse it... say, for example, the subject is defined as "Recipient" and the email text is defined as "Pager message"
    4) run a batch that uses the above-mentioned TOPGR and MSG as parameters... via FTP to the AS400
    5) mark the email as "read"

    The main thing I'm not sure about is monitoring an Exchange account and running a script on incoming emails (see the sketch below). I'm sure what I want to do is possible... but where would I start? EDIT: Clarification: the main reasons for using this four-part system are logging (messages sent via this are logged and reported by the AS400 program) and the existing scheduler for redirecting pages (for example, the weekly on-call person = TOPGR(oncall) gets updated by the AS400 program). I'm also trying to remove duplicate work. If I can get this setup working, I can redirect pages from OTHER systems into this one. I then don't have to update 2, soon to be 3, systems with current phone numbers, carriers, on-call schedules, etc. Systems #2 and #3 can just "email" [email protected].
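
    On the "monitor an Exchange account" half: one low-ceremony approach is a scheduled PowerShell script driving the Outlook COM object on a machine whose mail profile opens that mailbox - a sketch; sendpage.bat is a hypothetical wrapper around the FTP/RCMD step shown above:

        $outlook = New-Object -ComObject Outlook.Application
        $inbox   = $outlook.GetNamespace("MAPI").GetDefaultFolder(6)   # 6 = olFolderInbox
        $unread  = @($inbox.Items | Where-Object { $_.UnRead })
        foreach ($mail in $unread) {
            # Subject = pager recipient, body = pager message, per the scheme above
            cmd /c sendpage.bat "$($mail.Subject)" "$($mail.Body)"
            $mail.UnRead = $false    # mark as read
        }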

  • authbind, privbind or iptables REDIRECT (port 80 to 8080)?

    - by chris_l
    Hi, I'd like to run GlassFish v3 as a non-privileged user on Linux (Debian), but make it available on port 80. I'm currently doing this with iptables:

        iptables -t nat -I PREROUTING -p tcp -d x.x.x.x --dport 80 -j REDIRECT --to-port 8080

    This works, but I wonder:

    1) if this has any significant performance impact compared to binding directly to port 80
    2) if I could make a similar setup also work for HTTPS (or if that must run on 443)
    3) if there's a way to keep other users from binding to port 8080 (in case my server crashes) - maybe block that port permanently for other users somehow?

    ...or if I should use authbind/privbind instead? Problem: I couldn't make it work with authbind or privbind so far. For authbind, I edited asadmin's last line to:

        exec authbind --deep "$JAVA" -Djava.net.preferIPv4Stack=true -jar ...

    For privbind:

        exec privbind -u glassfish "$JAVA" -Djava.net.preferIPv4Stack=true -jar ...

    (Only) with these settings, I can successfully perform a create-domain --domainport 80. This proves that authbind and privbind actually work (the authbind version of the script is called by the glassfish user; the privbind version is called by root, of course). However, in both cases I get the following exception when starting the domain (start-domain):

        [#|2010-03-20T13:25:21.925+0100|SEVERE|glassfishv3.0|javax.enterprise.system.core.com.sun.enterprise.v3.server|_ThreadID=11;_ThreadName=FelixStartLevel;|Shutting down v3 due to startup exception : Permission denied: 80=com.sun.enterprise.v3.services.impl.monitor.MonitorableSelectorHandler@1fc25e5|#]

    I haven't found a solution for that yet (after searching the web, it seems that this isn't so easy?). But maybe the solution with iptables is good enough - what do you think? Thanks, Chris
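
    On point 2: the same REDIRECT trick works for HTTPS as long as GlassFish's secure listener terminates TLS itself - the rewrite happens in the kernel's NAT table, so the per-packet cost is tiny in both cases. A sketch, keeping x.x.x.x as the placeholder address; 8181 is GlassFish v3's default HTTPS listener port:

        iptables -t nat -I PREROUTING -p tcp -d x.x.x.x --dport 443 -j REDIRECT --to-port 8181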

  • Virtual machine loses network connectivity on Hyper-V cluster

    - by Chris W
    We're running a number of VMs on a 6-node failover cluster of blades using Hyper-V. We have an intermittent issue (every few days, at different times - not a fixed frequency) of VMs losing network connectivity. Console access to the VM suggests all is fine, and the underlying blade has normal connectivity. To resolve the problem we either have to restart the VM or, more usually, we do a live migration to another blade, which brings connectivity back, and we then migrate it back to the original blade. I've had 3 instances of this happen with a specific VM running on a particular blade; however, it has happened once with a different VM running on a different blade. All VMs and blades have the same basic setup and are running Windows 2008 R2. Any ideas where I should be looking to diagnose the possible causes of this problem, as the event logs provide no help? Edit: I've checked that each blade is running the latest NIC drivers and all seem to be fine. Something that is confusing me: a failover or restart of the VM resolves the issue. While I need to work out the underlying issue that is causing the NICs to hang, I'm also concerned that the VM didn't fail over to another node, which would have solved the outage for me. Is there a way to configure the cluster so that it can tell that the VM guest has lost connectivity and fail it over? As things stand, the cluster assumes that the VM is running happily, as I presume Hyper-V says everything is great even though there is a problem.

  • Problem recreating BCD on Windows 7 64bit - The requested system device cannot be found

    - by Domchi
    An NVIDIA driver upgrade crashed my Windows 7 installation, so I'm working to undo the damage. What I can do: I can boot the Windows install from the USB drive, and I can boot the Hiren's Boot CD. Although automated Windows repair fails, I can get to a command prompt when I boot the Windows install from USB, and I can see my drive and all my data. What I cannot do: I cannot boot into Windows - I get this message:

        Windows failed to start. A recent hardware or software change might be the cause.
        To fix the problem:
        1. Insert Windows CD and run the "repair your computer" option.
        File: /Boot/BCD
        Status: 0xc000000f
        Info: an error occurred while attempting to read the boot configuration data.

    It seems that something is wrong with my /Boot/BCD, so I'm trying to recreate it from scratch. I've tried all the methods detailed here (including Windows repair, which fails), and I'm left with the last one (near the bottom of that page). When I type the following command as in the tutorial:

        bcdedit.exe /import c:\boot\bcd.temp

    ...it fails with the following error:

        The store import operation has failed.
        The requested system device cannot be found.

    Many Google results say that I must use diskpart to set my partition active; however, it's already set as active. Also, when I try:

        bcdedit /enum

    it fails with a similar message:

        The boot configuration data store could not be opened.
        The requested system device cannot be found.

    Does anyone know what that error message means, and what is the requested system device? I'd like to avoid having to reinstall Windows, since all the files on disk seem to be fine.
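
    As for the error text: the "requested system device" is usually the partition bcdedit expects to hold the BCD store, which it can fail to resolve when booted from USB/recovery media. For reference, the usual recovery-console sequence when the store itself is damaged looks like this - a sketch, assuming Windows is on C: as seen from the recovery environment (drive letters often differ there):

        bootrec /fixmbr
        bootrec /fixboot
        rem Retire the damaged store so a fresh one can be written
        attrib -h -s C:\Boot\BCD
        ren C:\Boot\BCD BCD.old
        bootrec /rebuildbcd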

  • Put one monitor of a dual-monitor Windows system into standby

    - by Psycogeek
    Standby, not disabled! When running 2 monitors on Windows 7 or Windows XP, I would like to be able to put one of the monitors at a time into standby. The method can be manual. When running 2 monitors, the second monitor is not always needed. Shutting off the monitor's own power switch will turn it off; that does work OK. The problems with that are the delay with the monitor logo at turn-on, the power switch not being very accessible, and the switch might not survive being flipped so many times. Using disable methods like devcon, Win-P and Display causes all the windows to properly move to the other monitor. While that is what a person would want to happen so they can get hold of the windows, that is not what I want to happen, and some things on the other monitor have to be re-arranged after a re-enable. By putting it into standby mode, nothing changes other than the monitor going into standby. Disconnecting the DVI cable can still cause the system to (properly) shift all the windows over to the one monitor, just like any of the disable methods do. That makes a mess of the windows, and is so unacceptable that I would prefer to leave the monitor on, wasting power and the hardware, when it could easily go into standby for some time. For both monitors I am using a "MonitorOff" program that puts both monitors into standby, but I cannot find a utility that will put only ONE monitor into standby under Windows. If someone comes along and suggests "UltraMon", you must know for a fact that it will put one of either of the monitors into actual standby. And it does not really suit me to use UltraMon; I tested it (it was nice) and I did not feel that it was a program I wanted. The 2 monitors are running off an ATI 4890 card, they are both hooked up via DVI-I, and the OS is both Windows 7 (primary) and Windows XP. In addition, it would also be interesting to have separate standby activity timers and follow-the-mouse kinds of standby changes, but any manual method - shortcut, batch, tray, or gadget kind of operation - would be a good start.

  • socket problem with MySQL

    - by Hristo
    This is a recent problem... MySQL was working, and a couple of days ago I must have done something. I deleted MySQL and tried reinstalling using the .dmg file. The mysql.sock file never gets created, and I get the following error messages:

        Hristo$ mysql
        Enter password:
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/mysql/mysql.sock' (2)

    I also tried stopping Apache and installing, but Apache gave me an error... I don't know if this is good or bad:

        Hristo$ sudo apachectl stop
        launchctl: Error unloading: org.apache.httpd

    I tried the MacPorts installation as well, but the socket file still didn't get created. I don't really know what to do, and I don't want to reinstall Snow Leopard and start from scratch :/ I also tried installing the 32-bit version; same deal, no luck. Finally, I tried doing the source installation, but when I get to the configuration step, I get the following error:

        -bash: ./configure: No such file or directory

    The file is "mysql-5.1.47-osx10.6-x86_64.tar.gz", so I think it is the proper file for source installation, and yes, I have a 64-bit system. I don't know what to do anymore. Any ideas? Thanks, Hristo
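
    Two observations, offered as guesses: a tarball named osx10.6-x86_64 is a prebuilt binary distribution rather than a source one, which would explain the missing ./configure. And ERROR 2002 only proves the client is looking in /var/mysql/mysql.sock, while the .dmg builds default to /tmp/mysql.sock - if the server is actually running, a symlink is a quick test:

        # Check whether mysqld is running and where its socket really is
        ps aux | grep mysqld
        # Point the expected location at the default one (assumes /tmp/mysql.sock exists)
        sudo mkdir -p /var/mysql
        sudo ln -s /tmp/mysql.sock /var/mysql/mysql.sock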

  • Weird mouse/keyboard freezups when using PowerPoint 2007 with IBM/Lenovo docking station

    - by DanM
    I'm not sure what part of my system is responsible for this, but when using PowerPoint, I have problems when trying to resize drawing objects. I'll be dragging the handle and suddenly the object will deselect, and whatever is behind the object will select and start moving around. The next thing I know, the keyboard won't type anymore, and the only way to fix it is to unplug the USB and plug it back in. In case it's hardware related: I'm using an IBM ThinkPad T60p in a docking station. My keyboard is a Microsoft Natural Keyboard Pro. My OS is Windows XP SP3. I've never noticed this happening in anything besides PowerPoint, and I don't know anyone else who has this problem (even people with similar setups). Any ideas what it could be? Edit: Well, it looks like I only get the problem if I plug the mouse into my docking station's USB. If I plug directly into the laptop's USB, everything works fine. And again, this problem occurs only in PowerPoint. I tried playing with some drawing objects in Word and had no issue no matter where my mouse was plugged in. I should also mention I tried a different mouse (a standard Microsoft corded mouse instead of my Logitech trackball), but that made no difference. So I don't think it's anything specific to the trackball or the trackball's driver. I tried searching Google but came up empty, so I'm guessing this problem is something unique to my setup. If you have any thoughts or ideas to try, I'd love to hear them.

  • Latency between IIS and SQL on the same physical host, two VMs

    - by Jerad Rose
    I have a single server (2x 4-core CPUs, 32 GB RAM) that is a Windows Server 2012 Hyper-V host, and it hosts two guest VMs (also Windows Server 2012 instances). One of them is a web server, the other is a SQL server. When hitting a page that loops over 50 records, there is noticeable latency. I capture and report the timings of each iteration of the loop, and each iteration is about 20-30 milliseconds. Of course, this amounts to over a second of latency for the whole loop. I thought maybe SQL needed to be tuned, but running Profiler on it, the queries show almost 0 duration, so it seems the bottleneck is in transit between the two VMs. I have both VMs configured to use the actual NIC (vs. using a VNIC), so maybe that's part of my problem. Also, this is a classic ASP site, so it's using the SQL OLE DB provider, and I'm wondering if that is part of the problem. This is a new server setup, migrated from an existing Windows 2003/IIS6 setup where both web and DB ran on the same server instance (no virtualization). On that setup, there is no such latency when looping over the cursor like this. But there are so many variables, I'm not sure where to start ruling things out.

  • 2 Server FC SAN Configuration

    - by BSte
    I have 2 identical servers:

        - 48 GB RAM
        - 8 GigE NICs
        - 2 FC NICs
        - 2x 72 GB RAID1 hard drives
        - Server 2008 R2 host

    I also have a Fibre Channel SAN:

        - 16x 146 GB RAID10 hard drives
        - 2x dual-port FC controllers (controllers A and B both have ports 1 and 2)
        - Server 1 has fibre to ports A1 and B1
        - Server 2 has fibre to ports A2 and B2
        - I kept the default config with 1 virtual disk and 1 volume
        - The default mappings show ports A1, A2, B1, B2 on LUN 0 with read-write

    My goal is:

        - 2x VMs with IIS and guest-level failover
        - 2x VMs with SQL 2008 Enterprise using a single DB and guest-level failover
        - 1x VM that is an application server, preferably with host failover. From what I read, this will also need AD for clustering to work.
        - I need at least 1 VM always running for IIS and the SQL DB. This includes hardware failover and application failover (i.e., rebooting a VM for critical updates).

    I was told I could install the VMs and run them from the SAN, and this is what I've tried:

    1) Installed MPIO and Hyper-V on Server 1 and Server 2.
    2) Added the SAN as disk E: on both servers, made it GPT and formatted it NTFS.
    3) Configured Hyper-V on both servers to store to E:\VD and E:\VHD.
    4) On Server 1, I was able to install 3 VMs on the SAN and all worked well.
    5) On Server 2, I would start installing the other 2 VMs, but always at some point the VMs would get a corrupt .VHD message (on either server).

    Everything I found about the message typically related to antivirus, so I removed all antivirus on both host servers (now only running 2008 R2). I reformatted drive E: (the SAN), recreated the VHD and VD directories, installed 3 VMs on Server 1, and then had the same issue when installing VMs on Server 2. Obviously something is wrong, but I'm not certain what exactly. My questions:

    1) Are my goals possible with this hardware setup? I've read that 2008 R2 supports FC SANs, but a lot of articles seem to only give examples with iSCSI setups.
    2) What would be the suggested route for setting up the SAN (disks, volumes, LUNs)? I've worked with Hyper-V on a single machine before and never had issues. Actual experience working with SANs and clustering is new to me.

    Any suggestions or recommendations to get me in the right direction would be much appreciated.
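
    One note on the symptom: a plain NTFS volume on a shared LUN is not cluster-aware, so two hosts mounting it read-write at the same time will silently corrupt it - which matches the corrupt-.VHD behaviour described above. On 2008 R2 the supported way to share one LUN between Hyper-V hosts is failover clustering with Cluster Shared Volumes; the feature install is a one-liner per host (a sketch - CSV itself is then enabled from Failover Cluster Manager):

        Import-Module ServerManager
        Add-WindowsFeature Failover-Clustering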

  • SQL Server instance shuts down

    - by user42119
    I'm currently developing a new ASP.NET project hosted on Windows Server 2008 R2 with a SQL Server 2008 Express database. I have three SQL Server instances (for different purposes) running, which currently each contain a single database. For apparently no reason, these instances tend to shut down after some days. There might be low or no traffic to these instances, because there might be some days in a row where I can't develop. It has now occurred several times that one or two of these three instances just shut down, so that I can't access the database without manually starting the instance. I can't seem to find an event log entry for the shutdown, which is most likely because I just enabled logging (why is the default setting off?). So the questions are:

    * Why does a SQL Server instance shut down? (Is there such a thing as "shut down instance after 3 days of inactivity"?)
    * How can I achieve that the instances run 24/7?

    Edit: I solved this problem by writing my own application that checks the status of the SQL Server services. My program is started via a batch file that gets called by the Windows Task Scheduler every 5 minutes.
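
    A watchdog along the lines the poster describes can be a near-one-liner batch job - a sketch; MSSQL$INSTANCENAME is the standard service naming for named instances, so the names below are placeholders to adjust:

        rem Start each service if "RUNNING" is absent from its status output
        sc query "MSSQL$INSTANCE1" | find "RUNNING" >nul || net start "MSSQL$INSTANCE1"
        sc query "MSSQL$INSTANCE2" | find "RUNNING" >nul || net start "MSSQL$INSTANCE2"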
