Search Results

Search found 19615 results on 785 pages for 'apache config'.


  • Fedora 14 update problem

    - by Marko
    How is everybody doing? :) I've been having this problem with Fedora 14 updates for the last couple of weeks. When I run yum update I get the following result:

        Running rpm_check_debug
        ERROR with rpm_check_debug vs depsolve:
        kernel-uname-r = 2.6.32.10-90.fc12.i686.PAE is needed by (installed) kmod-nvidia-2.6.32.10-90.fc12.i686.PAE-1:195.36.15-1.fc12.1.i686
        kernel-uname-r = 2.6.32.16-150.fc12.i686.PAE is needed by (installed) kmod-nvidia-2.6.32.16-150.fc12.i686.PAE-1:195.36.31-1.fc12.2.i686
        kernel-uname-r = 2.6.32.21-168.fc12.i686.PAE is needed by (installed) kmod-nvidia-2.6.32.21-168.fc12.i686.PAE-1:195.36.31-1.fc12.5.i686
        kernel-uname-r = 2.6.32.10-90.fc12.i686.PAE is needed by (installed) kmod-nvidia-2.6.32.10-90.fc12.i686.PAE-1:195.36.15-1.fc12.1.i686
        kernel-uname-r = 2.6.32.16-150.fc12.i686.PAE is needed by (installed) kmod-nvidia-2.6.32.16-150.fc12.i686.PAE-1:195.36.31-1.fc12.2.i686
        kernel-uname-r = 2.6.32.21-168.fc12.i686.PAE is needed by (installed) kmod-nvidia-2.6.32.21-168.fc12.i686.PAE-1:195.36.31-1.fc12.5.i686
        Please report this error in http://yum.baseurl.org/report
        ** Found 9 pre-existing rpmdb problem(s), 'yum check' output follows:
        VirtualBox-3.2-3.2.10_66523_fedora13-1.i686 has missing requires of libpython2.6.so.1.0
        VirtualBox-3.2-3.2.10_66523_fedora13-1.i686 has missing requires of python(abi) = ('0', '2.6', None)
        1:kmod-nvidia-2.6.32.10-90.fc12.i686.PAE-195.36.15-1.fc12.1.i686 has missing requires of kernel-uname-r = ('0', '2.6.32.10', '90.fc12.i686.PAE')
        1:kmod-nvidia-2.6.32.16-150.fc12.i686.PAE-195.36.31-1.fc12.2.i686 has missing requires of kernel-uname-r = ('0', '2.6.32.16', '150.fc12.i686.PAE')
        1:kmod-nvidia-2.6.32.21-168.fc12.i686.PAE-195.36.31-1.fc12.5.i686 has missing requires of kernel-uname-r = ('0', '2.6.32.21', '168.fc12.i686.PAE')
        mysql-workbench-gpl-5.2.28-1fc13.i386 has missing requires of libpython2.6.so.1.0
        pysvn-1.7.2-1.fc13.i686 has missing requires of python(abi) = ('0', '2.6', None)
        system-config-display-2.2-1.fc12.i686 has missing requires of libpython2.6.so.1.0
        system-config-display-2.2-1.fc12.i686 has missing requires of python(abi) = ('0', '2.6', None)

    Does anybody have a similar issue?
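
    The dependency errors above all point at leftover Fedora 12/13 packages; the kmod-nvidia builds pin kernels that no longer exist. One plausible cleanup sketch, assuming the old fc12 nvidia kernel modules are no longer needed:

        # list orphaned packages left over from fc12/fc13 (needs yum-utils)
        sudo yum install yum-utils
        package-cleanup --orphans
        # remove the stale kmod-nvidia packages that block depsolving
        sudo yum remove kmod-nvidia-2.6.32.10-90.fc12.i686.PAE \
                        kmod-nvidia-2.6.32.16-150.fc12.i686.PAE \
                        kmod-nvidia-2.6.32.21-168.fc12.i686.PAE
        sudo yum update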

    Read the article

  • NGINX: How do I calculate an optimal no. of worker processes and worker connections?

    - by bodacious
    Our web app is currently running on a Linode 2048 server (~2048 MB of RAM). The MySQL database is on another Linode of its own, so this server really only handles NGINX and the Rails application. The application itself uses about 185976 KB of memory per instance (RSS). Our traffic is < 1000 visits per day and the pages are mostly cached, so there are few hits to the Rails app itself. My question is: how can I calculate optimal NGINX config settings for my app? Below is the current config:

        worker_processes 1;
        # pid of nginx master process
        pid /var/run/nginx.pid;
        events {
            worker_connections 1024;
        }
        http {
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;
            passenger_root /home/user/.rvm/gems/ree-1.8.7-2011.01@URTV/gems/passenger-3.0.3;
            passenger_ruby /home/user/.rvm/rubies/ree-1.8.7-2011.01/bin/ruby;
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            # gzip settings
            gzip on;
            gzip_http_version 1.0;
            gzip_comp_level 2;
            gzip_vary on;
            gzip_proxied any;
            gzip_types text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            # load extra modules from the vhosts directory
            include /opt/nginx/vhosts/*.conf;
        }

    Any advice would be appreciated! :)
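
    A common rule of thumb for sizing (a sketch, not official nginx guidance): one worker per CPU core, with total capacity roughly worker_processes × worker_connections. On a single-core Linode the current settings already allow around 1024 concurrent connections, far above the stated traffic:

        # one worker per core; check the core count with: grep -c ^processor /proc/cpuinfo
        worker_processes  1;
        events {
            # max concurrent clients ≈ worker_processes × worker_connections,
            # bounded by the file-descriptor limit (see worker_rlimit_nofile)
            worker_connections  1024;
        }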

    Read the article

  • lighttpd on Fedora permission issues

    - by Isaac Gateno
    I'm trying to get started with lighttpd on Fedora 16 to run a RESTful API for development. Right now, even with the most basic sample config file, I'm getting 404 pages when I know the pages I'm pointing at exist. From reading other questions I'm leaning towards this being a permissions issue, but I'm confused about how lighttpd runs on Fedora. There's a user called "lighttpd", not "www-data"? I can't see this user in the system-config-users tool and I can't su into it to check which permissions it has. I'm pointing lighttpd at "/var/www/lighttpd", which has some example pages in it. The permissions on the files inside are set to -rw-r--r-- and the permissions on the folder containing them are drwxr-xr-x. Doesn't that mean any user can read these files? I'm not sure what else I should be checking, as I don't have much experience with server configuration. Any help would be appreciated.

    Edit: I was following the tutorial configuration here, so the lighttpd.conf file contains:

        server.document-root = "/var/www/lighttpd/"
        server.port = 3000
        mimetype.assign = (
            ".html" => "text/html",
            ".txt"  => "text/plain",
            ".jpg"  => "image/jpeg",
            ".png"  => "image/png"
        )

    I was just trying to get the basic example page working.
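
    On Fedora the extra suspect beyond classic file permissions is usually SELinux; the lighttpd account is also a system user with a nologin shell, which is why su fails. A few checks worth running (a sketch, using the paths from the post):

        id lighttpd                      # system account; its shell is /sbin/nologin
        sudo -u lighttpd cat /var/www/lighttpd/index.html   # test raw file access as that user
        getenforce                       # "Enforcing" means SELinux may be blocking the files
        ls -Z /var/www/lighttpd          # web content should be labeled httpd_sys_content_t
        sudo restorecon -Rv /var/www/lighttpd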

    Read the article

  • LightTPD and PHP not working if htdocs is outside of the LightTPD folder

    - by Marco83
    I need to set up a simple web server with PHP on Windows XP that a number of different people will use for local testing. I'm using LightTPD 1.4.30-4-IPv6-Win32-SSL and PHP 5.2. So far I've created this folder structure:

        tools/
            LightTPD/
                htdocs/
            PHP/

    I set up PHP as CGI and the document root as server_root + "/htdocs". It works fine (well, it's slow, but I don't want to bother with FastCGI for now :) ). My problem is when I try to put htdocs outside of the LightTPD folder, like this:

        htdocs/
        tools/
            LightTPD/
            PHP/

    I update the document root to server_root + "/../../htdocs", and while static HTML pages work fine, PHP pages stop working (they return "No input file specified"). I literally just change the document root; I didn't change anything in php.ini or anywhere else. Please also note that I left doc_root, user_dir and cgi.force_redirect at their default values in php.ini, and it works when htdocs is inside LightTPD but not when I move it outside. Any idea why it's breaking? Here's my lightTPD.conf:

        server.modules = (
            "mod_access",
            "mod_accesslog",
            "mod_alias",
            "mod_cgi",
            "mod_status",
        )
        include "variables.conf"
        include "mimetype.conf"

        # THIS WORKS
        server.document-root = server_root + "/htdocs"
        # THIS DOESN'T
        #server.document-root = server_root + "/../../htdocs"

        server.upload-dirs = ( temp_dir )
        index-file.names = ( "index.php", "index.pl", "index.cgi", "index.cml",
                             "index.html", "index.htm", "default.htm" )
        server.event-handler = "libev"
        url.access-deny = ( "~", ".inc" )
        $HTTP["url"] =~ "\.pdf$" {
            server.range-requests = "disable"
        }
        static-file.exclude-extensions = ( ".php", ".pl", ".cgi" )
        server.errorlog = server_root + "/logs/error.log"

        ######### Options that are good to be but not necessary to be changed #######
        dir-listing.activate = "enable"

        #### CGI module
        cgi.assign = ( ".php" => server_root + "/../PHP/php-cgi.exe" )
        status.status-url = "/server-status"
        status.config-url = "/server-config"
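
    A hedged guess at the cause: "No input file specified" is php-cgi's standard complaint when it cannot open the SCRIPT_FILENAME it was handed, and a document root containing "/../../" may not resolve to a canonical Windows path by the time php-cgi sees it. Worth trying an absolute root instead of the relative one (path illustrative):

        # in lightTPD.conf, avoid building the root from server_root + "/../../"
        server.document-root = "C:/testing/htdocs"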

    Read the article

  • deploy git project and permission issue

    - by nixer
    I have a project hosted with gitolite on my own server, and I would like to deploy the whole project from the gitolite bare repository to an Apache-accessible place via a post-receive hook. The hook contains:

        echo "starting deploy..."
        WWW_ROOT="/var/www_virt.hosting/domain_name/htdocs/"
        GIT_WORK_TREE=$WWW_ROOT git checkout -f
        exec chmod -R 750 $WWW_ROOT
        exec chown -R www-data:www-data $WWW_ROOT
        echo "finished"

    The hook doesn't finish cleanly; it reports:

        chmod: changing permissions of `/var/www_virt.hosting/domain_name/file_name': Operation not permitted

    which means git doesn't have sufficient rights. The git source path is /var/lib/gitolite/project.git/, which is owned by gitolite:gitolite, and with those permissions Redmine (running as the www-data user) can't reach the git repository to fetch changes. The whole project should be placed in /var/www_virt.hosting/domain_name/htdocs/, which is owned by www-data:www-data. What should I change so that the post-receive hook works and Redmine can read the repository? What I did:

        # id www-data
        uid=33(www-data) gid=33(www-data) groups=33(www-data),119(gitolite)
        # id gitolite
        uid=110(gitolite) gid=119(gitolite) groups=119(gitolite),33(www-data)

    That did not help. I want Apache (viewing the project), Redmine (reading the project's source files under git), and git (deploying to the www-data-accessible path) to all work without problems. What should I do?
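
    Two things stand out in the hook itself. First, exec replaces the shell process, so the chown line and the final echo never ran at all. Second, chown to www-data needs root, and chmod fails on files the gitolite user doesn't own. A group-based sketch that avoids taking ownership entirely (assumes a one-time root cleanup first, e.g. chown -R gitolite:www-data on the htdocs tree):

        #!/bin/sh
        echo "starting deploy..."
        WWW_ROOT="/var/www_virt.hosting/domain_name/htdocs/"
        GIT_WORK_TREE="$WWW_ROOT" git checkout -f
        # keep files owned by gitolite; give Apache's group read/traverse
        # access instead of trying to chown (which requires root)
        chgrp -R www-data "$WWW_ROOT"
        chmod -R g+rX "$WWW_ROOT"
        echo "finished"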

    Read the article

  • How to safely use grub rescue> in Fedora 16? System does not boot anymore

    - by YumYumYum
    When I boot my PC I get this in my Fedora 16 distro. I have tried the following, but none of it lets me boot any more. Any help please? I am completely blocked.

        Grub loading.
        Welcome to GRUB!
        error: file not found.
        Entering rescue mode...
        grub rescue>

        grub rescue> ls
        (hd0) (hd0,gpt3) (hd0,gpt2) (hd0,gpt1)
        grub rescue> ls (hd0,gpt2)/
        ./ ../ lost+found/ memtest86+-4.20 grub2/
        System.map-3.1.0-0.rc3.git0.0.fc16.i686 config 3.1.0.0.rc3.git0.0.fc16.i686
        grub/ vmlinuz-3.1.0.0.rc3.git0.0.fc16.i686 elf-memtest86+-4.20
        initramfs-3.1.0.0.rc3.git0.0.fc16.i686.img
        initramfs-3.1.0.0.rc4.git0.0.fc16.i686.img
        System.mpa-3.1.0.0.rc3.git0.0.fc16.i686 config-3.1.0.0.rc3.git0.0.fc16.i686
        vmlinuz-3.1.0.0.rc3.git0.0.fc16.i686
        grub rescue> set prefix=(hd0,gpt2)/boot/grub
        grub rescue> set root=(hd0,gpt2)
        grub rescue> insmod normal
        error: unknown filesystem.     (or sometimes: error: file not found.)
        grub rescue> normal
        unknown command 'normal'
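
    Worth noting from the ls output above: grub2/ sits at the top level of (hd0,gpt2), which suggests gpt2 is a separate /boot partition, so the prefix should not include /boot. A sketch of the rescue sequence under that assumption:

        grub rescue> set prefix=(hd0,gpt2)/grub2
        grub rescue> set root=(hd0,gpt2)
        grub rescue> insmod normal
        grub rescue> normal
        # once booted, reinstall permanently (Fedora 16 tools):
        #   grub2-install /dev/sda
        #   grub2-mkconfig -o /boot/grub2/grub.cfg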

    Read the article

  • nginx with fail2ban and mod_security

    - by Mahesh
    I forgot to update my fail2ban config when I moved from Apache to nginx. Today I got a lot of calls from a single IP: it tried to access login pages with POST and GET methods, tried to use nginx as a proxy (GET http://...), scanned the images, js and css folders, and tried to inject -d url_allow_fopen=1 and similar. Most of the calls ended in 404. I rate-limited nginx like this:

        http {
            limit_req_zone $binary_remote_addr zone=app:10m rate=5r/s;
            ...
            server {
                ...
                location / {
                    limit_req zone=app burst=50;
                }

    I got approximately 50 requests per second from that IP, so I updated my nginx config as above. Will it avoid too many connections per second now? I have also updated my fail2ban jail.local to support nginx, but I am confused by nginx-noscript.conf:

        [Definition]
        failregex = ^<HOST> -.*GET.*(\.php|\.asp|\.exe|\.pl|\.cgi|\scgi)
        ignoreregex =

    I am serving PHP with nginx. I checked Apache's noscript.conf, which has the .php extension in it too. I tested the settings above before restarting fail2ban and got thousands of IPs matched; when I removed php, nothing matched. Do I need .php| in nginx-noscript.conf? Does using mod_security and fail2ban together cause any problems? While searching today I learned that mod_security is available for nginx too, so I am planning to use it as well.
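
    On the noscript question: that filter's purpose is to ban hosts probing for script types the server does not serve, so on a site that legitimately serves PHP, keeping \.php| matches normal traffic -- exactly the thousands of hits seen in testing. A sketch with PHP dropped:

        [Definition]
        # ban only requests for script types this nginx never serves
        failregex = ^<HOST> -.*GET.*(\.asp|\.exe|\.pl|\.cgi|\scgi)
        ignoreregex =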

    Read the article

  • 100% CPU when doing 4 or more concurrent requests with Magento

    - by pancake
    Currently I'm having trouble with a server running Magento; it's unbelievably slow. It's a VPS with a few Magento installations on it used for development, so I'm the only one using them. When I make 4 requests 2 seconds apart, I'm finished in 10 seconds. Slow, but still within the limits of my patience. When I make 4 "concurrent" requests, however (opening 4 tabs in a row, very quickly), all four cores go to 100% and stay there for about a minute. How is this possible? I know there are a lot of possibilities here, so any tips on how to make an Apache/PHP server go faster are also welcome. It used to go a lot faster, and I've also tried APC, but it kept causing problems (PHP errors, something with memory pools) so I've disabled it. By the way, the Magento cache is off and compiling is also off. I know this makes Magento slower than usual, but I don't think a 60-second response time is normal for any Magento installation.

    Virtual hardware: 4 cores and 4096 MB RAM; swap is never used (checked with htop); 100 GB disk space, of which 10% is in use.

    Software: Debian 6, DirectAdmin and Apache custombuild, PHP 5.2.17 (CLI).

    If you need more info, please tell me how to get it, because I probably don't know how. I know how to use the Linux command line and quite a few commands, but my experience with managing a server is limited.
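
    A short diagnostic pass that narrows down where the CPU goes (a sketch; the URL is a placeholder). The "memory pools" errors mentioned are APC's pool-exhaustion warnings, which usually mean apc.shm_size was too small rather than APC being unusable:

        # reproduce the load and watch user vs system vs iowait CPU
        ab -n 8 -c 4 http://dev-shop.example/ &
        vmstat 1 15
        # see which processes spike: apache/php or mysqld
        top -b -n 3 | head -40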

    Read the article

  • Ubuntu + lighttpd: Server requests taking ages

    - by ctrl_freak
    I've had an issue since upgrading my distro a couple of weeks ago from Hardy: receiving data after making a request stalls for increasing intervals of nothing, as you can see from the picture below.

        http://i49.tinypic.com/2w5lvr9.png

    I have since reinstalled fresh from an Ubuntu 10.04 Server (i386) disk, but am still having the same issues. I'm running a lighttpd, MySQL, PHP5 stack. The surprising thing is that local browsing using lynx is super fast, as expected. Initially, after reinstalling, I copied over the old configuration files from the previous installation, but I have since reinstalled lighttpd and rebuilt the config file from scratch. The only correlation I could find was that I had attempted installation of ionCube and Zend Optimizer for a script I was testing; however, I would think that could no longer have an impact, seeing as I had reinstalled the OS. I have also removed Suhosin just in case, but it had no impact. I'm thinking it possibly has something to do with networking, but I wouldn't know where to start. The server is manually assigned an IP by its MAC address on the router. The fact that the delay seems to grow exponentially (to a point) worries me. I've tried strace'ing the lighttpd and MySQL processes, but I couldn't see anything obvious, not that I'd really know what I'm looking for. RAM and CPU usage don't seem to be out of the ordinary, but I can't say they're perfect. I'm hoping someone has experienced the same, or can point me in a direction, as searching has proved fruitless since I don't know anything specific. Config files can be posted if requested.
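
    "Fast locally, stalls remotely, started after a distro/kernel upgrade" sometimes comes down to NIC offload or path-MTU trouble rather than the web stack. Two low-risk things to rule out (a sketch; eth0 and the client host are assumptions):

        # disable segmentation offloads, then re-test from a remote client
        sudo ethtool -K eth0 tso off gso off
        # check for a path-MTU problem (1472 + 28 header bytes = 1500)
        ping -M do -s 1472 remote.client.example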

    Read the article

  • PHP unable to allocate memory.

    - by AlReece45
    On my way to the office this morning, every website on our shared VPS started giving the same error (several times; not the typical memory_limit error, which is fatal):

        Warning: Unknown: Unable to allocate memory for pool. in Unknown on line 0

    The shared server is a 64-bit OpenVZ container running cPanel. There are only ~6 VPSes on the host; this is the largest one at only 4 GB. The host itself has 24 GB RAM. As the graphs below show, memory usage on both the host and the VPS is rather low, and CPU usage, disk and host all seem to be normal. RlimitMem was set to 583653034, yet the memory usage is about the same as it usually is. Apache 2.2, PHP 5.2 (mod_php). Restarting Apache has corrected the problem for now. However, I'd like to prevent it from happening again, and I'm not sure what was limiting the memory. There seems to be plenty of memory: what caused this error?

    VPS Memory Usage [graph]
    Host Memory Usage [graph]

    APC Information:

        apc.ttl=0
        apc.shm_size=0
        apc.mmap_file_mask=(blank)
        1 Segment(s) with 32.0 MBytes (mmap memory, pthread mutex locking)
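
    The quoted warning is APC's, raised when its shared-memory segment fills up; with apc.ttl=0 APC never reclaims slots held by stale entries, so the 32 MB segment eventually runs out no matter how much RAM the VPS has free. A plausible php.ini adjustment (values illustrative):

        apc.shm_size = 64    ; MB on this APC generation; newer builds accept "64M"
        apc.ttl = 7200       ; let slots held by stale entries be reclaimed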

    Read the article

  • ssh Password-less login to multiple machines when you already have one

    - by tandu
    I'm a little bit confused about setting up a password-less login for multiple machines to begin with, but I think I could do it from scratch. The problem is that I already have it set up for one machine, and I don't want that to be blown away when I set it up for the other machine. Let's clarify:

        Machine A: the machine I'm connecting from
        Machine B: the machine I'm connecting to; password required
        Machine C: the machine I'm connecting to; password-less ssh

    I have read some tutorials on setting up password-less ssh to a certain site, but they usually start with "move id_rsa out of the way so it doesn't get blown away", and then at the end of the tutorial it's never moved back. If I had no help at all, here is what I would do:

        Log into B
        ssh-keygen -t rsa -f ~/id_rsa.other
        scp id_rsa.other.pub A:~/.ssh
        echo "Host A \n Identity File ~/.ssh/id_rsa.other" > ~/.ssh/config

    (Note that I realize these commands may not be exactly correct, but this is just the idea.) What I'm not quite clear on is whether I need to update the config for A, B, or both. I'm fairly certain that to do a password-less login from A to B, it is A that needs the public key .. but I also suppose I need B to use the correct id_rsa file for that public key. Finally, I don't want the password-less login for C to be affected at all .. it's using id_rsa. Am I going wrong anywhere?
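
    A sketch of the usual arrangement (hostnames assumed): the new key pair lives on A, its public half is appended to B's ~/.ssh/authorized_keys, and A's ~/.ssh/config picks the right private key per host, so the existing id_rsa used for C is untouched:

        # on machine A
        ssh-keygen -t rsa -f ~/.ssh/id_rsa.other
        ssh-copy-id -i ~/.ssh/id_rsa.other.pub user@B   # appends to B's authorized_keys

        # ~/.ssh/config on machine A
        Host B
            IdentityFile ~/.ssh/id_rsa.other
        Host C
            IdentityFile ~/.ssh/id_rsa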

    Read the article

  • How do I configure Jetty (via jettyrunner) so that it names a character set in the Content-Type response header?

    - by Pointy
    I use Jetty (via the oh-so-handy Jetty Runner) for day-to-day web application testing. One thing I've recently stumbled on is the fact that I don't always get a character set called out in the "Content-Type" response header. I do get it in responses to my application's XMLHttpRequest transactions, but not for plain old pages loaded by <a> links or whatever. I've read a little about how to set up a Jetty config file, but I've never been able to completely understand it; all servlet containers are complicated, and while Jetty is pretty simple, it's just weird enough that I don't grok the overall idea. Thus, all I do to launch my app is run the Jetty Runner .jar file with a couple of simple arguments to set the port number and logfile path, and then I just give it the .war file to run. It works great -- except for the missing character set :-) Anybody have a quick sample config file that might fix this?

    Edit: if it matters, I'm running Jetty 7.0.0 RC3; I've also tried a slightly newer version (still 7.something) with exactly the same issue. All my testing is on Ubuntu.
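
    One container-agnostic sketch, since the gap is on plain pages: declare the charset in the MIME mapping inside the webapp's own WEB-INF/web.xml, which Jetty Runner picks up from the .war without any separate Jetty config file (extension list illustrative):

        <mime-mapping>
            <extension>html</extension>
            <mime-type>text/html; charset=UTF-8</mime-type>
        </mime-mapping>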

    Read the article

  • sudoer scheme to allow useful access to another web developer yet retain future control of a virtual private server

    - by Tchalvak
    Background: Virtual Private Server. I have a virtual private server that I'm looking to host multiple websites on, and to provide access to another web developer. I don't care about putting too many constraints on him, though I wouldn't mind isolating the site he'll be developing from the other sites on the server that I will develop.

    The problem: retaining control. Mainly I want to make sure that I retain control over the server in the future. I want to reserve the ability to create/promote/demote accounts and other administrative functions that don't deal with web software. If I make him an admin, he can sudo su - to become root and remove root control from me, for example.

    I need him not to be able to:
        - take away other admin permissions
        - change the root password
        - control other security/administrative functions

    I would like him to still be able to:
        - install software (through apt-get)
        - restart apache
        - access mysql
        - configure mysql/apache
        - reboot
        - edit web-development configuration files in /etc/

    Other standard setups would be happily considered. I've never really set up a good sudoers file, so simple example setups would be very useful (see the sketch below), even if they're only somewhat similar to the settings I'm hoping for.

    Edit: I have not yet finalized permissions, so standard, useful sudo setups are certainly an option; the lists above are more what I'm hoping I can do, and I don't know whether that setup is even possible. I'm sure people have solved this type of problem before somehow, though, and I'd like to go with something somewhat tested rather than something homegrown.
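
    A minimal sudoers sketch along those lines (the username "dev" is an assumption; edit with visudo). One honest caveat: unrestricted apt-get effectively grants root, since he could install a package that alters anything, so treat that line as a trade-off rather than a hard boundary:

        # /etc/sudoers.d/dev-webtasks
        Cmnd_Alias WEBTASKS = /usr/bin/apt-get, \
                              /usr/sbin/service apache2 *, \
                              /usr/sbin/service mysql *, \
                              /sbin/reboot
        dev ALL=(root) WEBTASKS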

    Read the article

  • mod_perl custom configuration directives don't work when placed in .htaccess alongside a <Location> block

    - by al_l_ex
    I'm trying to complete Redmine's feature request #2693: Use Redmine.pm to authenticate for any directory (1). I don't have much knowledge of all these things and need help. Redmine uses the mod_perl module Redmine.pm for authentication & authorization. This module defines several custom configuration directives. I've successfully modified the patch from (1), and it works when all the config is in <Location>:

        <Location /digischrank/test>
            AuthType basic
            AuthName "Digischrank Test"
            Require valid-user
            PerlAccessHandler Apache::Authn::Redmine::access_handler
            PerlAuthenHandler Apache::Authn::Redmine::authen_handler
            RedmineDSN "DBI:mysql:database=SomedaTaBAse;host=localhost"
            RedmineDbUser "SoMeuSer"
            RedmineDbPass "SomePaSS"
            RedmineProject "digischrank"
        </Location>

    But when I move one of these directives (RedmineProject, see (1)) into a .htaccess file, Redmine.pm doesn't see it! I've tried changing <Location> to <Directory> and adding AllowOverride All. The directives from .htaccess are visible, but the remaining ones from <Directory> are not. I don't want to move all the directives into each .htaccess. When I add <Location> in addition to <Directory>, again only the directives from <Location> are visible. As far as I know, the directives should be merged. Am I missing something?
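
    A hedged pointer at where merging is decided: a mod_perl custom directive carries a req_override mask saying where it may legally appear, and a module-level merge routine (DIR_MERGE) controls whether per-directory values combine with outer ones; if the module lacks a merge function, the innermost config can simply replace the outer one. The declarations look roughly like this:

        my @directives = (
            {
                name         => 'RedmineProject',
                req_override => Apache2::Const::OR_AUTHCFG,  # legal in .htaccess under AllowOverride AuthConfig
                args_how     => Apache2::Const::TAKE1,
                errmsg       => 'RedmineProject <project identifier>',
            },
        );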

    Read the article

  • PHP-FPM processes holding onto MongoDB connection states

    - by Brendan
    For the relevant part of our server stack, we're running:

        NGINX 1.2.3
        PHP-FPM 5.3.10 with PECL mongo 1.2.12
        MongoDB 2.0.7
        CentOS 6.2

    We're getting some strange, but predictable behavior when the MongoDB server goes away (crashes, gets killed, etc). Even with a try/catch block around the connection code, i.e.:

        try {
            $mdb = new Mongo('mongodb://localhost:27017');
        } catch (MongoConnectionException $e) {
            die( $e->getMessage() );
        }
        $db = $mdb->selectDB('collection_name');

    Depending on which PHP-FPM workers have already connected to mongo, the connection state is cached, causing further exceptions to go unhandled, because the $mdb connection handler can't be used. The troubling thing is that the try does not consistently fail for a considerable amount of time, up to 15 minutes later, when -- I assume -- the php-fpm processes die/respawn. Essentially, the behavior is that when you hit a worker that hasn't connected to mongo yet, you get the die message above, and when you hit a worker that has, you get an unhandled exception from $mdb->selectDB('collection_name'); because catch does not run. When PHP is a single process, i.e. via Apache with mod_php, this behavior does not occur. Just for posterity, going back to Apache/mod_php is not an option for us at this time. Is there a way to fix this behavior? I don't want the connection state to be inconsistent between different php-fpm processes.
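
    A sketch of a defensive variant: widen the catch to the driver's base exception class and force a round-trip inside the try, so a worker holding a dead cached connection fails in the same place as a fresh one (database name taken from the post):

        <?php
        try {
            $mdb = new Mongo('mongodb://localhost:27017');
            $db  = $mdb->selectDB('collection_name');
            $db->command(array('ping' => 1));   // forces I/O now, inside the try
        } catch (MongoException $e) {           // base class, also catches MongoConnectionException
            die($e->getMessage());
        }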

    Read the article

  • rsync per-site configuration file?

    - by Scott
    I know how to configure a per-site entry for ssh, but is there any kind of client configuration for rsync that allows per-site configuration options and aliases, or similar shortcuts like .ssh/config? I'm curious because I have a minimal ssh server installed on my Android phone and I also have a minimal rsync tool on it. I'm getting tired of having to root-login onto the phone and symlink both tools to the standard places the Android OS looks for executables, as the ssh server is bare-bones and ships a typical *bear multi-link binary for the basic unix commands (which does not include rsync). I end up having to include --rsync-path=/path/to/rsync/android/files/rsync every time I want to rsync files on my phone, but this path is always the same. I've gotten around it in the meantime with a glob approach in a shell-script wrapper, but this sometimes limits the customization I can do with the rsync call. I'm just wondering if there is anything similar to the .ssh/config file where I can create an alias for my phone (e.g. 'android') so that specifying rsync android:/mnt/sdcard will automatically assume --rsync-path=/blah/blah/blah --no-g --no-p --no-t etc.

    Tre`
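
    As far as I know the rsync client has no per-host config file of its own, but a small shell function gives much the same effect, and the 'android' host alias itself can still live in .ssh/config since rsync runs over ssh (a sketch; the phone-side path is the placeholder from the post):

        # in ~/.bashrc
        rsync_android() {
            rsync --rsync-path=/path/to/rsync/android/files/rsync \
                  --no-g --no-p --no-t "$@"
        }
        # usage:
        rsync_android android:/mnt/sdcard/ ./sdcard-backup/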

    Read the article

  • HTTPS Proxy which answers CONNECT with own certificate

    - by user1109542
    I'm configuring a DMZ with the following scheme:

        Internet - Server A - Security Appliance - Server B - Intranet

    In this DMZ I need a proxy server for http(s) connections from the intranet to the Internet. The problem is that all traffic should be scanned by the security appliance. For this I have to terminate the SSL connection at Server B, proxy it as plain http to Server A through the security appliance, and then send it onwards as https into the Internet. Encryption is then maintained between the client and Server B, and between the target server and Server A; the communication between Server A and Server B is unencrypted. I know about the security risks, and that the client will see a warning about the unknown CA of Server B's certificate. As software I want to use Apache web servers on Server A and Server B. As a first step I tried to configure Server B so that it serves as the endpoint for the SSL encryption, i.e. it has to establish the encryption with the client (answering HTTP CONNECT):

        Listen 8443
        <VirtualHost *:8443>
            ProxyRequests On
            ProxyPreserveHost On
            AllowCONNECT 443

            # SSL
            ErrorLog logs/ssl_error_log
            TransferLog logs/ssl_access_log
            LogLevel debug
            SSLProxyEngine on
            SSLProxyMachineCertificateFile /etc/pki/tls/certs/localhost_private_public.crt

            <Proxy *>
                Order deny,allow
                Deny from all
                Allow from 192.168.0.0/22
            </Proxy>
        </VirtualHost>

    With this proxy, only the CONNECT request is passed through and an encrypted connection is established between the client and the target. Unfortunately, there seems to be no way to configure mod_proxy_connect to decrypt the SSL connection. Is there any way to accomplish that kind of proxying with Apache?
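
    mod_proxy_connect can indeed only tunnel the CONNECT request; terminating it requires a proxy that does TLS interception itself. One commonly used alternative (an assumption here, not part of the original setup) is Squid's ssl-bump, where Server B re-signs traffic with its own CA certificate:

        # sketch of squid.conf fragments (Squid 3.2-era syntax)
        http_port 3128 ssl-bump cert=/etc/pki/tls/certs/proxyCA.pem
        ssl_bump server-first all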

    Read the article

  • HPET missing from available clocksources on CentOS

    - by squareone
    I am having trouble using HPET on my physical machine: it is not available, even though I have enabled it in the BIOS, forced it in grub, and triple-checked that my kernel includes HPET in its compilation.

        Motherboard: Supermicro X9DRW
        Processor: 2x Intel(R) Xeon(R) CPU E5-2640
        SAS Controller: LSI Logic / Symbios Logic SAS2004 PCI-Express Fusion-MPT SAS-2 [Spitfire] (rev 03)
        Distro: CentOS 6.3
        Kernel: 3.4.21-rt32 #2 SMP PREEMPT RT x86_64 GNU/Linux
        Grub: hpet=force clocksource=hpet

    .config file:

        CONFIG_HPET_TIMER=y
        CONFIG_HPET_EMULATE_RTC=y
        CONFIG_HPET=y

    dmesg | grep hpet:

        Command line: ro root=/dev/mapper/vg_xxxx-lv_root rd_NO_LUKS rd_LVM_LV=vg_xxxx/lv_root KEYBOARDTYPE=pc KEYTABLE=us rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=vg_xxxx/lv_swap rd_NO_DM LANG=en_US.UTF-8 rhgb quiet panic=5 hpet=force clocksource=hpet
        Kernel command line: ro root=/dev/mapper/vg_xxxx-lv_root rd_NO_LUKS rd_LVM_LV=vg_xxxx/lv_root KEYBOARDTYPE=pc KEYTABLE=us rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=vg_xxxx/lv_swap rd_NO_DM LANG=en_US.UTF-8 rhgb quiet panic=5 hpet=force clocksource=hpet

    cat /sys/devices/system/clocksource/clocksource0/current_clocksource:

        tsc

    cat /sys/devices/system/clocksource/clocksource0/available_clocksource:

        tsc jiffies

    What is even more confusing is that I have about a dozen other machines that use the same kernel .config and can use HPET fine. I fear it is a hardware issue, but would appreciate any advice or help with getting HPET available. Thanks in advance!
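
    A couple of checks that can separate "the firmware never exposed an HPET" from "the kernel rejected it" (a sketch):

        # was an HPET ACPI table found and a timer registered at all?
        dmesg | grep -i -e hpet -e acpi | head
        # firmware-reserved HPET MMIO window, typically at 0xfed00000
        grep -i hpet /proc/iomem

    If neither shows anything, the BIOS is not exposing the HPET to the OS despite the setting, which would fit the hardware-difference theory.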

    Read the article

  • GlassFish timeout

    - by Stefano
    Environment:

        Windows 2008 Server Edition
        NetBeans 6.7.1
        GlassFish 2.1
        Apache 2.2.15 for win32

    Original problem (almost fixed): an HTTP/1.1 GET request used to send data fails if I wait for more than 30 seconds. What I did: I added these lines to Apache's httpd.conf:

        #
        # Timeout: The number of seconds before receives and sends time out.
        #
        Timeout 9000
        #
        # KeepAlive: Whether or not to allow persistent connections (more than
        # one request per connection). Set to "Off" to deactivate.
        #
        KeepAlive On

    I went to the GlassFish admin panel (localhost:4848), and under Configuration > HTTP Service I set:

        Timeout request: 9000 seconds (was 30)
        Standby time: -1 (was 30 seconds)

    Problem: I am not able to give GlassFish a timeout bigger than 2 minutes for a GET request. I found this article about GlassFish settings, but I'm not able to find WHERE I should put those parameters, or whether they would work. Can anybody help me set this timeout to a higher limit? Maybe it's even a different setting? New attempted solution: I went to the GlassFish admin panel, Configuration > Thread Pools > thread-pool-name, and changed the idle timeout from 120 seconds to 1200 seconds. Then I restarted the GlassFish service (both from Administrative Tools and with asadmin), but it still goes idle after 120 seconds. I even tried restarting the whole server; still no results. Maybe it's some setting in Postgres, or the connection from NetBeans to Postgres through GlassFish?
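
    The same settings can be inspected and changed from the command line, which makes it easier to confirm which timeout is actually in effect (a sketch using GlassFish 2.x dotted names; verify the exact names with the get wildcard first):

        asadmin get "server.http-service.*timeout*"
        asadmin set server.http-service.request-processing.request-timeout-in-seconds=9000
        asadmin set server.http-service.keep-alive.timeout-in-seconds=-1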

    Read the article

  • Tuning Linux + HAProxy

    - by react
    I'm currently rolling out HAProxy on CentOS 6, which will send requests to some Apache HTTPD servers, and I'm having issues with performance. I've spent the last couple of days googling and still can't seem to get past 10k connections/sec consistently when benchmarking (sometimes I do get 30k/sec, though). I've pinned the IRQs of the TX/RX queues for both the internal and external NICs to separate CPU cores and made sure HAProxy is pinned to its own core. I've also made the following adjustments to sysctl.conf:

        # Max open file descriptors
        fs.file-max = 331287
        # TCP Tuning
        net.ipv4.tcp_tw_reuse = 1
        net.ipv4.ip_local_port_range = 1024 65023
        net.ipv4.tcp_max_syn_backlog = 10240
        net.ipv4.tcp_max_tw_buckets = 400000
        net.ipv4.tcp_max_orphans = 60000
        net.ipv4.tcp_synack_retries = 3
        net.core.somaxconn = 40000
        net.ipv4.tcp_rmem = 4096 8192 16384
        net.ipv4.tcp_wmem = 4096 8192 16384
        net.ipv4.tcp_mem = 65536 98304 131072
        net.core.netdev_max_backlog = 40000
        net.ipv4.tcp_tw_reuse = 1

    If I use ab to hit a webserver directly, I easily get 30k/s connections. If I stop the webservers and use ab to hit HAProxy, I get 30k/s connections, but obviously that's useless. I've also disabled iptables for now, since I read that nf_conntrack can slow everything down -- no change -- and I've disabled the irqbalance service. The fact that I can hit each individual device at 30k/s makes me believe the tuning of the servers is OK and that it must be something in the HAProxy config. Here's the config, which I've built from reading tuning articles, etc.: http://pastebin.com/zsCyAtgU The server is a dual Xeon E5-2620 (6 cores each) with 32 GB of RAM, running CentOS 6.2 x64. The private and public interfaces are on separate NICs. Anyone have any ideas? Thanks.
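
    Since the pastebin isn't quoted here, one thing worth double-checking in it (a sketch): HAProxy caps concurrent sessions at maxconn 2000 by default, and both the global level and the frontend/defaults level need raising, along with the file-descriptor limit:

        global
            maxconn  100000
            # ulimit-n is normally derived from maxconn; set explicitly if needed
        defaults
            maxconn  100000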

    Read the article

  • Subversion COPY/MOVE - File not found: transaction 'XXX-XX'

    - by theplatz
    I'm attempting to create a branch in one of my Subversion repositories and keep running into an error. No matter what I do, I keep getting the following:

        File not found: transaction '3062-2e6', path '/Software/XXXXXX/branches/testbranch'

    I've noticed that the first part of the transaction id in the message is the last successfully committed revision in the repository. My Apache logs don't give much more information:

        [Wed Nov 24 14:10:38 2010] [error] [client x.x.x.x] Could not MOVE/COPY /svn/p070361/!svn/bc/3049/Software/SXXXXXX/trunk.  [404, #0]
        [Wed Nov 24 14:10:38 2010] [error] [client x.x.x.x] Unable to make a filesystem copy.  [404, #160013]
        [Wed Nov 24 14:10:38 2010] [error] [client x.x.x.x] File not found: transaction '3059-2e2', path '/Software/XXXXXX/branches/testbranch'  [404, #160013]

    This is all happening on a server with an nginx frontend that proxies to Apache for the Subversion bits. Other repositories are able to branch fine, and I was able to create the branch using file:/// from the command line on the server where this occurs. The permissions on this repository match every other repository, and disk space is not an issue.
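
    A known gotcha with DAV COPY/MOVE behind a reverse proxy (a sketch; the backend address is an assumption): the client's Destination header carries the public URL, which the proxied Apache doesn't recognize as its own, so the copy target lookup fails. Rewriting the header in nginx is the usual fix:

        location /svn/ {
            # DAV COPY/MOVE send an absolute Destination URL; make it match
            # the scheme/host the backend Apache thinks it is serving
            set $dest $http_destination;
            if ($http_destination ~ "^https://(.*)$") {
                set $dest "http://$1";
            }
            proxy_set_header Destination $dest;
            proxy_pass http://127.0.0.1:8080;
        }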

    Read the article

  • haproxy + nginx: https trailing slashes redirected to http

    - by user1719907
    I have a setup where HTTP(S) traffic goes from HAProxy to nginx:

                   HAProxy      nginx
        HTTP  ---->  :80   ---->  :9080
        HTTPS ---->  :443  ---->  :9443

    I'm having trouble with implicit redirects caused by trailing slashes going from https to http, like this:

        $ curl -k -I https://www.example.com/subdir
        HTTP/1.1 301 Moved Permanently
        Server: nginx/1.2.4
        Date: Thu, 04 Oct 2012 12:52:39 GMT
        Content-Type: text/html
        Content-Length: 184
        Location: http://www.example.com/subdir/

    The reason obviously is HAProxy working as the SSL unwrapper, so nginx sees only http requests. I've tried setting X-Forwarded-Proto to https in the HAProxy config, but it does nothing. My nginx setup is as follows:

        server {
            listen 127.0.0.1:9443;
            server_name www.example.com;
            port_in_redirect off;
            root /var/www/example;
            index index.html index.htm;
        }

    And the relevant parts from the HAProxy config:

        frontend https-in
            bind *:443 ssl crt /etc/example.pem prefer-server-ciphers
            default_backend nginxssl

        backend nginxssl
            balance roundrobin
            option forwardfor
            reqadd X-Forwarded-Proto:\ https
            server nginxssl1 127.0.0.1:9443
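
    nginx builds that Location header itself and pays no attention to X-Forwarded-Proto, so one workaround (a sketch) is to repair the scheme on the way back out, in the HTTPS backend, using HAProxy's response-header rewrite:

        backend nginxssl
            balance roundrobin
            option forwardfor
            reqadd X-Forwarded-Proto:\ https
            # rewrite redirects issued by the plain-http backend back to https
            rspirep ^Location:\ http://(.*)  Location:\ https://\1
            server nginxssl1 127.0.0.1:9443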

    Read the article

  • Outbound ports to allow through firewall - core requirements

    - by dunxd
    This question was asked before, but in a rather general way; I'm asking more specifically based on my current requirements. We have a number of remote offices made up of a bunch of PCs and an ASA 5505 which is used as firewall and VPN termination point. In the offices we share the internet connection with one or more other organisations, over whom we have very little control aside from the config on the ASAs. For a bunch of reasons I'd like to lock down these ASA 5505s to allow outbound traffic only to ports used by applications we know we need. I'm putting together a standard config to roll out to all the ASAs, and if we need to open up ports for the other orgs we can do it on request, but I want to leave open the most commonly required ports so we can get up and running without waiting on other folks' technical staff to get back to us. I plan to allow the following TCP ports to support email and web access, which I know everyone will need:

        POP3 (110 and 995)
        HTTP (80 and 443)
        IMAP4 (143 and 993)
        SMTP (25 and 465)

    The question really is: what other ports do I need to leave open to allow for "normal" working? I've seen UDP port 53 for DNS as one. Are there any others that would be worth opening up? Just to note: I'll also be setting up monitoring systems to keep an eye on the ports we do allow -- any of the above could be misused, of course -- and we'll back all this up with signed agreements. But I'm aiming for a technical solution where I don't have to start out with the full requirements of everyone we share connections with. See also: outbound ports that are always open.
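
    A sketch of what that base policy could look like in ASA syntax, covering the listed ports plus DNS (the ACL name and interface are assumptions):

        access-list OUTBOUND extended permit tcp any any eq www
        access-list OUTBOUND extended permit tcp any any eq https
        access-list OUTBOUND extended permit tcp any any eq smtp
        access-list OUTBOUND extended permit tcp any any eq 465
        access-list OUTBOUND extended permit tcp any any eq pop3
        access-list OUTBOUND extended permit tcp any any eq 995
        access-list OUTBOUND extended permit tcp any any eq imap4
        access-list OUTBOUND extended permit tcp any any eq 993
        access-list OUTBOUND extended permit udp any any eq domain
        access-group OUTBOUND in interface inside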

    Read the article

  • Linksys SFE2000 Interface

    - by boburob
    I have a real problem with a layer 3 Linksys switch. First, every time it loses power it seems to reset back to an older config. Not only that, but when this happens it loses the interface settings on one subnet. This would not be a problem, but I am completely unable to get to the web interface on the working side: it allows me to log on and then just displays a blank screen. I have tried this with:

        IE6, IE7, IE8, IE9
        Firefox 3.5
        Opera
        Chrome

    all with the same results, except for Opera, which loads half the interface but nothing I can really use. I really need to get onto this switch so I can sort out routing and VLAN-tagged ports, so if anyone has any ideas on either of these issues please let me know ASAP! Thanks! Also, due to its location and my lack of laptops with serial connections, I cannot PuTTY into it over serial.

    Update: I looked into this a bit more, and it seems this model of switch does not save the running config to boot unless you explicitly save it yourself, which explains the first issue. The broken interface is more worrying, however!
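
    For the first issue, on these small-business Linksys/Cisco switches the running config is persisted from the CLI roughly like this (a sketch; reachable over telnet/ssh even when the web UI is broken):

        # copy running-config startup-config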

    Read the article

  • How to set up a server without a hosting control panel

    - by A4J
    I have always used a control panel on my dedicated servers -- from cPanel to Plesk to Virtualmin -- and I am now considering ditching the CP altogether and manually editing config files. My requirements are fairly simple: I will host multiple sites on the server, some Apache with PHP & MySQL and some Passenger with Rails & Postgres. All will require SMTP/POP email. FTP and stats will not be required. Could someone please give me a quick run-down of what I would need to do, in terms of installing software and configuration? My server will come with a base install of CentOS 6.4 minimal. My thoughts so far (see the sketch below):

        1. Install/update the latest versions of MySQL & Postgres (are they 'safe' out of the box, or do I need to do anything else, like set up root passwords?)
        2. Install Apache & PHP (again, are the base installs good to go or do they require security tweaks?)
        3. Set up nameservers/hostnames/reverse DNS, etc. (any guides on how to do this, please?)
        4. Install Rubygems
        5. Install and configure Dovecot and Postfix (any tips on doing this, or links to how-tos that cover it, please?)
        6. Set up each website -- any links to guides on how to do this?
        7. Install/configure a firewall (or is the default install good to go?)

    Any other tips or advice would be greatly appreciated, as would links to guides or how-tos.
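
    A sketch of the first two steps on a minimal CentOS 6.4 box (package set from the stock repos; run as root). Neither database is fully locked down out of the box: MySQL ships with a blank root password, which mysql_secure_installation fixes:

        yum install mysql-server postgresql-server httpd php php-mysql
        service mysqld start
        mysql_secure_installation        # set MySQL root password, drop test DB, remove anonymous users
        service postgresql initdb        # Postgres needs an initdb before first start on CentOS 6
        service postgresql start
        chkconfig mysqld on; chkconfig postgresql on; chkconfig httpd on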

    Read the article
