Search Results

Search found 10931 results on 438 pages for 'struts config'.

  • NodeJS Supervisord Hashlib

    - by enedebe
    I have a problem with my NodeJS app: it can't find the Hashlib library. I've followed the install instructions more than ten times: clone the repo, run make and make install. NodeJS is installed in the default path, and that's the tricky point: when I launch node app.js from the console it works perfectly. The problem starts when I configure Supervisord to run the app as the same user, with the same config file I have on other working systems, and NodeJS can't find hashlib:

        module.js:337
        throw new Error("Cannot find module '" + request + "'");
        ^
        Error: Cannot find module 'hashlib'

    I'm going crazy. What can I do? Why does launching node from the console work great, but not from Supervisord? Thanks!
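    A common culprit in this situation is that Supervisord does not inherit the shell's environment, so NODE_PATH (which node consults for globally installed modules) is empty when the daemon launches the app. A minimal sketch of a program entry that sets it explicitly; the names and paths are assumptions, so point them at wherever make install actually placed hashlib:

        [program:myapp]
        command=/usr/local/bin/node /path/to/app.js
        user=youruser
        environment=NODE_PATH="/usr/local/lib/node:/usr/local/lib/node_modules"

    Comparing echo $NODE_PATH in the working console session against the daemon's environment is a quick way to confirm this is the difference.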

    Read the article

  • After Redmine install I see only the filesystem

    - by derty
    After installing Redmine, I can only access the filesystem! I have reinstalled Redmine two or three times, in different ways, using these how-tos:

        http://www.redmine.org/projects/redmine/wiki/HowTo_Install_Redmine_using_Debian_package
        http://www.redmine.org/projects/redmine/wiki/HowTo_Install_Redmine_210_on_Debian_Squeeze_with_Apache_Passenger
        http://beeznest.wordpress.com/2012/09/20/installing-redmine-2-1-on-debian-squeeze-with-apache-modpassenger/

    The web server at 10.0.0.14 is going to sit behind a reverse Apache proxy, but for now I'm working directly on the system; that change shouldn't be a problem, as I run a bunch of other services this way. The database does exist and I can connect to it. The configuration file config/database.yml is set up correctly, with the credentials I use to connect as the Redmine user. So, does anyone have an idea why it is not working as I expect?
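    With the Passenger-based how-tos above, seeing a bare directory listing usually means Apache is serving the Redmine tree as static files instead of handing requests to Passenger. A minimal sketch of a vhost that avoids that; the paths are assumptions for the Debian package layout:

        <VirtualHost *:80>
            ServerName redmine.example.com
            DocumentRoot /usr/share/redmine/public
            <Directory /usr/share/redmine/public>
                AllowOverride all
                Options -MultiViews
            </Directory>
        </VirtualHost>

    DocumentRoot must point at Redmine's public/ directory, not the application root, and mod_passenger must actually be enabled (a2enmod passenger) for the directory listing to turn into the application.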

    Read the article

  • Apache server as reverse proxy is removing xmlns info from html tag

    - by Johnco
    I have a Java application running in Tomcat, in front of which I have an Apache HTTP server as a reverse proxy. However, the proxy is removing all xmlns data from the html tag, which breaks Facebook's FBML: it never gets parsed. My current config is as follows:

        ProxyRequests off
        ProxyHTMLDocType XHTML
        ProxyPassReverseCookiePath /cas /
        <Location />
            ProxyPass http://localhost:8080/cas
            ProxyPassReverse http://localhost:8080/cas
        </Location>
        ProxyHTMLURLMap /cas /
        SetOutputFilter proxy-html
        <Proxy *>
            Order deny,allow
            Allow from all
            Satisfy all
        </Proxy>

    Thanks in advance.

    Read the article

  • Apache2 proxypass

    - by gatsby
    I'm trying to figure out why my Apache2 reverse proxy doesn't work; I hope someone can clarify. I'm using an Apache server as a gateway with ProxyPass; 10.184.1.2 is its IP. These are the directives I inserted into the 000-default config file:

        ProxyPass / http://192.168.102.31/
        ProxyPassReverse / http://192.168.102.31/

    The host 192.168.102.31 is an internal IP on a subnet which is not reachable directly by clients, only by the Apache gateway. When I try to access an address such as http://apache_gateway_name/dir, I see the client trying to reach the 192.168.102.31 address directly, and of course a timeout occurs. Can someone help? Best regards
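    If the backend application builds absolute URLs (redirects, links) from the Host header it receives, the client ends up chasing the internal address even though ProxyPassReverse rewrites the Location response header. A sketch of the usual first fix, under the assumption that the backend honours the forwarded Host:

        ProxyPreserveHost On
        ProxyPass / http://192.168.102.31/
        ProxyPassReverse / http://192.168.102.31/

    Note that ProxyPassReverse only rewrites HTTP response headers, not URLs embedded in HTML bodies, so links the application writes into its pages need to be relative or built from the forwarded host.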

    Read the article

  • HP Blade ILO not responding in chassis ILO

    - by bobinabottle
    I have just started at a new company and I am inspecting their current server config. The HP 480c blades in a c7000 chassis aren't responding on ILO, although the chassis ILO is working fine. I have a feeling the last sysadmin configured the blades' ILO with static IPs and they are not responding correctly. The servers are sitting in a datacenter and I'm hoping to be able to fix this remotely. Is there a way that I can change the ILO static IPs for the blades remotely? If not, and I do have to go on-site, how do I change the ILO IP addresses for the blades? (Sorry, I'm not very familiar with HP servers.) Thanks for your help!
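    If you can still SSH into the c7000's Onboard Administrator (which the working chassis ILO suggests), the blade iLOs can usually be inspected and re-addressed from there via EBIPA. A sketch from memory of the OA CLI; treat the exact syntax and all addresses as assumptions and run SHOW EBIPA first:

        SHOW EBIPA                                  # current bay-to-address assignments
        connect server 1                            # drop onto blade 1's own iLO to inspect it
        SET EBIPA SERVER 10.0.0.51 255.255.255.0 1  # address, netmask, bay
        ENABLE EBIPA SERVER 1
        SAVE EBIPA

    One caveat: EBIPA only hands out addresses to iLOs set to DHCP; a blade iLO pinned to a static IP keeps its own config, in which case connect server <bay> and resetting the network settings from that iLO's CLI is the fallback.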

    Read the article

  • run two apache servers on one computer

    - by harry_T
    I would like to run two XAMPP Apache servers and MySQL on one Windows computer. My first idea was to run one under the directory XAMPP, the other under XAMPP_B. Why, you ask? I have two applications that have to be in the "root" directory of localhost. Both servers do not have to be active at the same time, so I don't think I will have any conflicts. I will have to modify my.cnf in MySQL, plus httpd.conf, apache_start, and maybe other config files as well. Or maybe someone can suggest a better way...
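    For what it's worth, even if both stacks ever do run at once, the only hard conflicts are the listening ports. A sketch of the handful of lines to change in the second copy; the port numbers are arbitrary assumptions:

        # XAMPP_B\apache\conf\httpd.conf
        Listen 8080
        ServerName localhost:8080

        # XAMPP_B\mysql\bin\my.ini
        [mysqld]
        port=3307

    With that, the second root application is reachable at http://localhost:8080/ while the first keeps plain http://localhost/, and neither install needs to be stopped to edit the other.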

    Read the article

  • Grub loading. The symbol ' ' not found. Aborted. Press any key...

    - by John
    Hi there, I have a dual-boot system on a Dell XPS 9000 with Windows 7 and Ubuntu. But after I performed a system backup on it, as requested by Windows 7, I am no longer able to boot the computer; instead, right after the BIOS I get the following message:

        Grub loading.
        The symbol ' ' not found.
        Aborted. Press any key...

    I tried changing the BIOS boot config to start with the hard drive, and it still returned the same message. Using the Windows boot disk only asks me to do another system backup or threatens to delete my hard drive completely. The only solution I have so far is to reinstall Ubuntu, but that leaves two additional copies of Ubuntu on my computer. Is there a simpler way to fix the situation so I can actually boot into Windows? Thanks so much.
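    GRUB can usually be repaired in place from an Ubuntu live CD without reinstalling anything. A sketch, under the assumption that Ubuntu's root partition is /dev/sda5 and GRUB lives in the MBR of /dev/sda (check the real layout with sudo fdisk -l first):

        sudo mount /dev/sda5 /mnt
        sudo grub-install --root-directory=/mnt /dev/sda
        # after rebooting into Ubuntu, regenerate the boot menu:
        sudo update-grub

    If the Windows entry is missing from the menu afterwards, update-grub's os-prober step normally re-detects it on the next run.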

    Read the article

  • Caching DNS server (bind9.2) CPU usage is so so so high.

    - by Gk
    Hi, I have a caching-only DNS server which gets ~3k queries per second. Here are the specs: Xeon dual-core 2.8GHz, 4GB of RAM, CentOS 5.x (kernel 2.6.18-164.15.1.el5PAE), bind 9.4.2. rndc status reports "recursive clients: 666/4900/5000", and about 300 new queries (not in cache) arrive per second. Bind always uses 100% of one core on a single-threaded config; after I recompiled it as multi-threaded, it uses nearly 200% across two cores :( No iowait, only sys and user. I searched around but didn't find any info about how bind uses CPU. Why does it become the bottleneck? One more thing, here is the RAM usage:

        cat /proc/meminfo
        MemTotal:     4147876 kB
        MemFree:      1863972 kB
        Buffers:       143632 kB
        Cached:        372792 kB
        SwapCached:         0 kB
        Active:       1916804 kB
        Inactive:      276056 kB

    I've set max-cache-size to 0 to make sure bind can use as much RAM as it wants, but it always stops at ~2GB. Since we constantly receive uncached queries, RAM should theoretically be exhausted, but it isn't. Do you have any idea? TIA, -Gk
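    For what it's worth, a resolver doing ~300 cache misses a second spends most of its CPU on recursion bookkeeping, and the ~2GB ceiling is more consistent with a 32-bit (PAE) named running into its per-process address space than with any cache setting. A sketch of the options usually examined first; BIND 9.4 syntax, and the values are assumptions to adjust:

        options {
            recursive-clients 10000;   // the 666/4900/5000 line is the soft/hard ceiling
            max-ncache-ttl 3600;       // negative-cache churn is a common hidden CPU sink
        };

    A 64-bit build (or 64-bit OS) lifts the 2GB address-space cap if cache size turns out to be the real concern.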

    Read the article

  • My php homepage downloads index.php instead of being processed on Gandi.net

    - by alekone
    If I go to the homepage of my website, http://www.website.com (on a brand-new server), the index.php gets downloaded instead of processed. I don't have the same problem in other folders. My .htaccess reads:

        AddHandler php5-script .php

    What could this be? I suspect it's something with the PHP config or the .htaccess, but I'm not able to figure it out. Help, please! Edit: I don't know if this helps: it's a WordPress installation, and I have this problem only on the public part of the website, not on the admin (that renders correctly).
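    Handler names differ from host to host, so an AddHandler line that works on one server can make another serve PHP source as a download. A sketch of alternatives to try one at a time in .htaccess; which (if any) is right for this host is an assumption only the hosting docs can confirm:

        # classic mod_php association:
        AddType application/x-httpd-php .php
        # or, on FastCGI-based hosts:
        AddHandler fcgid-script .php

    Since the admin pages render correctly without this .htaccess in scope, removing the AddHandler line entirely and letting the server's default PHP handler apply is also worth a try.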

    Read the article

  • Which are the most important directories to backup on a Linux server?

    - by QAH
    Hello everyone! I'm running an Ubuntu 9.10 Linux server. I'm trying to find a way to back up the machine while it is running, and from what I've seen, this rules out the disk-clone utilities: all the disk-cloning tools I have found for Linux require you to reboot into a special live CD. So my question is this: what is the best solution for backing up the system while it is running? Also, I don't care much about the OS config; I just want to be able to keep my stored files and the programs I have installed. Thanks
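    For a live machine, a file-level copy of the directories that actually hold state is usually enough. A sketch; the destination path is an assumption, and databases under /var should be dumped rather than copied raw while running:

        # config, user files, service data, root's home
        sudo rsync -aAX /etc /home /var /root /backup/
        # the installed-package list makes the programs restorable
        dpkg --get-selections > /backup/packages.list

    Restore is then a fresh install plus dpkg --set-selections < packages.list, apt-get dselect-upgrade, and an rsync back of the data.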

    Read the article

  • How to get the Three.js import/export scripts into Blender on Ubuntu?

    - by Bane
    I have been working with 3D primitives in Three.js, but now I want to import some models. I plan on using Blender, which I have just installed with:

        sudo apt-get install blender

    However, I was instructed to put the import/export scripts in the .blender/2.62/scripts/addons folder, but it does not exist! .blender/2.62 does exist, but it only has a config folder. The next thing I did was manually change the script search path in Blender's preferences from // to a scripts directory in my home folder, which contained the required io_mesh_threejs folder (which, in turn, had the .py scripts inside). I saved the changes and restarted Blender, but still nothing: in the menu there is no mention of Three.js at all! What do I do? It would be great if I knew the installation path for Blender, because maybe I could put those scripts there manually. Where should it be installed? EDIT: these are the scripts I'm talking about, along with the instructions: https://github.com/mrdoob/three.js/tree/master/utils/exporters/blender.
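    The apt package keeps its bundled add-ons under the system tree rather than in ~/.blender, and dpkg can reveal exactly where. A sketch; the /usr/share path is an assumption for the Debian/Ubuntu package, so trust whatever dpkg actually prints:

        # find the packaged add-ons directory
        dpkg -L blender | grep addons
        # then drop the exporter beside the bundled add-ons (use the path from above)
        sudo cp -r io_mesh_threejs /usr/share/blender/scripts/addons/

    Even once the files are in the right place, the exporter still has to be ticked under User Preferences / Add-Ons before it appears in the File / Export menu.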

    Read the article

  • Ubuntu and MySQL server. Something isn't allowing me to connect

    - by acidzombie24
    I have a question about MySQL settings (http://serverfault.com/questions/94054/remote-connections-and-mysql-on-ubuntu/94088#94088), and now I want to figure out why I cannot connect. I made sure bind-address was commented out. I can ping the server from within the VM, but mysqladmin --protocol=tcp --host=self_ip ping fails from the same VM. I also followed along and checked whether my ports were open, and they look like they are. I set up Samba on that VM and can access it with no problem as well. It looks like Ubuntu does not have a firewall either (I figured this out before), so I am stumped as to why the server isn't allowing my connection. Apparently the same config file works on another person's setup: http://www.pastie.org/742545. I am using Ubuntu 6.06 LTS purely for 'support' reasons. So hopefully this will be 'easy'?
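    On MySQL builds of that vintage, the classic culprit besides bind-address is skip-networking, which disables TCP entirely while the local socket keeps working (which would explain ICMP ping succeeding and mysqladmin's TCP ping failing). A sketch of the checks, assuming the server itself is otherwise healthy:

        # is anything listening on the MySQL port at all?
        sudo netstat -tlnp | grep 3306
        # is TCP switched off in the config?
        grep -n 'skip-networking' /etc/mysql/my.cnf

    If skip-networking is present, commenting it out and restarting MySQL (sudo /etc/init.d/mysql restart) should restore TCP connections.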

    Read the article

  • Prevent Internet Explorer 8 from exiting when closing the last tab

    - by LongTTH
    Firefox doesn't exit when I close the last tab, thanks to some customization in about:config. But now my company has to work with a website that only works well in Internet Explorer. I currently use Internet Explorer 8 on Windows XP SP3. So, how do I prevent Internet Explorer 8 from exiting when I close the last tab? I've searched for such a feature for a while but found nothing helpful. FYI, I've given up on my "customization habit"; now I'm trying to live in the IE world... phew...

    Read the article

  • shared web hosting architecture in a university setting

    - by gaspol
    We're in the process of creating a shared web-hosting infrastructure for our university. Departments within the university can host their sites on this infrastructure. We're thinking of setting up multiple load-balanced web servers attached to shared storage (for web content and Apache config files), with database servers behind these web servers. Does anyone have any other suggestions about this? Any recommendations for an alternative setup? Would having cPanel/WHM/Plesk be a good idea to automate account creation/maintenance?
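    One low-maintenance way to wire departments into that kind of setup is a vhost file per department on the shared mount, pulled in by every web node with a glob. A sketch; the paths and names are assumptions:

        # in httpd.conf on every web node
        Include /shared/apache/vhosts/*.conf

        # /shared/apache/vhosts/physics.conf
        <VirtualHost *:80>
            ServerName physics.example.edu
            DocumentRoot /shared/www/physics
        </VirtualHost>

    Onboarding a new department is then one new file plus a graceful reload on each node, which is simple enough to script whether or not a panel like cPanel sits on top.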

    Read the article

  • Sharing an external hard drive in Ubuntu using Samba

    - by cambraca
    /media/MYDISK is where my hard drive is mounted automatically. I created a symlink using:

        ln -s /media/MYDISK /home/camilo/MYDISK
        chmod 777 /home/camilo/MYDISK

    I'm setting up smb.conf like this:

        [myshare1]
        comment = external disk
        browsable = yes
        path = /home/camilo/MYDISK
        guest ok = yes
        read only = no
        create mask = 0775

    Also, in the [global] section I tried adding the following lines:

        follow symlinks = yes
        wide links = yes
        unix extensions = no

    The problem is that when browsing the shared folder in Windows 7, I get a "\\etc\myshare1 is not accessible" error. When I point the path at a regular folder, it works fine; when I point it directly at /media/MYDISK, it shows the same error. EDIT: to make it more interesting, I have no graphical interface, so I need to touch the config files directly.
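    Since the symlink settings are exactly what this setup keeps tripping over, it may be simpler to take the symlink out of the picture and share the mount point itself, forcing file access to a user that can write the disk. A sketch, under the assumption that the camilo account owns the mount:

        [myshare1]
            comment = external disk
            path = /media/MYDISK
            browsable = yes
            guest ok = yes
            read only = no
            force user = camilo

    Running testparm after every edit is also worth the habit: it catches smb.conf typos and prints the share exactly as Samba parses it, which helps with odd errors like the \\etc one above.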

    Read the article

  • Nginx no longer serves uwsgi application behind HAProxy - Looks for static file instead

    - by Ralph
    We implemented our web application using web2py. It consists of several modules offering a REST API at various resources (e.g. /dids, /replicas, ...). The API is used by clients built on requests.py. My problem is that our web app works fine behind HAProxy when hosted by Apache using mod_wsgi, and it also works fine when the clients talk to nginx directly. It doesn't work, though, with HAProxy in front of nginx. My guess is that HAProxy somehow modifies the request, and thus nginx behaves differently, i.e. looks for a static file instead of calling the WSGI container. Unfortunately I can't figure out what exactly is going (wr)on(g). Here are the relevant sections of the three components' config files; at least I guess they are the interesting ones. If you miss anything, please let me know.

    1) haproxy.conf:

        frontend app-lb
            bind loadbalancer:443 ssl crt /etc/grid-security/hostcertkey.pem
            default_backend nginx-servers
            mode http

        backend nginx-servers
            balance leastconn
            option forwardfor
            server nginx-01 nginx-server-int-01.domain.com:80 check

    2) nginx.conf:

        sendfile off;
        #tcp_nopush on;
        keepalive_timeout 65;
        include /etc/nginx/conf.d/*.conf;

        server {
            server_name nginx-server-int-01.domain.com;
            root /path/to/app/;
            location / {
                uwsgi_pass unix:///tmp/app.sock;
                include uwsgi_params;
                uwsgi_read_timeout 600;  # Requests can run for a serious long time
            }
        }

    3) uwsgi.ini:

        [uwsgi]
        chdir = /path/to/app/
        chmod-socket = 777
        no-default-app = True
        socket = /tmp/app.sock
        manage-script-name = True
        mount = /dids=did.py
        mount = /replicas=replica.py
        callable = application

    Now, when I point my clients at nginx-server-int-01.domain.com directly, everything is fine, and the access.log of nginx shows lines like these:

        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/user.ogueta/cnt_mc12_8TeV.16304.stream_name_too_long.other.notype.004202218365415e990b9997ea859f20.user/dids HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 5282 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 5094 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 528 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "GET /dids/mc13_14TeV/dids/search?project=mc13_14TeV&stream_name=%2Adummy&type=dataset&datatype=NTUP_SMDYMUMU HTTP/1.1" 401 73 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "POST /replicas/list HTTP/1.1" 200 713 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"

    But when I switch the clients to go through HAProxy (loadbalancer.domain.com:443), the error.log of nginx shows lines like these:

        2014/08/23 01:26:01 [error] 1705#0: *21231 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21232 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21233 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21234 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21235 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer"
        2014/08/23 01:26:02 [error] 1705#0: *21238 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21239 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21242 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21244 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com"

    As you can see, the requests look the same; only the client IP changed, from the client's host to the load balancer's. But for whatever reason nginx now assumes a static file is to be served, which eventually results in the file-not-found message. I have searched the web for multiple hours already, but without much luck so far. Any help is very much appreciated. Cheers, Ralph
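    The error log itself points at the cause: the failing entries say server: localhost, meaning requests arriving with Host: loadbalancer.domain.com do not match the server_name nginx-server-int-01.domain.com block and fall through to nginx's default server, which serves static files from /usr/share/nginx/html. A sketch of the smaller fix: let the app block answer for both names (or make it the default server outright):

        server {
            listen 80 default_server;
            server_name nginx-server-int-01.domain.com loadbalancer.domain.com;
            # ... uwsgi_pass location as above ...
        }

    Alternatively, HAProxy could rewrite the Host header toward the backend name, but widening server_name is the less invasive change.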

    Read the article

  • How to get Atheros ar242x wireless adapter working under Debian Linux?

    - by Mark
    Does anybody know how to get the Atheros AR242x wireless adapter working under Debian Linux (5.0.2 and/or 5.0.3)? My Debian live CDs and install CDs both don't like this card at all. Curiously, it seems to work on other, Debian-based Linuxes. Is this a free/non-free driver issue? I know Debian gets mardy about that. Although, for what it's worth, the live CD doesn't seem to detect my wired LAN connection either... Specifically, this is on a Samsung R610 laptop (some versions of which seem to have an Intel wireless adapter; this one definitely doesn't!). I've tried all sorts of things, but obviously on a live CD installing software is limited. I've also tinkered with network config files, kernel modules, etc., but to no avail.
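    The AR242x is normally driven by the ath5k module, which did exist in Lenny's 2.6.26 kernel but was young enough there that newer kernels (e.g. from lenny-backports) often fare better. A sketch of the usual triage, assuming a shell on the installed system:

        # confirm what the card actually is
        lspci -nn | grep -i atheros
        # try loading the driver and watch what the kernel says
        sudo modprobe ath5k
        dmesg | tail

    ath5k needs no non-free firmware, so if the module loads cleanly this isn't Debian's free/non-free split at work; if it's absent or misbehaves, a backports kernel is the usual route.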

    Read the article

  • MySQL open files limit

    - by Brian
    This question is similar to "set open_files_limit", but there was no good answer. I need to increase my table_open_cache, but first I need to increase open_files_limit. I set the option in /etc/mysql/my.cnf:

        open-files-limit = 8192

    This worked fine in my previous install (Ubuntu 8.04), but now, in Ubuntu 10.04, when I start the server up, open_files_limit is reported to be 1710. That seems like a pretty random number for the limit to be clipped to. Anyway, I tried to get around it by adding a line like this to /etc/security/limits.conf:

        mysql hard nofile 8192

    I also tried adding this to the pre-start script in MySQL's upstart config (/etc/init/mysql.conf):

        ulimit -n 8192

    Obviously, neither of those things worked. So where is the hoop that was added between Ubuntu 8.04 and 10.04, through which I must jump in order to actually increase the open files limit?
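    The hoop is upstart itself: Ubuntu 10.04 starts MySQL as an upstart job, and upstart jobs neither read /etc/security/limits.conf (that is PAM territory) nor keep a ulimit issued in a pre-start script, which runs as a separate shell. Upstart has its own stanza for this; a sketch for /etc/init/mysql.conf:

        # soft and hard nofile limits for the job's processes
        limit nofile 8192 8192

    After adding the stanza, a full stop/start of the job (not just a reload) is needed for the new limit to take effect.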

    Read the article

  • Routing to various node.js servers on same machine

    - by Dtang
    I'd like to set up multiple node.js servers on the same machine (listening on different ports) for different projects, so I can pull any one down to edit its code without affecting the others. However, I want to be able to access these web apps from a browser without typing the port number, by mapping different URL paths to different ports: e.g. 45.23.12.01/app - 45.23.12.01:8001. I've considered using node-http-proxy for this, but it doesn't yet support SSL. My hunch is that nginx might be the most suitable. I've never set up nginx before; what configuration do I need? The example config files I've seen only deal with subdomains, which I don't have. Alternatively, is there a better (stable, hassle-free) way of hosting multiple apps under the same IP address?
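    Path-based routing in nginx is just one location block per app. A minimal sketch; the app names, ports, and certificate paths are assumptions:

        server {
            listen 80;
            listen 443 ssl;
            ssl_certificate     /etc/nginx/ssl/site.crt;
            ssl_certificate_key /etc/nginx/ssl/site.key;

            # trailing slashes strip the /app prefix before proxying
            location /app/  { proxy_pass http://127.0.0.1:8001/; }
            location /app2/ { proxy_pass http://127.0.0.1:8002/; }
        }

    One caveat: each app must generate relative URLs (or be prefix-aware), since it never sees the /app prefix the browser uses.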

    Read the article

  • Apache AliasMatch and DirectoryMatch not working?

    - by Alex
    I have the following config. Please notice the commented-out Alias and Directory equivalents: uncommented, they work as expected, but the dynamic/regex-based versions don't. Any ideas?

        <VirtualHost *:80>
            ServerName temp.dev.local
            ServerAlias temp.dev.local
            DocumentRoot "C:\wamp\www\temp\public"
            <Directory "C:\wamp\www\temp\public">
                AllowOverride all
                Order Allow,Deny
                Allow from all
            </Directory>

            # Alias /private/application/core/page/assets/images/ "C:/wamp/www/temp/private/application/core/page/assets/images/"
            # <Directory "C:/wamp/www/temp/private/application/core/page/assets/images/">
            AliasMatch ^/private/application/(.*)/(.*)/assets/images/ /private/application/$1/$2/assets/images/
            <DirectoryMatch "^/private/application/(.*)/(.*)/assets/images/">
                Options Indexes FollowSymlinks MultiViews Includes
                AllowOverride None
                Order allow,deny
                Allow from all
            </DirectoryMatch>
        </VirtualHost>
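    Two things differ from the working Alias pair: AliasMatch's replacement must be a full filesystem path (the regex version above maps back into URL space, so Apache looks for /private/... on disk), and DirectoryMatch matches filesystem paths, not URLs. A sketch of the corrected pair, keeping the question's layout:

        AliasMatch ^/private/application/([^/]+)/([^/]+)/assets/images/(.*)$ "C:/wamp/www/temp/private/application/$1/$2/assets/images/$3"
        <DirectoryMatch "^C:/wamp/www/temp/private/application/[^/]+/[^/]+/assets/images/">
            Options Indexes FollowSymlinks MultiViews Includes
            AllowOverride None
            Order allow,deny
            Allow from all
        </DirectoryMatch>

    The ([^/]+) groups are deliberate: a greedy (.*) can span multiple path segments, which makes the match ambiguous.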

    Read the article

  • Postfix - Unable to receive emails from certain domains

    - by Emmanuel
    Got a Postfix-Dovecot-saslauthd setup on Ubuntu 10.04. The problem is that there's (at least) one domain it refuses to accept emails from. I've been receiving emails fine from lots of different domains, except this one. It's really weird, but could some config file or something be blocking certain domains? Or IPs? Or something else? I know the emails are being sent to me; in fact, I sent a test one myself from this domain, and they're just not showing up.
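    The mail log settles whether the messages ever reach Postfix at all, and postconf shows any restriction lists that might reject them. A sketch of the first checks; the sender domain is a placeholder:

        # did Postfix ever see the message, and what did it do with it?
        grep -i sender-domain.com /var/log/mail.log
        # which restriction lists are active?
        postconf smtpd_sender_restrictions smtpd_recipient_restrictions smtpd_client_restrictions

    A reject line in the log names the exact restriction that fired; no log line at all points upstream (your domain's DNS/MX, or the sender's outbound queue) rather than at the Postfix config.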

    Read the article

  • capistrano still asks for the 1st password even though I've set up an ssh key???

    - by Greg
    Hi, Background: I've set up an SSH key to avoid having to use passwords with Capistrano, per http://www.picky-ricky.com/2009/01/ssh-keys-with-capistrano.html. A basic ssh to my server works fine without asking for passwords. I'm using dreamhost.com for hosting. Issue: when I run 'cap deploy' I still get asked for the 1st password (even though the previous 2nd and 3rd password requests are now automated). It is the Capistrano command that starts with "git clone -q ssh:....." for which the password is being requested. Question: is there something I've missed? How can I get "cap deploy" totally passwordless? Some excerpts from config/deploy.rb are:

        set :use_sudo, false
        ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "id_rsa")]
        default_run_options[:pty] = true

    thanks. PS. The permissions on the server are:

        drwx------ 2 mylogin pg840652 4096 2010-02-22 15:56 .ssh
        -rw------- 1 mylogin pg840652  404 2010-02-22 15:45 authorized_keys
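    That first prompt belongs to the git clone, which runs on the deployment server rather than on the workstation, so the local key never enters the picture for it. The usual fix is SSH agent forwarding, which lets the server borrow the workstation's key for the clone; a sketch for config/deploy.rb:

        ssh_options[:forward_agent] = true

    This assumes ssh-agent is running locally with the key loaded (ssh-add ~/.ssh/id_rsa). The alternative is generating a separate key on the server itself and registering that one with the git host.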

    Read the article

  • What is the maximum number of virtualhosts Apache can handle?

    - by FractalizeR
    Hello. What is the maximum number of VirtualHosts Apache can handle on a single machine? (I don't mean anything related to load; let's suppose that's irrelevant to the question. And we take only Apache, without any proxying layer like nginx in front.) I am asking because on one forum a guy reported that his Apache works unstably with more than 400 sites on a single machine. If you have a config that handles more than 400, please tell me here. Thanks.
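    There is no fixed VirtualHost ceiling in Apache itself; what usually bites first around a few hundred vhosts is the per-process file-descriptor limit, since every vhost with its own access and error log holds two descriptors open. A sketch of the check and the standard mitigation; the log path and format name are Debian-style defaults, so treat them as assumptions:

        # how many descriptors the running Apache may hold
        cat /proc/$(pgrep -o apache2)/limits | grep 'open files'
        # one shared log instead of two per vhost; split per-site offline
        CustomLog /var/log/apache2/access.log vhost_combined

    With shared logs (or a raised nofile limit), configs carrying thousands of vhosts are routine.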

    Read the article

  • Http to https behavior for visits from Internet Explorer client

    - by Emile
    My website has an SSL cert (example URL: https://subdomain.example.com). Under Apache it's set up for both port 80 and port 443, so with the following configuration anyone who goes to http://subdomain.example.com is sent to https://subdomain.example.com. But for visits from Internet Explorer the redirect doesn't happen; instead, HTTP visits get "Internet Explorer cannot display the web page" with a list of client-side solutions to try. Any ideas on how to fix the config so IE visits behave like the other browsers (that is, send HTTP to HTTPS automatically)?

        NameVirtualHost *:443
        <VirtualHost *:80>
            DocumentRoot /var/www/somewebroot
            ServerName subdomain.example.com
        </VirtualHost>
        <VirtualHost *:443>
            DocumentRoot /var/www/somewebroot
            ServerName subdomain.example.com
            # SSL CERTS HERE
        </VirtualHost>

    *Tested IE8, IE9 beta
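    Nothing in the quoted config actually issues a redirect, so the browsers that "work" are likely guessing or remembering the https:// scheme themselves, which IE does not do. An explicit redirect in the port-80 vhost removes the guesswork; a sketch:

        <VirtualHost *:80>
            ServerName subdomain.example.com
            Redirect permanent / https://subdomain.example.com/
        </VirtualHost>

    mod_alias's Redirect is sufficient here; no rewrite rules are needed for a whole-site scheme change.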

    Read the article
