Search Results

Search found 11601 results on 465 pages for 'obiee installs and config'.

Page 264 of 465

  • How to disable horizontal scrolling within virtualbox on Ubuntu guest, Windows 7 host?

    - by Steven Rosato
    I am using Windows 7 as the host and Ubuntu Karmic as the guest OS, with guest tools installed, and I get an annoying glitch when switching from the host to the guest machine: vertical scrolling (using the mouse wheel) switches to horizontal. Since I don't really care about horizontal scrolling, how can I disable it? The only fix I found on the web was to edit xorg.conf and add Option "ZAxisMapping" "4 5" to the "InputDevice" section, which would enable vertical scrolling only. The thing is, I don't have that section in my config file, so I guessed that I would need to add:

        Section "InputDevice"
            Identifier "VBoxMouse"
            Driver "vboxmouse"
            Option "ZAxisMapping" "4 5"
        EndSection

    But that does not seem to work after restarting the X server. Any workaround for this?
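
    A note on the attempted fix: on many X setups a hand-written "InputDevice" section is ignored unless it is also referenced from a "ServerLayout" section, so a fuller sketch of what might be needed looks like the following (the layout identifier is illustrative, not taken from the question, and whether this applies depends on how the guest's xorg.conf is otherwise generated):

        Section "InputDevice"
            Identifier "VBoxMouse"
            Driver     "vboxmouse"
            Option     "ZAxisMapping" "4 5"    # map wheel events to buttons 4/5 (vertical only)
        EndSection

        Section "ServerLayout"
            Identifier  "Default Layout"
            InputDevice "VBoxMouse" "CorePointer"
        EndSection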

    Read the article

  • Running my own DNS server for learning purposes

    - by sundar22in
    I would like to run my own DNS server on my laptop for learning purposes. I recently used Google Public DNS and liked it. I want to build something similar and small for my own web browsing. What I vaguely dream of is to use my own DNS server as the primary DNS server and Google Public DNS as the secondary. I would like to build up my DNS server gradually by editing the configuration files (if it can be automated, that would be great, but I have no clue there). Sometimes it sounds like a stupid idea to me, but I am fine with editing a config file for each site I want to add to my DNS server. Any pointers or suggestions are welcome.
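
    One lightweight way to start, offered only as a sketch and not something from the question, is dnsmasq: it answers locally configured names itself and forwards everything else to an upstream resolver, which matches the "my server first, Google second" idea. A minimal /etc/dnsmasq.conf might look like this (the host name and address are placeholders):

        no-resolv                          # ignore /etc/resolv.conf, use only the servers listed below
        server=8.8.8.8                     # forward unknown names to Google Public DNS
        server=8.8.4.4
        address=/mysite.test/192.0.2.10    # a locally defined name-to-address mapping
        log-queries                        # handy while learning: log every lookup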

    Read the article

  • Postfix - Unable to receive emails from certain domains

    - by Emmanuel
    I have a Postfix-Dovecot-saslauthd setup on Ubuntu 10.04. The problem is that there's (at least) one domain it refuses to accept emails from. I've been receiving emails fine from lots of different domains except this one. It's really weird; could some config file or setting be blocking certain domains or IPs? I know the emails are being sent to me; in fact, I sent a test one myself from this domain and it just never showed up.
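
    A hedged first round of checks (the domain and map path below are placeholders, and the log path is the Ubuntu default) would be to watch the mail log while a test message is sent and to look at the restriction lists Postfix applies to incoming mail:

        tail -f /var/log/mail.log                 # watch for a reject/defer line as the test mail arrives
        postconf smtpd_client_restrictions smtpd_sender_restrictions smtpd_recipient_restrictions
        # if one of those restrictions references an access map, query it for the sender's domain:
        postmap -q example.com hash:/etc/postfix/sender_access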

    Read the article

  • My php homepage downloads index.php instead of being processed on Gandi.net

    - by alekone
    If I go to the homepage of my website http://www.website.com (on a brand new server), index.php gets downloaded instead of processed. I don't have the same problem in other folders. My .htaccess reads:

        AddHandler php5-script .php

    What could this be? I suspect it's something in the PHP config or the .htaccess, but I'm not able to figure it out. Help please! Edit: I don't know if this helps, but it's a WordPress installation, and I have this problem only on the public part of the website, not in the admin (which renders correctly).
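
    A hedged guess, not something confirmed in the question: a download instead of execution usually means the handler name in AddHandler does not match anything the server knows, so the file falls through as plain content. Two variants commonly tried in .htaccess on shared hosting are sketched below; which one (if either) applies depends on how the host has PHP wired up, so this is only a starting point:

        # variant 1: drop the AddHandler line entirely and let the host's default PHP handling apply
        # variant 2: map the extension to a PHP MIME type instead of a handler name
        AddType application/x-httpd-php .php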

    Read the article

  • signing the web server certificate with the CA key

    - by user1064786
    I have a problem running the command below using openssl-0.9.8e and Apache on Ubuntu 11.10. Do you have any idea how to resolve it? First I was receiving this error:

        No such file or directory:bss_file.c:169:fopen('openssl.cnf','rb')

    Then I copied my modified openssl.cnf file into the /etc/ssl/ directory. Now I receive an error regarding the -in option:

        openssl ca -days 3650 –in server/requests/ciise.concordia.ca.csr –cert ./CA/ConcordiaCA.crt –keyfile ./CA/ConcordiaCA.key –out ./server/certificates/ciise.concordia.ca.crt -config openssl.cnf
        unknown option –in

    I also copied ciise.concordia.ca.csr into the upper directory, but the problem still persists. I would appreciate any help :)
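
    One detail worth pointing out: in the pasted command, several options are written with an en dash (–in, –cert, –keyfile, –out) rather than an ASCII hyphen, which is exactly what OpenSSL's "unknown option –in" complaint suggests; this often happens when a command is copied out of a PDF or word processor. A retyped version with plain hyphens (paths unchanged from the question) would be:

        openssl ca -days 3650 -in server/requests/ciise.concordia.ca.csr \
            -cert ./CA/ConcordiaCA.crt -keyfile ./CA/ConcordiaCA.key \
            -out ./server/certificates/ciise.concordia.ca.crt -config openssl.cnf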

    Read the article

  • How to restart RoR services after server has been rebooted

    - by Alan DeLonga
    Update: I have been searching around to see which services would need to be restarted in my project after a reboot. One of them was Thinking Sphinx, which I finally got to the point where it logs:

        [Fri Nov 16 19:34:29.820 2012] [29623] accepting connections

    But I still can't run searchd or searchd --stop, because there is no generated sphinx.conf file in /etc/sphinxsearch (for more info refer to this open thread on thinking_sphinx after reboot). I then turned to looking into restarting unicorn or thin, based on some insight I got. The issue is that when I check my gems I see one for thin AND one for unicorn, but when I try to start either of them, neither has a file in /etc/init.d/, where the nginx and sphinxsearch files reside... Would rebooting totally erase the files for an app server like thin or unicorn? We are hosted on Rackspace, running:

        ruby 1.9.2p290
        rails (3.2.8, 3.2.7, 3.2.0)
        nginx/1.1.19
        thin 1.4.1
        unicorn 4.3.1

    Notice that there are gems for unicorn and thin, but there is no unicorn.rb or thin.rb in my config folder for my app... I am still super lost; if anyone can give me some insight on steps to take to figure this out, I would really appreciate it. Anything would help, thanks for reading. When I run unicorn I get the same issue as referenced here:

        > /usr/local/bin/unicorn start
        /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/configurator.rb:610:in `parse_rackup_file': rackup file (start) not readable (ArgumentError)
        from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/configurator.rb:76:in `reload'
        from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/configurator.rb:67:in `initialize'
        from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:104:in `new'
        from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:104:in `initialize'
        from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/bin/unicorn:121:in `new'
        from /usr/local/lib/ruby/gems/1.9.1/gems/unicorn-4.3.1/bin/unicorn:121:in `<top (required)>'
        from /usr/local/bin/unicorn:19:in `load'
        from /usr/local/bin/unicorn:19:in `<main>'

    When I run thin it just opens a command line prompt:
        /usr/local/bin/thin start
        >> Using rack adapter

    Other gems (* LOCAL GEMS *): actionmailer (3.2.8, 3.2.7, 3.2.0) actionpack (3.2.8, 3.2.7, 3.2.0) activemodel (3.2.8, 3.2.7, 3.2.0) activerecord (3.2.8, 3.2.7, 3.2.0) activeresource (3.2.8, 3.2.7, 3.2.0) activesupport (3.2.8, 3.2.7, 3.2.0) arel (3.0.2) builder (3.0.0) bundler (1.1.5) carmen (1.0.0.beta2) carmen-rails (1.0.0.beta3) cocaine (0.2.1) coffee-rails (3.2.2) coffee-script (2.2.0) coffee-script-source (1.3.3) daemons (1.1.9) erubis (2.7.0) eventmachine (0.12.10) execjs (1.4.0) faraday (0.8.4) faraday_middleware (0.8.8) foursquare2 (1.8.2) geokit (1.6.5) hashie (1.2.0) hike (1.2.1) httparty (0.8.3) httpauth (0.1) i18n (0.6.0) journey (1.0.4) jquery-rails (2.0.2) json (1.7.4, 1.7.3) jwt (0.1.5) kgio (2.7.4) lastfm (1.8.0) libv8 (3.3.10.4 x86_64-linux) mail (2.4.4) mime-types (1.19, 1.18) minitest (1.6.0) multi_json (1.3.6) multi_xml (0.5.1) multipart-post (1.1.5) mysql2 (0.3.11) oauth2 (0.8.0) paperclip (3.1.1) polyglot (0.3.3) rack (1.4.1) rack-cache (1.2) rack-ssl (1.3.2) rack-test (0.6.1) rails (3.2.8, 3.2.7, 3.2.0) railties (3.2.8, 3.2.7, 3.2.0) raindrops (0.10.0, 0.9.0) rake (0.9.2.2, 0.8.7) rdoc (3.12, 2.5.8) riddle (1.5.3) sass (3.2.0, 3.1.19) sass-rails (3.2.5) sprockets (2.1.3) sqlite3 (1.3.6) sqlite3-ruby (1.3.3) therubyracer (0.10.2, 0.10.1) thin (1.4.1) thinking-sphinx (2.0.10) thor (0.16.0, 0.15.4, 0.14.6) tilt (1.3.3) treetop (1.4.10) tzinfo (0.3.33) uglifier (1.2.7, 1.2.4) unicorn (4.3.1) xml-simple (1.1.1)

    Some background: I am working on a project that was built by another group. I made some modifications to a constants file in the config folder (changing some values for arrays that populate some drop-down fields), but the app had to be rebooted before those changes would be recognized. The hosting is through Rackspace, and we rebooted through the option on their site. I contacted them and checked the status of our server; the port is open and operational. The problem is that the app is not running when you go to the site's address, and when I put in the IP address of the server it just says "Welcome to Nginx". But in the log files I see:

        [Thu Nov 15 02:34:37.945 2012] [15916] caught SIGTERM, shutting down
        [Thu Nov 15 02:34:37.996 2012] [15916] shutdown complete

    I am not very versed in server-side setup, and I have also never worked on a Rails project that needed specific services started before the application would run. Any insight as to how to figure out which services need to be restarted, and how to go about restarting them, would be greatly appreciated. I feel kind of dead in the water at this point... Thanks, Alan
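
    A hedged reading of the unicorn error above: unicorn treats its first non-option argument as the rackup file, so "unicorn start" makes it look for a file literally named "start". Nothing below is taken from the project's own setup (there is no unicorn.rb or thin.rb to go on), but a typical way to bring the app server and search daemon back up by hand from the application root would be:

        cd /path/to/app                      # the app root (path is a placeholder)
        ls /etc/init.d/                      # see which services have boot scripts (nginx, sphinxsearch, ...)
        bundle exec rake ts:config ts:start  # regenerate sphinx.conf and start searchd (thinking-sphinx tasks)
        bundle exec unicorn -D -p 8080       # no "start" argument; defaults to ./config.ru, -D daemonizes
        # or, if thin is the app server actually in use:
        bundle exec thin start -d -p 8080    # -d daemonizes

    Whether unicorn or thin is the server nginx actually proxies to should be visible in the nginx site config (an upstream, proxy_pass, or unix socket line).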

    Read the article

  • Strange permission errors with Windows Server 2008

    - by Spirit
    I just don't know a better way to describe the issue that is driving me nuts. I am trying to establish a test domain with virtual machines on a box that has Win7 with VMware Workstation installed. The purpose of this domain is so that we can try and test different situations before they go into the production network. I built a VM with WinSrv2008R2 and I am using that VM as a template to make other servers for the domain by cloning it. I raise a DC with one clone and a member server with another clone, and I add the server to the domain. I am following a standard procedure, as always (it is not my first domain). Then I make an admin account and add it to the Domain Admins and Enterprise Admins groups. That admin has full privileges on the DC, no problem there. But on the other server it has... somewhat half the privileges, and I can't log in via RDP. I tried with another account; same issues. For example (with half the privileges): I can't open the Event Viewer if I go via Start - Administrative Tools - Event Viewer, but I can open the Event Viewer via Server Manager. You can see this in the screenshot from the original post. I mean, WTF? I am going crazy; I haven't experienced anything similar in my three years of experience, and I have already lost 3 days troubleshooting this. Could this be related to the cloning? Perhaps if I make fresh installs of WinSrv2008 there won't be any problems? I have raised test domains as VMs on other occasions before, and there weren't any problems then. This is VMware Workstation 8; I've made clones before on Workstation 7 and it didn't have any problems. Anyone have any ideas? UPDATE: This is the info from the event log when I try to access via RDP:

        An account failed to log on.
        Subject:
            Security ID:        NULL SID
            Account Name:       -
            Account Domain:     -
            Logon ID:           0x0
        Logon Type:             3
        Account For Which Logon Failed:
            Security ID:        NULL SID
            Account Name:       pat.coleman
            Account Domain:     lab
        Failure Information:
            Failure Reason:     Domain sid inconsistent.
            Status:             0xc000006d
            Sub Status:         0xc000019b
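
    The "Domain sid inconsistent" failure reason, combined with the fact that both machines were cloned from the same template, is the classic symptom of clones sharing the same machine SID. A common remedy, offered here only as a hedged suggestion since the post does not confirm it, is to generalize each clone with Sysprep before joining it to the domain (this resets the SID and other machine-specific state):

        C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /reboot

    An already-joined clone would typically be removed from the domain, sysprepped, and then re-joined.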

    Read the article

  • Nginx no longer serves uwsgi application behind HAProxy - Looks for static file instead

    - by Ralph
    We implemented our web application using web2py. It consists of several modules offering a REST API at various resources (e.g. /dids, /replicas, ...). The API is used by clients implementing requests.py. My problem is that our web app works fine if it's behind HAProxy and hosted by Apache using mod_wsgi. It also works fine if the clients interact with nginx directly. It doesn't work, though, when using HAProxy in front of nginx. My guess is that HAProxy somehow modifies the request and thus nginx behaves differently, i.e. it looks for a static file instead of calling the WSGI container. Unfortunately I can't figure out what exactly is going (wr)on(g). Here are the relevant sections of the three components' config files. At least I guess they are the interesting ones; if you miss anything, please let me know.

    1) haproxy.conf

        frontend app-lb
            bind loadbalancer:443 ssl crt /etc/grid-security/hostcertkey.pem
            default_backend nginx-servers
            mode http

        backend nginx-servers
            balance leastconn
            option forwardfor
            server nginx-01 nginx-server-int-01.domain.com:80 check

    2) nginx.conf:

        sendfile off;
        #tcp_nopush on;
        keepalive_timeout 65;
        include /etc/nginx/conf.d/*.conf;
        server {
            server_name nginx-server-int-01.domain.com;
            root /path/to/app/;
            location / {
                uwsgi_pass unix:///tmp/app.sock;
                include uwsgi_params;
                uwsgi_read_timeout 600; # Requests can run for a serious long time
            }

    3) uwsgi.ini

        [uwsgi]
        chdir = /path/to/app/
        chmod-socket = 777
        no-default-app = True
        socket = /tmp/app.sock
        manage-script-name = True
        mount = /dids=did.py
        mount = /replicas=replica.py
        callable = application

    Now when I let my clients go against nginx-server-int-01.domain.com, everything is fine. In the access.log of nginx, lines like these appear:

        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /dids/user.ogueta/cnt_mc12_8TeV.16304.stream_name_too_long.other.notype.004202218365415e990b9997ea859f20.user/dids HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 5282 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 5094 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:20 +0200] "POST /replicas/list HTTP/1.1" 200 528 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "GET /dids/mc13_14TeV/dids/search?project=mc13_14TeV&stream_name=%2Adummy&type=dataset&datatype=NTUP_SMDYMUMU HTTP/1.1" 401 73 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "POST /replicas/list HTTP/1.1" 200 713 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"
        128.142.XXX.XX0 - - [23/Aug/2014:01:29:21 +0200] "POST /dids/attachments HTTP/1.1" 201 17 "-" "python-requests/2.3.0 CPython/2.6.6 Linux/2.6.32-358.23.2.el6.x86_64" "-"

    But when I switch the clients to go against HAProxy (loadbalancer.domain.com:443), the error.log of nginx shows lines like these:
        2014/08/23 01:26:01 [error] 1705#0: *21231 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21232 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21233 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21234 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XX1, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21235 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer"
        2014/08/23 01:26:02 [error] 1705#0: *21238 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21239 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21242 open() "/usr/share/nginx/html/replicas/list" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /replicas/list HTTP/1.1", host: "loadbalancer.domain.com"
        2014/08/23 01:26:02 [error] 1705#0: *21244 open() "/usr/share/nginx/html/dids/attachments" failed (2: No such file or directory), client: 128.142.XXX.XXX, server: localhost, request: "POST /dids/attachments HTTP/1.1", host: "loadbalancer.domain.com"

    As you can see, the requests look the same; only the client IP changed, from the client's host to the one of loadbalancer.domain.com. But for whatever reason nginx seems to assume that a static file is to be served, which eventually results in the "file not found" message. I have searched the web for multiple hours already, but without much luck so far. Any help is very much appreciated. Cheers, Ralph
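
    A hedged observation based on the error log rather than anything confirmed in the question: the failing requests are answered by a server block whose root is /usr/share/nginx/html and whose name is logged as "localhost", i.e. nginx's default server, not the block configured for nginx-server-int-01.domain.com. That would happen if the Host header arriving via HAProxy ("loadbalancer.domain.com") does not match the configured server_name, so nginx falls back to the default server and serves static files. Two possible ways to line this up are sketched below; directive availability depends on the HAProxy version in use:

        # nginx: accept the load balancer's name as well, or make this block the default
        server {
            listen 80 default_server;
            server_name nginx-server-int-01.domain.com loadbalancer.domain.com;
            ...
        }

        # haproxy: alternatively, rewrite the Host header on the way to the backend
        backend nginx-servers
            balance leastconn
            option forwardfor
            http-request set-header Host nginx-server-int-01.domain.com
            server nginx-01 nginx-server-int-01.domain.com:80 check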

    Read the article

  • Local Area Connection in Slackware 13

    - by asdasd
    I have Windows XP and Slackware 13 on one computer, and the ISP provided me with a new modem. There was a manual explaining how to configure it, so I started the web browser, typed in its IP address 192.168.1.1, and the web interface of the modem appeared, so I logged in; that was easy. But under Slackware, I don't know how to get to the modem's config / web interface. I type in 192.168.1.1 but it's not working. Here's the output of ifconfig eth0:

        eth0      Link encap:Ethernet  HWaddr 00:a1:b0:01:18:28
                  inet addr:169.254.73.8  Bcast:169.254.255.255  Mask:255.255.0.0
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:17 Memory:febff400-febff4ff

    How can I log in to the modem from Linux, i.e. find its assigned IP under Slackware? Thank you.
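
    One hedged reading of that output: 169.254.73.8 is a link-local fallback address, which usually means eth0 never received a DHCP lease from the modem, so 192.168.1.1 is simply unreachable. Two things commonly tried on Slackware (commands assume the stock tools and need root) are:

        dhcpcd eth0                                    # ask the modem for a DHCP lease again
        # or assign a static address in the modem's subnet and reach it directly:
        ifconfig eth0 192.168.1.2 netmask 255.255.255.0 up
        route add default gw 192.168.1.1

    To make the setting permanent, Slackware's network configuration lives in /etc/rc.d/rc.inet1.conf (or can be set with netconfig).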

    Read the article

  • apache2 slow to respond (Debian)

    - by baloo
    I'm running an Apache 2.2.9 web server with mod_python and the mpm_worker_module. The current MPM config is:

        ServerLimit          32
        StartServers         10
        MaxClients          800
        MinSpareThreads      25
        MaxSpareThreads      75
        ThreadsPerChild      25
        MaxRequestsPerChild   0

    The server has 1 GB of RAM and a 100 Mbit connection. Checking netstat -na | grep ESTABLISHED | wc -l gives me a number between 50 and 60, and the load is about 1.0. Every page load is also cached by memcached. I can't see why the server is so slow in responding to new connections, sometimes dropping them completely. I also tried disabling iptables to make sure it's not because of a full state table or something like that. The only thing in dmesg is a lot of spam about "TCP: Treason uncloaked!"
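
    A few hedged first checks, since nothing in the question pins down the bottleneck (the commands assume a stock Debian layout with mod_status available):

        grep -i "MaxClients" /var/log/apache2/error.log   # Apache logs a warning when the worker limit is hit
        apache2ctl status                                  # scoreboard: busy vs. idle workers (needs mod_status)
        vmstat 5                                           # watch for swapping; 800 threads can be a lot for 1 GB of RAM
        ss -s                                              # socket-state summary, e.g. piles of SYN-RECV or TIME-WAIT

    The "TCP: Treason uncloaked!" messages by themselves are generally harmless; they indicate peers shrinking their TCP window unexpectedly rather than a server-side fault.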

    Read the article

  • SQL 2008 R2 Named Instance Client Connectivity Issues?

    - by Jerry Dodge
    We're upgrading our software from SQL 2000 to 2008 R2. Our customers will be installing an update which uninstalls 2000 and installs 2008 R2 under the same instance. So if no instance name existed, none is set (the default instance). However, the problem starts with the customers who have a named SQL instance. Starting in 2008 R2 (not sure about earlier versions), for some reason a client connecting to the server by its instance name is unsuccessful. I'm testing from Management Studio; if I can't connect with that, nothing can connect. I browse the network servers and find the specific server\instance in the list, but upon trying to connect to an instance name like MyServer\INST, I get:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)

    I do in fact have the TCP/IP and Named Pipes protocols enabled; this was the first thing I did. When I connect to the server using a comma (,) and port number, like MyServer, 49195, it works just fine. So it appears that client computers are just unable to resolve the instance names. This has happened on all our installations of SQL 2008 R2 and from all client computers, including Win 7, XP, Vista, Server 2008, and Server 2003. We never experienced such issues on earlier versions of SQL. The problem even persists if the firewalls and antiviruses are all disabled. Now, this is a large update which we will be distributing soon to all our customers, and we want to minimize the interaction they need with us to get it installed. We absolutely hate the idea of using a port number, because it will always be different, and we would have to modify each client to point to this server/port. Some of our customers may have hundreds of client computers. How do I make client connections to a named SQL instance work again? After all, this is the whole purpose of named instances, and if a client can't connect to an instance by its name, then what is it even named for? EDIT: It was mentioned to make sure SQL Browser is running, so I checked, and it is running. The server is also able to connect to itself (locally); just external connections are refused. UPDATE: After more careful checking, I learned the firewall wasn't completely disabled when testing, and upon disabling it completely, this works. So it appears the firewall is blocking external clients from reaching SQL Browser.
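
    Given the update at the end, a hedged sketch of the firewall openings usually involved (rule names are arbitrary; the TCP port is the one from the question, and named instances use a dynamic port by default unless a fixed one is configured in SQL Server Configuration Manager):

        netsh advfirewall firewall add rule name="SQL Browser (instance name resolution)" dir=in action=allow protocol=UDP localport=1434
        netsh advfirewall firewall add rule name="SQL Server named instance" dir=in action=allow protocol=TCP localport=49195

    SQL Browser answers instance-name lookups on UDP 1434 and then hands clients the instance's actual TCP port, which is why name-based connections fail while "MyServer, 49195" works when only UDP 1434 is blocked.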

    Read the article

  • After Redmine install I see only the filesystem

    - by derty
    After installing Redmine, I can only access the filesystem! I reinstalled Redmine 2-3 times in different ways, using these how-tos:

        http://www.redmine.org/projects/redmine/wiki/HowTo_Install_Redmine_using_Debian_package
        http://www.redmine.org/projects/redmine/wiki/HowTo_Install_Redmine_210_on_Debian_Squeeze_with_Apache_Passenger
        http://beeznest.wordpress.com/2012/09/20/installing-redmine-2-1-on-debian-squeeze-with-apache-modpassenger/

    The web server on 10.0.0.14 is going to sit behind a reverse Apache proxy, but for now I'm working directly on the system. That change wouldn't be a problem; I use the same setup for a bunch of other services. The database does exist and I can enter it, and the configuration file config/database.yml is set up correctly with the credentials I use to log in as redmineuser. So, does anyone have an idea why it is not working as I expect?
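
    "Seeing only the filesystem" usually means Apache is serving Redmine's public/ directory as a plain directory listing because Passenger is not actually handling the request. A hedged sketch of what the linked Debian-package how-to ends up with (paths are the Debian defaults; adjust if Redmine lives elsewhere):

        # make sure the module is enabled at all
        a2enmod passenger && service apache2 restart

        # /etc/apache2/sites-available/redmine (sketch)
        <VirtualHost *:80>
            ServerName redmine.example.org
            DocumentRoot /usr/share/redmine/public
            <Directory /usr/share/redmine/public>
                AllowOverride all
                Options -MultiViews
            </Directory>
        </VirtualHost>

    If mod_passenger is missing or the DocumentRoot does not point at the application's public/ directory, Apache falls back to listing files, which matches the symptom described.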

    Read the article

  • Bulk "open with" change on OS X 10.6

    - by railmeat
    I use Microsoft Remote Desktop to connect from my Mac to the Windows machine I work on. I have a directory with about 50 different .rdp files (the config files for each machine). They got changed to open with the Remote Desktop client inside Windows XP on Parallels. Is there a command I can use to change all of them back to opening with Remote Desktop on my Mac? I can use the right-click "Always Open With" command, or the equivalent in the "Get Info" window, but I would like a way to make that change in bulk. The "Change All..." button in the "Get Info" window does not have any effect.
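
    For a command-line approach, one hedged option (not built into OS X, and not mentioned in the question) is the small third-party tool duti, which sets the default handler for a file extension system-wide. The bundle identifier below is an assumption and should be checked first:

        # print the bundle ID of the intended handler (use the app's name as installed on the Mac)
        osascript -e 'id of app "Remote Desktop Connection"'
        # then register it as the default for .rdp files
        duti -s com.microsoft.rdc rdp all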

    Read the article

  • MySQL open files limit

    - by Brian
    This question is similar to set open_files_limit, but there was no good answer. I need to increase my table_open_cache, but first I need to increase the open_files_limit. I set the option in /etc/mysql/my.cnf:

        open-files-limit = 8192

    This worked fine in my previous install (Ubuntu 8.04), but now in Ubuntu 10.04, when I start the server up, open_files_limit is reported to be 1710. That seems like a pretty random number for the limit to be clipped to. Anyway, I tried getting around it by adding a line like this in /etc/security/limits.conf:

        mysql hard nofile 8192

    I also tried adding this to the pre-start script in mysql's upstart config (/etc/init/mysql.conf):

        ulimit -n 8192

    Obviously neither of those things worked. So where is the hoop that has been added between Ubuntu 8.04 and 10.04 through which I must jump in order to actually increase the open files limit?
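
    A hedged explanation of the hoop: on 10.04 mysqld is started by upstart, which neither reads /etc/security/limits.conf (that file is applied by PAM to login sessions) nor inherits a ulimit set in the pre-start stanza, since pre-start runs in its own shell. Upstart has its own stanza for per-job limits, so one thing worth trying is adding this to /etc/init/mysql.conf:

        limit nofile 8192 8192

    and then restarting the job (restart mysql) and checking the result from inside MySQL with SHOW VARIABLES LIKE 'open_files_limit';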

    Read the article

  • Cron won't use msmtpd to send emails in case of failed cronjob

    - by Glister
    I'm trying to configure a machine so that it will email me if one of the cronjobs produces output in case of an error. I'm using Debian Wheezy. Cron is working normally (without the email functionality), and msmtp is installed and configured. I have already symlinked /usr/bin/sendmail and /usr/sbin/sendmail to /usr/bin/msmtpd. I can send email by using:

        echo "test" | mail -s "subject" [email protected]

    or by executing:

        echo "test" | /usr/sbin/sendmail

    Without the symlink (/usr/sbin/sendmail), cron tells me:

        (CRON) info (No MTA installed, discarding output)

    With the symlinks I get:

        (root) MAIL (mailed 1 byte of output; but got status 0x004e, #012)

    Can you suggest how to configure the cron/msmtp pair? Thanks!
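
    A hedged pointer rather than a confirmed diagnosis: status 0x004e is decimal 78, which is EX_CONFIG in sysexits.h, i.e. the sendmail replacement exited with a configuration error. Cron invokes sendmail as the crontab's owner with almost no environment, so a per-user ~/.msmtprc that works in an interactive shell may not be found or readable in that context. A system-wide config plus an explicit recipient is one way to sketch it (host, sender and recipient are placeholders):

        # /etc/msmtprc -- must be readable by the users whose cron output should be mailed
        account default
        host smtp.example.org
        from cron@example.org
        auth on
        user cron@example.org
        passwordeval cat /etc/msmtp-password

        # in the crontab, tell cron where to send output
        MAILTO=admin@example.org

    It may also be worth pointing the symlinks at /usr/bin/msmtp itself rather than msmtpd, depending on which binary the package actually ships as the sendmail-compatible client.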

    Read the article

  • Problem creating a table in a phpMyAdmin database

    - by tombull89
    Hello all, I'm running a phpMyAdmin database on my web package on a 1and1-hosted server. I've managed to set up a database in the control panel, have uploaded everything to root/phpmyadmin, and changed the config.ini.php file to point at 1and1's database server (because that's the way they do it). I can go to the web interface and get to the main page, but all it shows is the database name, and I can't find how to create any tables. I know it's a long shot, but I'm almost out of ideas. Also, 1and1 have their own phpMyAdmin panel, which is pretty annoying to use, and a 1and1 web database tool which I have barely looked at. Help and suggestions much appreciated.
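
    If the navigation pane shows the database but no "Create table" form, the SQL tab (when it is available) usually still accepts statements directly; a minimal hedged example, with placeholder names, just to confirm that table creation works at all:

        CREATE TABLE test_table (
            id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(100) NOT NULL
        );

    If even that is rejected, the database user configured in config.ini.php may simply lack CREATE privileges on 1and1's side, which would be worth checking in their control panel.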

    Read the article

  • Stunnel too many clients

    - by davidsmalley
    I'm trying to hook up stunnel and HAProxy to forward HTTPS connections through to some backend servers. I've got HAProxy set up right, and I seem to have stunnel set up right. The trouble is that when I hit the setup with a load test, after a while I start to see these log entries:

        2010.05.05 11:24:43 LOG7[3498:3086792368]: https accepted FD=512 from 10.195.158.225:52579
        2010.05.05 11:24:43 LOG4[3498:3086792368]: Connection rejected: too many clients (=500)

    I guess I've hit a limit somewhere, but I'm not sure how to fix it; there doesn't seem to be a config file option for stunnel to change this. Does anyone know how to configure stunnel for a potentially large number of connections?
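
    A hedged note: stunnel derives its maximum number of clients from the process's open-file limit at startup (it usually logs the computed figure when it starts), so the 500 ceiling most likely reflects a file-descriptor limit of around 1024 rather than a missing stunnel option. Raising the limit in whatever wrapper starts stunnel is the usual approach, for example:

        # in the init script or wrapper, before the stunnel binary is executed
        ulimit -n 8192

    or, if stunnel is started from a login session, via /etc/security/limits.conf for the user it runs as. After restarting, the startup log should report a correspondingly higher client limit.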

    Read the article

  • Routing to various node.js servers on the same machine

    - by Dtang
    I'd like to set up multiple node.js servers on the same machine (listening on different ports) for different projects, so I can pull any one down to edit its code without affecting the others. However, I want to be able to access these web apps from a browser without typing the port number, by mapping different URLs to different ports: e.g. 45.23.12.01/app - 45.23.12.01:8001. I've considered using node-http-proxy for this, but it doesn't yet support SSL. My hunch is that nginx might be the most suitable. I've never set up nginx before; what configuration do I need? The examples of config files I've seen only deal with subdomains, which I don't have. Alternatively, is there a better (stable, hassle-free) way of hosting multiple apps under the same IP address?
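
    A minimal hedged nginx sketch of the prefix-to-port mapping described above (the second app and all ports are illustrative; SSL would be added with "listen 443 ssl" plus certificate directives on the same server block):

        server {
            listen 80;
            server_name 45.23.12.01;

            location /app/ {
                proxy_pass http://127.0.0.1:8001/;   # trailing slash strips the /app/ prefix
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }

            location /other/ {
                proxy_pass http://127.0.0.1:8002/;
            }
        }

    The main caveat with path-based routing is that each node app has to generate links and asset URLs that work under its prefix, which is why many setups end up preferring subdomains even on a single IP.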

    Read the article

  • Linux - Create ftp account with read/write access to only 1 folder

    - by Gublooo
    Hey guys... I have never worked on Linux and don't plan on working on it either; the only command I probably know is "ls" :) I am hosting my website on Eapps and use their cPanel to set everything up, so I have never worked with Linux directly. Now I have a one-time case where I need to give a contractor access to fix the CSS issues on my website. He basically needs FTP (read/write) access to certain folders. At a high level, this is my code structure:

        /home/webadmin/example.com/html/images
                                       /css
                                       /js
                                       /login.php
                                       /facebook.php
        /home/webadmin/example.com/application/library
                                               /views
                                               /models
                                               /controllers
                                               /config
                                               /bootstrap.php
        /home/webadmin/example.com/cgi-bin

    I want the new user to have access to only these folders:

        /home/webadmin/example.com/html/js
        /home/webadmin/example.com/html/css
        /home/webadmin/example.com/application/views

    He should not even be able to view the contents of the other folders, including files like bootstrap.php or login.php. If any sysadmins can help me set this account up, I will really appreciate it. Thanks
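
    One common way to do this, sketched here under the assumption that the server runs a chroot-capable FTP daemon such as vsftpd and that shell access is available (none of which is stated in the question), is to create a dedicated user locked into its own home directory and bind-mount only the three folders into it:

        useradd -d /home/cssfix -s /usr/sbin/nologin cssfix && passwd cssfix   # nologin may need listing in /etc/shells for FTP logins
        mkdir -p /home/cssfix/{js,css,views}
        mount --bind /home/webadmin/example.com/html/js /home/cssfix/js
        mount --bind /home/webadmin/example.com/html/css /home/cssfix/css
        mount --bind /home/webadmin/example.com/application/views /home/cssfix/views
        # in vsftpd.conf: chroot_local_user=YES (and write_enable=YES), so the user
        # cannot browse above /home/cssfix at all

    The bind mounts would need matching write permissions for the cssfix user (or a shared group) on the three real directories, plus /etc/fstab entries if they should survive a reboot. On a managed Eapps/cPanel account a similar effect is often easier to get through the panel's FTP accounts feature, which lets an extra FTP user be restricted to a chosen directory (though only one directory per account).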

    Read the article

  • Prevent Internet Explorer 8 from exiting when closing the last tab

    - by LongTTH
    Firefox doesn't exit when I close the last tab; that just takes some customization in about:config. But now my company has to work with a website that only works well in Internet Explorer. I currently use Internet Explorer 8 on Windows XP SP3. So, how do I prevent Internet Explorer 8 from exiting when closing the last tab? I've searched for such a feature for a while but found nothing helpful. FYI, I gave up on my "customization habit" and now I'm trying to live in the IE world... phew...

    Read the article

  • Which are the most important directories to back up on a Linux server?

    - by QAH
    Hello everyone! I'm running an Ubuntu 9.10 Linux server. I'm trying to find a way to back up the machine while it is running, and from what I can see this rules out the disk-clone utilities; all of the disk-cloning tools I have seen for Linux require you to reboot into a special live CD. So my question is this: what is the best solution for backing up the system while it is running? Also, I don't really care too much about the OS config; I just want to be able to keep my stored files and the programs I have installed on it. Thanks
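
    As a hedged starting point (the paths are the usual suspects rather than a definitive list, and /backup/target is a placeholder for wherever the copies should land):

        dpkg --get-selections > /backup/target/package-selections.txt   # record which packages are installed
        rsync -aAX --delete /etc /home /root /var/www /var/log /opt /srv /backup/target/
        # databases should be dumped rather than copied while running, e.g.:
        mysqldump --all-databases > /backup/target/mysql-all.sql

    Re-installing from the package list plus restoring /etc and the data directories covers "my programs and my files" without needing an offline clone of the whole disk.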

    Read the article

  • IIS7 FastCGI downloads quit at 4128760 bytes on slow connections

    - by eingko
    I'm using FastCGI via IIS7 to host a PHP application. For whatever reason, downloads that are streamed via PHP (i.e. a script that outputs a file's bytes as the response) work perfectly on high-speed connections, but on anything slower (even DSL) they quit at EXACTLY 4128760 bytes (~3.9 MB), which makes me think it's a configuration issue... We only started having this problem when we switched from Apache to IIS; this also points to a configuration problem, I think. But if it's a configuration issue, why would it only affect slower connections? Does anyone know where (or how) I could change a setting like this? I've tried changing the idleTimeout, executionTimeout, and activityTimeout values in my web.config, but this hasn't helped at all. Any help or direction would be greatly appreciated. Thanks in advance.
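
    One more setting worth looking at, offered as an assumption rather than a confirmed cause: the IIS FastCGI handler buffers responses up to a responseBufferLimit of roughly 4 MB by default, and slow clients are the ones still mid-transfer when buffering behaviour and timeouts interact. Setting the limit to 0 for the PHP handler is a common experiment (the handler name below is the typical one and may differ on this server):

        %windir%\system32\inetsrv\appcmd.exe set config /section:handlers "/[name='PHP_via_FastCGI'].responseBufferLimit:0"

    Running "appcmd list config /section:handlers" will show the exact handler name to use.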

    Read the article

  • vsftpd: refusing to run with writable root inside chroot

    - by MrROY
    I want to set up an anonymous-only FTP server (able to upload files). Here is my config file:

        listen=YES
        anonymous_enable=YES
        anon_root=/var/www/ftp
        local_enable=YES
        write_enable=YES
        anon_upload_enable=YES
        anon_mkdir_write_enable=YES
        xferlog_enable=YES
        connect_from_port_20=YES
        chroot_local_user=YES
        dirmessage_enable=YES
        use_localtime=YES
        secure_chroot_dir=/var/run/vsftpd/empty
        rsa_cert_file=/etc/ssl/private/vsftpd.pem
        pam_service_name=vsftpd

    But when I try to connect to it:

        kan@kan:~$ ftp yxxxng.bej
        Connected to yxxx.
        220 (vsFTPd 2.3.5)
        Name (yxxxg.bej:kan): anonymous
        331 Please specify the password.
        Password:
        500 OOPS: vsftpd: refusing to run with writable root inside chroot()
        Login failed

    Can anyone help?
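
    The message means vsftpd refuses a chroot whose root directory is itself writable; for an anonymous-upload setup the usual pattern is a read-only anon_root with a writable subdirectory for uploads. A hedged sketch using the path from the config above:

        chown root:root /var/www/ftp
        chmod 755 /var/www/ftp                 # the chroot root must not be writable
        mkdir -p /var/www/ftp/uploads
        chown ftp:ftp /var/www/ftp/uploads     # "ftp" is the usual anonymous user on Ubuntu
        chmod 755 /var/www/ftp/uploads

    Some vsftpd builds also accept allow_writeable_chroot=YES in vsftpd.conf as an escape hatch, but whether the 2.3.5 package on this system includes that option depends on the distribution's patches.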

    Read the article
