Search Results

Search found 4311 results on 173 pages for 'unix utils'.


  • vsFTPd and iptables - how to configure them in CentOS 5.5?

    - by Vincenzo
    I've installed vsFTPd on two CentOS 5.5 servers and added this rule to the iptables configuration on both:

        -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 21 -j ACCEPT

    This is apparently not enough: when I try to upload a file from one server to the other, I get the following result (the IP address is masked):

        # ftp 99.99.99.99
        Connected to …com (99.99.99.99).
        220 (vsFTPd 2.0.5)
        Name (99.99.99.99:root): vinny
        331 Please specify the password.
        Password:
        230 Login successful.
        Remote system type is UNIX.
        Using binary mode to transfer files.
        ftp> ls
        227 Entering Passive Mode (99,99,99,99,107,74)
        ftp: connect: No route to host

    I've found a few articles on the net about the second rule I have to add to iptables, but I couldn't find the right syntax for it. Could you please help?
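    A sketch of the usual fix, assuming the stock CentOS 5 RH-Firewall-1-INPUT chain from the question: the FTP connection-tracking helper lets the kernel accept the data connection negotiated in passive mode, or alternatively vsftpd can be pinned to a fixed passive port range that is opened explicitly. The 50000-50100 range is an arbitrary example.

        # Load the FTP connection-tracking helper so the passive-mode data
        # connection is classified as RELATED (CentOS 5 ships it as ip_conntrack_ftp):
        modprobe ip_conntrack_ftp
        # To persist across reboots, add ip_conntrack_ftp to the IPTABLES_MODULES
        # line in /etc/sysconfig/iptables-config.

        # Make sure RELATED/ESTABLISHED traffic is accepted (the stock CentOS
        # firewall usually has this already; shown here for completeness):
        iptables -I RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Alternative without the helper: pin vsftpd to a fixed passive range in
        # /etc/vsftpd/vsftpd.conf:
        #   pasv_min_port=50000
        #   pasv_max_port=50100
        # and open that range:
        iptables -I RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 50000:50100 -j ACCEPT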

    Read the article

  • How do you pronounce Linux?

    - by Xerxes
    I'm tired of the old fart at work who keeps coming up to my desk and telling me all about his "years of experience working with Unix and Lye-nix". I couldn't vent at him because that would be wrong, so I'm going to vent here - because obviously that's the right thing to do... Anyway, for all the people who engage in this disgusting behaviour - the pronunciation is.... (Hmm - anyone know phonetics?) - "Li-nix". Note: Despite hating him for this - he is otherwise a very nice (but sometimes rather annoying) person. Now... to formally make this a "question" - could someone write out the phonetics for pronouncing "Linux", and also for the notorious "Lye-nix", so I can make a note of them for future ventings? I think this is right... /ˈlɪnəks/, NOT /ˈlaɪnəks/. ...or perhaps... /ˈlɪnʊks/, NOT /ˈlaɪnʊks/. Can someone confirm the correct phonetics? (Listen to Linus on the matter.)

    Read the article

  • Nginx + PHP-FPM on Centos 6.5 gives me 502 Bad Gateway (fpm error: unable to read what child say: Bad file descriptor)

    - by Latheesan Kanes
    I am setting up a standard LEMP stack. My current setup is giving me the following error: 502 Bad Gateway. This is what is currently installed on my server: Here are the configurations I've created/updated so far - can someone take a look at the following and see where the error might be? I've already checked my logs; there's nothing in there (http://i.imgur.com/iRq3ksb.png), and I saw the error quoted in the title (unable to read what child say: Bad file descriptor) in the /var/log/php-fpm/error.log file. Side note: both nginx and php-fpm have been configured to run under a local account called www-data, and the following folders exist on the server.

    nginx.conf (global nginx configuration):

        user www-data;
        worker_processes 6;
        worker_rlimit_nofile 100000;
        error_log /var/log/nginx/error.log crit;
        pid /var/run/nginx.pid;

        events {
            worker_connections 2048;
            use epoll;
            multi_accept on;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            # cache information about FDs; frequently accessed files can boost performance
            open_file_cache max=200000 inactive=20s;
            open_file_cache_valid 30s;
            open_file_cache_min_uses 2;
            open_file_cache_errors on;

            # to boost IO on HDD we can disable access logs
            access_log off;

            # copies data between one FD and another from within the kernel,
            # faster than read() + write()
            sendfile on;

            # send headers in one piece; better than sending them one by one
            tcp_nopush on;

            # don't buffer data sent; good for small data bursts in real time
            tcp_nodelay on;

            # server will close connection after this time
            keepalive_timeout 60;

            # number of requests client can make over keep-alive -- for testing
            keepalive_requests 100000;

            # allow the server to close connection on non-responding client; this will free up memory
            reset_timedout_connection on;

            # request timed out -- default 60
            client_body_timeout 60;

            # if client stops responding, free up memory -- default 60
            send_timeout 60;

            # reduce the data that needs to be sent over the network
            gzip on;
            gzip_min_length 10240;
            gzip_proxied expired no-cache no-store private auth;
            gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
            gzip_disable "MSIE [1-6]\.";

            # Load vHosts
            include /etc/nginx/conf.d/*.conf;
        }

    conf.d/www.domain.com.conf (my vhost entry):

        ## Nginx php-fpm Upstream
        upstream wwwdomaincom {
            server unix:/var/run/php-fcgi-www-data.sock;
        }

        ## Global Config
        client_max_body_size 10M;
        server_names_hash_bucket_size 64;

        ## Web Server Config
        server {
            ## Server Info
            listen 80;
            server_name domain.com *.domain.com;
            root /home/www-data/public_html;
            index index.html index.php;

            ## Error log
            error_log /home/www-data/logs/nginx-errors.log;

            ## DocumentRoot setup
            location / { try_files $uri $uri/ @handler; expires 30d; }

            ## These locations would be hidden by .htaccess normally
            #location /app/ { deny all; }

            ## Disable .htaccess and other hidden files
            location /. { return 404; }

            ## Magento uses a common front handler
            location @handler { rewrite / /index.php; }

            ## Forward paths like /js/index.php/x.js to relevant handler
            location ~ .php/ { rewrite ^(.*.php)/ $1 last; }

            ## Execute PHP scripts
            location ~ \.php$ {
                try_files $uri =404;
                expires off;
                fastcgi_read_timeout 900;
                fastcgi_pass wwwdomaincom;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }

            ## GZip Compression
            gzip on;
            gzip_comp_level 8;
            gzip_min_length 1000;
            gzip_proxied any;
            gzip_types text/plain application/xml text/css text/js application/x-javascript;
        }

    /etc/php-fpm.d/www-data.conf (my php-fpm pool config):

        ## Nginx php-fpm Upstream
        upstream wwwdomaincom {
            server unix:/var/run/php-fcgi-www-data.sock;
        }

        ## Global Config
        client_max_body_size 10M;
        server_names_hash_bucket_size 64;

        ## Web Server Config
        server {
            ## Server Info
            listen 80;
            server_name domain.com *.domain.com;
            root /home/www-data/public_html;
            index index.html index.php;

            ## Error log
            error_log /home/www-data/logs/nginx-errors.log;

            ## DocumentRoot setup
            location / { try_files $uri $uri/ @handler; expires 30d; }

            ## These locations would be hidden by .htaccess normally
            #location /app/ { deny all; }

            ## Disable .htaccess and other hidden files
            location /. { return 404; }

            ## Magento uses a common front handler
            location @handler { rewrite / /index.php; }

            ## Forward paths like /js/index.php/x.js to relevant handler
            location ~ .php/ { rewrite ^(.*.php)/ $1 last; }

            ## Execute PHP scripts
            location ~ \.php$ {
                try_files $uri =404;
                expires off;
                fastcgi_read_timeout 900;
                fastcgi_pass wwwdomaincom;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }

            ## GZip Compression
            gzip on;
            gzip_comp_level 8;
            gzip_min_length 1000;
            gzip_proxied any;
            gzip_types text/plain application/xml text/css text/js application/x-javascript;
        }

    I've got a file in /home/www-data/public_html/index.php with the code <?php phpinfo(); ?> (file uploaded as user www-data).
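    For comparison, a minimal sketch of what the php-fpm pool file itself would normally contain (the section pasted above under that heading repeats the nginx vhost rather than a pool definition). The socket path and user names follow the question; the pm values are arbitrary examples, and listen.owner/listen.group matter because nginx must be able to write to the socket.

        # /etc/php-fpm.d/www-data.conf - hypothetical pool definition matching the
        # "unix:/var/run/php-fcgi-www-data.sock" upstream used in the vhost:
        cat > /etc/php-fpm.d/www-data.conf <<'EOF'
        [www-data]
        user = www-data
        group = www-data
        ; listen on the same socket path nginx's upstream points at
        listen = /var/run/php-fcgi-www-data.sock
        ; nginx (running as www-data) must be able to read/write the socket
        listen.owner = www-data
        listen.group = www-data
        listen.mode = 0660
        pm = dynamic
        pm.max_children = 20
        pm.start_servers = 4
        pm.min_spare_servers = 2
        pm.max_spare_servers = 6
        EOF
        service php-fpm restart && service nginx restart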

    Read the article

  • Linux - Multiple service statuses with one command

    - by Jimbo
    I'm trying to retrieve a list of multiple service statuses in Unix. I'm using the service command (man page). The service names all start with the transmission-daemon string, for example, and I require the ability to list multiple services' statuses with a single command. Here is what I'm currently trying (and failing) with.

    Grabbing a list of services using grep and passing them to service:

        service $(ls /etc/init.d | grep "transmission-daemon") status

    Listing all statuses and then grepping for them:

        service --status-all | grep "transmission-daemon"

    This produces the following, which isn't much help: How can I effectively achieve what I require with a single command, so that I can then continue piping to awk for further customisation? Desired example output:

        transmission-daemon   started
        transmission-daemon2  stopped
        transmission-daemon3  started
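    A sketch of one way to do this, looping over the matching init scripts directly; it assumes SysV-style scripts in /etc/init.d as in the question, and prints one "name status-line" pair per service so the result pipes cleanly into awk.

        # Query every init script whose name starts with transmission-daemon:
        for svc in /etc/init.d/transmission-daemon*; do
            name=$(basename "$svc")
            printf '%s %s\n' "$name" "$(service "$name" status 2>&1 | tail -n 1)"
        done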

    Read the article

  • Redirecting to Login page in apache

    - by Shailesh Sutar
    I am working on OTRS and I want the OTRS login page to appear directly at otrs.mydomain.com. The machine is running CentOS release 6.2 (Final). Currently I access it using otrs.mydomain.com/otrs/customer.pl for customer login and otrs.mydomain.com/otrs/index.pl for admin login. I changed DocumentRoot to /opt/otrs but it's not working as it should. OTRS is installed in /opt/otrs/ and I am using Apache/2.2.15 (Unix). Now I am stuck.
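    A sketch of one common approach, assuming the stock OTRS Apache configuration (which serves the application under the /otrs/ ScriptAlias) stays in place: leave DocumentRoot alone and redirect requests for the bare hostname to the customer login instead. The file name below is hypothetical.

        cat > /etc/httpd/conf.d/otrs-redirect.conf <<'EOF'
        <VirtualHost *:80>
            ServerName otrs.mydomain.com
            # Keep the stock OTRS ScriptAlias configuration in place and only
            # redirect requests for "/" to the customer login page.
            RedirectMatch ^/$ /otrs/customer.pl
        </VirtualHost>
        EOF
        service httpd configtest && service httpd restart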

    Read the article

  • Timeout with Apache & PHP where each virtual host has its own user process

    - by acemtp
    I have 10 Unix users in /home/. Each user corresponds to a specific subdomain: for example, user www in /home/www/public_html serves www.mywebsite, and blog in /home/blog/public_html serves blog.mywebsite. The content is 90% PHP and 10% RoR. At the moment I use Apache + FastCGI, which uses SuexecUserGroup to run each process as the right user. It seems to work, but I see a strange behaviour where, after a few hours/days, the server stops answering (timeout) even though the CPU load is still very low (it's a big server). The Apache status page displays lots of "W" (Sending Reply) states, but there are still 50 idle workers, so it should be able to answer. On the older (much slower) server we had only one user and used mod_php, and we never had this issue. Is there another way to do this without FastCGI and SuexecUserGroup, or do you know what's going wrong?
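    A small diagnostic sketch for the next time it hangs, before changing the setup; it assumes mod_status (with ExtendedStatus On) is available and that the stuck "W" workers are waiting on FastCGI children. The PID is a placeholder.

        # Full scoreboard with per-request URLs and durations (apachectl
        # fullstatus needs a text browser such as lynx installed):
        apachectl fullstatus | head -n 80

        # Attach to one child stuck in "W" and see which syscall it is blocked in;
        # a read() hanging on a FastCGI socket points at a wedged PHP child:
        strace -p <pid-of-stuck-child> -e trace=network,read,write

        # Count PHP FastCGI children per user to spot a pool that has hit its limit:
        ps -eo user,comm | awk '/php/ { n[$1]++ } END { for (u in n) print u, n[u] }'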

    Read the article

  • How to figure out which directory is web server root?

    - by matt
    I want to view websites hosted on my Mac when running Windows in VMware Fusion. I have an entry in the Windows hosts file to enable the routing:

        # IP of my Mac / domain I use on the VM to access it
        192.168.1.70    mymac

    However, it resolves to an empty directory, as a 404 is generated. I can see from the access log on my Mac that everything is OK access-wise. Firefox on VMware reports the following response header:

        Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 PHP/5.3.1

    Any ideas how I can figure out which directory is being served? I am lost in a maze of twisty httpd.conf passages. localhost on my Mac resolves to my ~/Sites directory; 192.168.1.70 resolves to the same empty directory/404. Thanks.
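    A sketch of how to ask Apache itself, assuming the stock Apache that ships with Mac OS X (run these in Terminal on the Mac):

        # Show which config files and virtual hosts are active, and which one is the default:
        apachectl -S

        # Print every DocumentRoot the loaded configuration defines:
        grep -Ri '^[[:space:]]*DocumentRoot' /etc/apache2/ 2>/dev/null

        # Confirm which vhost answers for the hostname the VM uses:
        curl -s -I -H 'Host: mymac' http://192.168.1.70/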

    Read the article

  • The Coolest Server Names

    - by deadprogrammer
    These days server naming is a bit of a lost art. Most large organizations don't allow for fanciful names and name their servers with jumbles of digits and letters. In the olden days just about every system administrator came up with a unique naming scheme, well, sometimes unique - many just settled for Star Trek characters. To this day my favorite server name is Qantas - a Unix server that Joel Spolsky has or used to have. Why Qantas? You'd have to ask Rainman. So my question is this - what is the coolest server name or naming convention that you encountered? Let the geekfest begin. This question is marked "community wiki", so I am not getting any "rep" from it.

    Read the article

  • View rotated log files Mac OS X Server (*.?.gz)

    - by Meltemi
    Trying to look at some of our older log files, I find they're cryptic "Unix Executable Files". This particular server I'm working with is an older Mac OS X Server (10.4 - Tiger).

        -rw-r-----   1 root  admin   36  1 Jun 15:48 wtmp
        -rw-r--r--   1 root  admin  578 27 May 17:40 wtmp.0.gz
        -rw-r-----   1 root  admin   89 26 Apr 13:57 wtmp.1.gz
        -rw-r-----   1 root  admin   78 29 Mar 16:43 wtmp.2.gz
        -rw-r-----   1 root  admin   69 15 Feb 17:21 wtmp.3.gz
        -rw-r-----   1 root  admin  137 16 Jan 13:09 wtmp.4.gz

    I'm using zless to try and view the contents of the .gz files, and what I see is unreadable:

        ... <DF>^R<AF>ttyp1^@^@^@joe54^@^@^@^@^@108.184.63.22^@^@^@^@K<DF>"<B8>ttyp1^@^@^@^@^@
        ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@K<DF>%<A1>console^@^@^@^@^@^@^@^@^@
        ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@K<E0>1 ~^@^@^@^@^@^@^@shutdown^@^@^@^@^@^@^@^@
        ^@^@^@^@^@^@^@^@K<E0>1^L~^@^@^@^@^@^@^@reboot^@^@^@^@^@^@ ...

    The same goes for system.log.0.gz, etc. - anything that's been rolled into compressed .gz files. What am I missing?
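    wtmp is a binary login-accounting database, not a text log, so zless shows raw records. A sketch of how to read the rotated copies, assuming the BSD last(1) on 10.4 accepts -f for an alternate file (as the FreeBSD version does):

        # Decompress a rotated copy somewhere temporary, then let last(1) parse it:
        gunzip -c /var/log/wtmp.0.gz > /tmp/wtmp.0
        last -f /tmp/wtmp.0

        # When unsure what a rotated file actually contains, check before paging it:
        gunzip -c /var/log/system.log.0.gz | file -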

    Read the article

  • How to troubleshoot whether a zip file is valid or simply too big to be unzipped?

    - by mireille raad
    Hello, I am trying to unzip a file of about 2 GB and I am getting the following error:

        $ unzip CLTE_C_08.zip
        Archive:  CLTE_C_08.zip
          End-of-central-directory signature not found.  Either this file is not
          a zipfile, or it constitutes one disk of a multi-part archive.  In the
          latter case the central directory and zipfile comment will be found on
          the last disk(s) of this archive.
        unzip:  cannot find zipfile directory in one of CLTE_C_08.zip or
                CLTE_C_08.zip.zip, and cannot find CLTE_C_08.zip.ZIP, period.

    After some googling, some people say this error appears because the file is too big, others say the file is corrupt, and others say it might not be a Unix archive. So my question: how do I find out whether the file is a valid archive on my CentOS box, and what is the command/trick to uncompress big files (if any)? Thanks in advance :)
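    A sketch of the usual checks, assuming a CentOS box with the stock Info-ZIP tools; unzip builds without large-file/ZIP64 support choke on archives around 2 GB and larger, so checking the tool matters as much as checking the file.

        # What does the file actually look like? ("Zip archive data" vs. something else)
        file CLTE_C_08.zip
        ls -l CLTE_C_08.zip        # compare the size against what the sender expected

        # Integrity test without extracting:
        unzip -t CLTE_C_08.zip

        # If the archive itself is fine, try tools with solid ZIP64/large-file support:
        yum install p7zip          # package name may differ depending on your repos
        7za x CLTE_C_08.zip
        jar xf CLTE_C_08.zip       # the JDK's jar tool also reads zip archives

        # If the file really is truncated or corrupt, attempt a repair into a new archive:
        zip -FF CLTE_C_08.zip --out CLTE_C_08-fixed.zip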

    Read the article

  • sed command - remove virus from WordPress [duplicate]

    - by EliaszKubala
    This question already has an answer here: How do I deal with a compromised server? (12 answers)

    I have malicious code in every PHP file. The malicious code is automatically pasted at the beginning of each file, and I want to remove it with a Unix command from the console. This is the malicious code:

        <?php $guobywgpku = '..... u=$bhpegpvvmc-1; ?>

    I wrote this RegExp, "/<\?php \$guobywgpku.*\?>/m", and it works - I tested it here. The problem is writing a command which removes this malicious code from every PHP file on the server. Please help me. Right now I have something like this:

        sed "/<\?php \$guobywgpku.*\?>/m" index.php
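    A sketch of one way to run the cleanup across the whole tree, assuming the injected code sits on a single line at the top of each file and the variable name is the same everywhere; back everything up first, and note the character class below stops at the first ">" and may need widening if the payload itself contains ">".

        # Work on a copy, never on the live tree directly (path is an example):
        cp -a /var/www/wordpress /var/www/wordpress.bak

        # Preview which files contain the injected variable:
        grep -rl '\$guobywgpku' /var/www/wordpress --include='*.php'

        # Strip the injected block in place; sed needs -i plus an s/// command
        # (the bare /pattern/m from the question is a Perl-style match, not a sed script):
        find /var/www/wordpress -name '*.php' \
          -exec sed -i 's/<?php \$guobywgpku[^>]*?>//' {} +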

    Read the article

  • How to call a program and exit from the shell (the caller) while the program stays active?

    - by Jack
    I want to run a program with a GUI by typing into Konsole: foo args … and then exit from the shell (the caller) while the program (foo) stays active. How do I do this? Is there a Linux/Unix built-in command/program to do it? I'm not a shell man, really. I know it's possible by writing a small program in C or C++ (or any other programming language with a small I/O interface on POSIX) using fork() and one of the exec*() family of functions. That may take some time, so I'll do it only if there is no native solution. Sorry for my bad English; it's not my native language. Also, I'm not sure about the tags - please edit them for me if I'm wrong. If it matters, I'm using openSUSE 10.x.
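    A few shell-native ways to do this; all are sketches and assume a bash-like shell running in Konsole.

        # 1. Background the program, detach it from the shell's job table, then close the shell:
        foo args & disown
        exit

        # 2. Same idea, but also immune to SIGHUP and with output kept out of the terminal:
        nohup foo args >/dev/null 2>&1 &
        exit

        # 3. Start it in a fresh session so it has no controlling terminal at all:
        setsid foo args >/dev/null 2>&1 </dev/null &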

    Read the article

  • Why do we still have to use drive letters to identify file systems?

    - by Charles E. Grant
    A friend has run into a problem where they installed Windows 7 from an external drive, and the internal boot drive is now assigned to H:. Theoretically this shouldn't cause problems, because there are programming interfaces for getting the drive letter of the system drive. In practice, though, there are quite a few programs that assume C: is the only possible location for the system directories, and they refuse to run with the system directories on H:. That's not Microsoft's fault, but it's a pain nonetheless. The general consensus seems to be that a re-install, setting the internal boot drive to C:, is the only way to fix these problems. UNIX-like systems display all file systems in a single unified directory tree and mostly seem to avoid problems like this. Is it possible to configure a Windows system without reference to drive letters, or does the importance of backwards compatibility mean that Windows will be working with drive letters from now until doomsday?

    Read the article

  • PHP-FPM runs PHP scripts as root

    - by fwalch
    I have a web server setup using nginx and PHP-FPM listening on a Unix socket. In my php-fpm.conf, I have specified:

        user = www
        group = www

    When I run ps aux, I can see that the php-fpm worker processes run as www; the php-fpm master process runs as root. However, I noticed that PHP scripts are executed as root - at least, that's the output of echo get_current_user(); What can I do to run scripts as the www user? How can this even happen if the worker processes run as www?
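    Worth noting that get_current_user() returns the owner of the current script file, not the user the worker runs as, so a root-owned script reports "root" even when executed by www. A small sketch to confirm which user really executes requests; the path is just an example, and the posix extension is assumed to be enabled.

        # Drop a test script into the document root, owned by the web user:
        cat > /srv/www/whoami.php <<'EOF'
        <?php
        // get_current_user() = owner of this file.
        // posix_geteuid()    = effective user of the PHP-FPM worker handling the request.
        $real = posix_getpwuid(posix_geteuid());
        echo "script owner:   " . get_current_user() . "\n";
        echo "executing user: " . $real['name'] . "\n";
        EOF
        chown www:www /srv/www/whoami.php
        curl http://localhost/whoami.php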

    Read the article

  • tar incremental backup is backing everything up, every time

    - by Cyclic
    I made an incremental backup about 10 months ago (on Jan 27, 2013), creating a .snar metadata file. Now, when I try to make an incremental backup using

        tar --create --file=dropbox_incremental_1.tar --listed-incremental=dropbox_0.snar Dropbox

    the command just re-backs up everything. I'm not an expert on Unix timestamps, but I noticed that virtually all of my directory timestamps are way more recent than the last time they changed. For my actual files, they look like this:

        Access: 2013-03-12 19:04:51.000000000 -0500
        Modify: 2012-09-30 15:10:47.000000000 -0500
        Change: 2013-03-12 19:04:51.306209672 -0500

    The Modify timestamp seems correct, but the files were definitely not changed (at least not by anything I know of) at the times they claim, and these files still go into the incremental archive. What's happening here? Is there a way to tell tar to look at the Modify timestamp? Isn't that what it's supposed to be doing?
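    A hedged note on the usual culprits: GNU tar's --listed-incremental treats a file as changed if its ctime or mtime is newer than the snapshot, and it also re-dumps everything when device numbers change (new disk, LVM, different mount). The ctimes above were all bumped after the snapshot was taken (a chmod/chown, a restore, or a sync tool will do that), and there is no option to ignore ctime - but the device case can be ruled out with tar 1.22 or later, and after one fresh full backup the incrementals should shrink again.

        # Ignore device-number changes when deciding what has changed
        # (keep using the same .snar file; tar updates it in place):
        tar --create \
            --file=dropbox_incremental_1.tar \
            --listed-incremental=dropbox_0.snar \
            --no-check-device \
            Dropbox

        # Dry-run style check: copy the snapshot first so the real one is untouched,
        # then list what tar would include:
        cp dropbox_0.snar /tmp/test.snar
        tar --create --file=/dev/null --listed-incremental=/tmp/test.snar \
            --no-check-device -v Dropbox | head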

    Read the article

  • How to remove a package I compiled and installed manually?

    - by macek
    I recently compiled and installed Git on a new install of Mac OS 10.6, but it didn't install the documentation. I now realize I should've used the precompiled package offered here: http://code.google.com/p/git-osx-installer/downloads/list How do I remove all the files that I added to my system by running make install from the Git source tree? Edit: I've had similar problems in the past with other packages, too - for example, running ./configure with the incorrect --prefix= or something. What's the general practice for removing Unix packages installed from source?
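    A sketch of the usual options, assuming the source tree used for the original make install is still around (the Makefile, not the installed binaries, knows what was installed); the paths are hypothetical.

        # 1. Check whether the project's Makefile offers an uninstall target
        #    (many autoconf projects do); run it from the same source tree:
        cd ~/src/git-1.x.y
        grep -n '^uninstall:' Makefile && sudo make uninstall

        # 2. Otherwise, repeat the install into a scratch directory to get the exact
        #    file list, then delete those paths from the real prefix:
        make install DESTDIR=/tmp/git-staging
        ( cd /tmp/git-staging && find . -type f -o -type l ) | sed 's|^\.||' > /tmp/git-files.txt
        # Review /tmp/git-files.txt, then remove each listed path:
        while read -r f; do sudo rm -f "$f"; done < /tmp/git-files.txt

        # For the future, tools like GNU Stow or checkinstall keep source installs removable.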

    Read the article

  • Upgraded to Ubuntu 12.04 from 10.04 and have to transfer database from Postgresql 8.4 to 9.1

    - by Stpn
    I upgraded a server running a Rails application from Ubuntu 10.04 to 12.04 and now cannot connect to the PostgreSQL database... Here is the error message from the Rails app:

        could not connect to server: No such file or directory
            Is the server running locally and accepting
            connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

    Also, pg_ctl start is not recognized as a command. EDIT: It turns out my database is on PostgreSQL 8.4 and my server is now running 9.1, so all the database files/configs are for 8.4. How can I transfer them? Just straight-copy the old pg_hba.conf?
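    A sketch of the Debian/Ubuntu way to move a cluster between major versions using the postgresql-common wrapper tools; it assumes the old 8.4 packages (or at least its data directory) are still present - 12.04 no longer ships postgresql-8.4, so that package may have to come from the previous release's archive.

        # See which clusters exist, their versions, ports, and data directories:
        pg_lsclusters

        # Upgrade the 8.4 "main" cluster; this creates a 9.1/main cluster,
        # migrates the data, and moves the old cluster out of the way:
        sudo pg_upgradecluster 8.4 main

        # Once the application works against 9.1, drop the old cluster:
        sudo pg_dropcluster 8.4 main

        # Config files live per version/cluster, e.g. /etc/postgresql/9.1/main/pg_hba.conf;
        # re-apply local pg_hba.conf / postgresql.conf changes by hand rather than
        # copying the 8.4 files verbatim.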

    Read the article

  • How to run django on localhost with nginx and uwsgi?

    - by user2426362
    How do I run Django on localhost with nginx and uwsgi? This is my config, but it does not work.

    nginx:

        server {
            listen 80;
            server_name localhost;
            access_log /var/log/nginx/localhost_access.log;
            error_log /var/log/nginx/localhost_error.log;

            location / {
                uwsgi_pass unix:///tmp/localhost.sock;
                include uwsgi_params;
            }
            location /media/ {
                alias /home/user/projects/zt/myproject/myproject/media/;
            }
            location /static/ {
                alias /home/user/projects/zt/myproject/myproject/static/;
            }
        }

    uwsgi:

        [uwsgi]
        vhost = true
        plugins = python
        socket = /tmp/localhost.sock
        master = true
        enable-threads = true
        processes = 2
        wsgi-file = /home/user/projects/zt/myproject/myproject/wsgi.py
        virtualenv = /home/user/projects/zt
        chdir = /home/user/projects/zt/myproject
        touch-reload = /home/user/projects/zt/myproject/reload

    This config works on my Ubuntu server with a normal domain (not localhost), but on localhost it does not. If I open localhost in a web browser I get "Welcome to nginx!"
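    The "Welcome to nginx!" page suggests the request is being answered by the distribution's default server block rather than the vhost above, so nginx never reaches uwsgi_pass. A sketch of the usual checks on Ubuntu; paths assume the stock packaging.

        # Make sure the vhost above is enabled and the default site is not
        # shadowing server_name localhost:
        ls /etc/nginx/sites-enabled/
        sudo rm /etc/nginx/sites-enabled/default     # or mark your vhost as default_server

        # Validate and reload the configuration:
        sudo nginx -t && sudo service nginx reload

        # Confirm the uwsgi socket exists and is readable/writable by the nginx
        # user (www-data on Ubuntu); chmod-socket in the [uwsgi] section can relax this:
        ls -l /tmp/localhost.sock
        sudo -u www-data test -r /tmp/localhost.sock -a -w /tmp/localhost.sock && echo ok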

    Read the article

  • Virtual Network Printer

    - by user113720
    I'm pretty new to Microsoft servers, so don't blame me if the question isn't that smart [I'm a Unix guy]. I need to install a virtual printer on a Microsoft Server 2008 R2. The requirements are:

        The printer must print to a file (whatever file... txt or pdf)
        The printer must run on a server
        The printer must accept plain text from a specific IP:port
        The connection between the device that prints and the server is a local network

    I've tried to install a virtual printer, but I cannot specify the constraint about the socket from which to receive the data to print. Thank you so much.

    Read the article

  • Set origin for forwarder based on virtual domain in Postfix

    - by Andrew Koester
    I currently have a machine set up to operate with two domains. The main domain uses standard Unix-user delivery, and the second domain is entirely virtual (using virtual_alias_domains and virtual_alias_maps), with the second domain only forwarding mail. However, when mail is forwarded, it still appears to be delivered by the host of the primary domain (presumably set by myorigin). Is it possible to arrange things so that when mail is forwarded through the virtual domain, it also appears to be delivered by that domain? The domain is on another IP and I'd like to use it so the mail stays consistent. Thanks.

    Read the article

  • postgresql 9.1 Multiple Cluster on same host

    - by user1272305
    I have two database clusters running on the same host, Ubuntu. My first cluster's port is set to the default, but my second cluster's port is set to 5433 in its postgresql.conf file. While everything is OK with local connections, I cannot connect to the second cluster on port 5433 using any of my tools, including pgAdmin. Please help. Is there any parameter I need to modify for the new cluster on port 5433?

    netstat -an | grep 5433 shows:

        tcp        0      0 0.0.0.0:5433            0.0.0.0:*               LISTEN
        tcp6       0      0 :::5433                 :::*                    LISTEN
        unix  2      [ ACC ]     STREAM     LISTENING     72842    /var/run/postgresql/.s.PGSQL.5433

    iptables -L shows:

        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination
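    Given that the cluster is already listening on 0.0.0.0:5433 and the firewall is wide open, the usual remaining suspects are the second cluster's own pg_hba.conf (each cluster has a separate one) and clients not specifying the non-default port. A sketch; the cluster name "main2", the subnet, and the user/database names are placeholders.

        # Each cluster has its own config tree on Ubuntu, e.g.
        #   /etc/postgresql/9.1/<clustername>/pg_hba.conf
        # Allow the remote clients (example subnet - adjust):
        echo 'host    all    all    192.168.1.0/24    md5' | \
            sudo tee -a /etc/postgresql/9.1/main2/pg_hba.conf

        # Reload just that cluster:
        sudo pg_ctlcluster 9.1 main2 reload

        # Test from the remote machine, explicitly naming the port:
        psql -h server.example.com -p 5433 -U myuser -d mydb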

    Read the article

  • Advanced command line editing for Windows?

    - by Ben Collins
    I'm a developer who was "born and bred" on Linux and BSD systems, and I've become accustomed to having advanced tools for the console (POSIX shells like bash, for example). My career has taken a twist that means I'm working in a Windows environment most of the time, and the console capabilities are really poor by comparison. The traditional Windows console environment is a complete joke, and even most of the third-party attempts at improving things aren't a lot better. PowerShell is a huge step in the right direction, but the console applications themselves are still way behind where Unix has been for 20 years. Does anyone know of a PowerShell console application that supports advanced command-line editing like POSIX shells do? I'm particularly interested in emacs-mode editing, and I'd also like to be able to resize my window to an arbitrary size, unlike the native console app that comes with Windows.

    Read the article

  • vim on Windows - turn syntax highlighting OFF

    - by sandro
    I have downloaded Vim 7.4 on Windows 7 64-bit and would like to turn off syntax highlighting. I have been using Vim for a long time on Unix, so I know to place "syntax off" in my vimrc. However, even though "syntax off" is in my vimrc, for some reason when I edit my vimrc the syntax highlighting is always on. I have deleted every other vimrc on my system (listed in the output of :version) except for my $HOME\_vimrc, but the syntax highlighting is still there (even after opening new cmd windows). Any help would be greatly appreciated.

    Read the article

  • Windows: Is there something to see and remotely control single(!) windows on a remote PC?

    - by Horst Walter
    Is there something that lets me see and control single(!) windows of PC1 on PC2 remotely - basically like what is possible with X Windows? I am not talking/asking about software which displays the whole desktop remotely (like VNC or Windows RDP), or an X server for Windows used to connect to Linux. The answer here ( Windows Remote Desktop Connection for just a single window (or a single program) ) requires Windows Server 2008; I need to run this on two Windows 7 machines. Example: PC1 shows three windows, and I transfer, see, and control window 2 on PC2. -- Edit -- I have checked whether there is an X server for Windows-to-Windows use, but there seems to be none other than for Unix <- Windows: http://stackoverflow.com/questions/40453/what-is-a-good-and-free-x-server-for-windows

    Read the article

  • How to properly start gvfs without gnome?

    - by 9000
    I have a Debian testing box with Xfce (no Gnome, no Nautilus). It has all the gvfs-related stuff installed, including all backends and the FUSE interface. But any attempt to gvfs-mount anything (like sftp://... or smb://...) fails with "error opening file: Operation not supported", and gigolo shows only 'unix device (file)' in the list of supported protocols. My ~/.gvfs has rwx permissions, and I'm a member of the fuse group; other FUSE-related stuff works for me. What do I do? Where do I look?
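    A sketch of the usual checks: the gvfs daemons are activated over the D-Bus session bus, so an Xfce session started without one (or without the backend daemons reachable) behaves exactly like this. Paths and host names are examples and may differ on Debian testing.

        # Is there a session bus, and is the main gvfs daemon running?
        env | grep DBUS_SESSION_BUS_ADDRESS
        ps aux | grep '[g]vfsd'

        # If the session has no bus, start the session (or a test shell) under one:
        dbus-launch --exit-with-session xfce4-session    # for .xinitrc / startx setups
        # or, for a quick one-off test:
        dbus-launch bash -c 'gvfs-mount sftp://user@host/ && gvfs-mount -l'

        # Check the backend daemons are installed where gvfs expects them:
        ls /usr/lib/gvfs/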

    Read the article
