Search Results

Search found 91112 results on 3645 pages for 'media server'.


  • Phpmyadmin location for nginx

    - by multiformeinggno
    I installed nginx and phpMyAdmin. I set up a domain with these parameters to test phpMyAdmin:

        server {
            listen 80;
            server_name domain.com;
            root /usr/share/phpmyadmin;
            index index.php;
            fastcgi_index index.php;
            location ~ \.php$ {
                include /etc/nginx/fastcgi.conf;
                fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin$fastcgi_script_name;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
            }
        }

    Everything works properly (if I visit the domain I can log in to phpMyAdmin). The problem is that this was only for testing; now I'd like to move it to my 'default' site, but I can't figure out how to serve it under /phpmyadmin. Here's the config for the 'default' nginx site (where I'd like to add the /phpmyadmin location):

        server {
            server_name blabla;
            access_log /var/log/nginx/$host.access.log;
            error_log /var/log/nginx/error.log;
            root /var/www/default;
            index index.php index.html;
            location / {
                try_files $uri $uri/ index.php;
            }
            location ~ \.php$ {
                include /etc/nginx/fastcgi.conf;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
            }
            ### NginX Status
            location /nginx_status {
                stub_status on;
                access_log off;
            }
            ### FPM Status
            location ~ ^/(status|ping)$ {
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                access_log off;
            }
        }
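    One possible way to serve it under /phpmyadmin inside the 'default' server (a sketch, not part of the original question; it assumes phpMyAdmin stays in /usr/share/phpmyadmin and PHP-FPM keeps the same socket, and the regex block must appear above the existing "location ~ \.php$" so it wins for phpMyAdmin scripts):

        # static files (css, js, images) under /phpmyadmin/...
        location /phpmyadmin {
            root /usr/share;
            index index.php;
        }
        # PHP scripts under /phpmyadmin/... -- place this before the generic .php location
        location ~ ^/phpmyadmin/.+\.php$ {
            root /usr/share;
            include /etc/nginx/fastcgi.conf;
            fastcgi_param SCRIPT_FILENAME /usr/share$fastcgi_script_name;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
        }

    With root set to /usr/share, a request for /phpmyadmin/index.php maps onto /usr/share/phpmyadmin/index.php, so no alias or rewrite is needed.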

    Read the article

  • IIS URL Rewrite HTTP to HTTPS with Port

    - by Andy Arismendi
    My website has two bindings: 1000 and 1443 (ports 80/443 are in use by another website on the same IIS instance). Port 1000 is HTTP, port 1443 is HTTPS. What I want to do is redirect any incoming request for http://server:1000 to https://server:1443. I'm playing around with the IIS 7 URL Rewrite Module 2.0 but I'm banging my head against the wall. Any insight is appreciated! BTW, the rewrite configuration below works great with a site that has an HTTP binding on port 80 and an HTTPS binding on port 443, but it doesn't work with my ports.

        <?xml version="1.0" encoding="utf-8"?>
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="HTTP to HTTPS redirect" stopProcessing="true">
                  <match url="(.*)" />
                  <conditions trackAllCaptures="true">
                    <add input="{HTTPS}" pattern="off" />
                  </conditions>
                  <action type="Redirect" redirectType="Found" url="https://{HTTP_HOST}/{R:1}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>
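    A sketch of one way to make the rule port-aware (my own assumption, not from the original post; it hard-codes the 1000/1443 bindings): capture the host name without its port in a second condition, then build the redirect URL with the HTTPS port spelled out. This fragment would replace the <rule> element above:

        <rule name="HTTP 1000 to HTTPS 1443" stopProcessing="true">
          <match url="(.*)" />
          <conditions trackAllCaptures="true">
            <add input="{HTTPS}" pattern="off" />
            <add input="{HTTP_HOST}" pattern="^([^:]+)" />
          </conditions>
          <action type="Redirect" redirectType="Found" url="https://{C:1}:1443/{R:1}" />
        </rule>

    Because {HTTP_HOST} includes ":1000" when the request comes in on the non-standard port, stripping the port with the condition capture and appending ":1443" explicitly avoids redirecting to the wrong port.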

    Read the article

  • s3cmd fails too many times

    - by alfish
    It used to be my favorite backup transport agent, but now I frequently get this result from s3cmd on the very same Ubuntu server/network:

        root@server:/home/backups# s3cmd put bkup.tgz s3://mybucket/
        bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
        36864 of 2711541519    0% in    1s    20.95 kB/s  failed
        WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
        WARNING: Retrying on lower speed (throttle=0.00)
        WARNING: Waiting 3 sec...
        bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
        36864 of 2711541519    0% in    1s    23.96 kB/s  failed
        WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
        WARNING: Retrying on lower speed (throttle=0.01)
        WARNING: Waiting 6 sec...
        bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
        28672 of 2711541519    0% in    1s    18.71 kB/s  failed
        WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
        WARNING: Retrying on lower speed (throttle=0.05)
        WARNING: Waiting 9 sec...
        bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
        28672 of 2711541519    0% in    1s    18.86 kB/s  failed
        WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
        WARNING: Retrying on lower speed (throttle=0.25)
        WARNING: Waiting 12 sec...
        bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
        28672 of 2711541519    0% in    1s    15.79 kB/s  failed
        WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
        WARNING: Retrying on lower speed (throttle=1.25)
        WARNING: Waiting 15 sec...
        bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
        12288 of 2711541519    0% in    2s     4.78 kB/s  failed
        ERROR: Upload of 'bkup.tgz' failed too many times. Skipping that file.

    This happens even for files as small as 100 MB, so I suppose it's not a size issue. It also happens when I use put with the --acl-private flag (s3cmd version 1.0.1). I'd appreciate any suggested solution, or a lightweight alternative to s3cmd. Thanks
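    One workaround sketch while the root cause is unclear (an assumption, not from the original post): break the archive into smaller pieces so each PUT is short-lived and an individual failure only costs one chunk. The chunk size and bucket layout are arbitrary:

        # split into 200 MB chunks, upload each one, and note any failures
        split -b 200m bkup.tgz bkup.tgz.part-
        for part in bkup.tgz.part-*; do
            s3cmd put "$part" s3://mybucket/parts/ || echo "$part" >> failed-parts.txt
        done
        # reassemble after downloading: cat bkup.tgz.part-* > bkup.tgz

    Newer s3cmd releases also grew native multipart-upload support, which may make this kind of manual splitting unnecessary.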

    Read the article

  • Installing Munin on Centos 6

    - by justinhj
    I've hit problems installing Munin on CentOS 6. This seems to be a conflict between parts of Perl; I think the version of Perl is newer on CentOS 6 (v5.10.1). When installing munin via yum I get errors relating to Perl dependencies, as below. I'm not a big enough whiz at yum or rpm to figure out the issue, and the Munin documentation does not yet cover installing on CentOS 6.0.

        Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch)  Requires: perl(Net::SNMP)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)  Requires: bitstream-vera-fonts
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)  Requires: perl(HTML::Template)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)  Requires: perl-Net-SNMP
        Error: Package: munin-common-1.4.2-0.rpl1.el5.noarch (/munin-common-1.4.2-0.rpl1.el5.noarch)  Requires: perl(:MODULE_COMPAT_5.8.8)
        Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch)  Requires: perl(DBI)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)  Requires: perl(Log::Log4perl)
        Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch)  Requires: perl(LWP::Simple)
        Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch)  Requires: perl(:MODULE_COMPAT_5.8.8)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)  Requires: perl(RRDs)
        Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch)  Requires: perl-Net-Server
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)  Requires: perl(Date::Manip)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)  Requires: perl(:MODULE_COMPAT_5.8.8)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)  Requires: perl-Net-Server
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)  Requires: perl(CGI::Fast)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)  Requires: perl(Time::HiRes)
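    One route worth trying (a sketch, not from the original post): the packages above are el5 builds that require perl 5.8.8 (perl(:MODULE_COMPAT_5.8.8)), while EPEL carries munin packages built against the CentOS 6 Perl. The release number and URL below are assumptions - use whatever epel-release is current for EL6:

        # enable EPEL (hypothetical package version/URL -- check for the current one)
        rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

        # then install the EL6 builds instead of the el5 ones
        yum install munin munin-node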

    Read the article

  • nginx with stub_status.. need help with nginx.conf

    - by Amar
    Hello, I am trying to set up nginx with stub_status so I can monitor nginx requests etc. with serverdensity.com. I needed to put something like this in nginx.conf:

        server {
            listen 82.113.147.xxx;
            location /nginx_status {
                stub_status on;
                access_log off;
                allow 82.113.147.xxx;
                deny all;
            }
        }

    And with this, monitoring actually works. However, it seems I lost the "include" part in my nginx.conf and now none of the vhosts in sites-enabled work. Here is a bit more of my nginx.conf:

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            server_tokens off;
            access_log /var/log/nginx/access.log;
            sendfile on;
            #tcp_nopush on;
            #keepalive_timeout 0;
            keepalive_timeout 65;
            tcp_nodelay on;
            gzip on;
            gzip_comp_level 2;
            gzip_proxied any;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
            server {
                listen 82.113.147.226;
                location /nginx_status {
                    stub_status on;
                    access_log off;
                    allow 82.113.147.226;
                    deny all;
                }
            }
        }

    Hope someone can help me with this, as I believe it's a minor issue; it's just that "I don't see it". Thanks.
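    One thing to check, plus a possible cleanup (both assumptions on my part, not from the original post): nginx matches the listen address before it looks at server_name, and a server block bound to a specific IP is preferred over vhosts that listen on *:80 for connections arriving at that address - which could explain why the vhosts stopped responding once the status server was added. Moving the status endpoint into its own file that the existing conf.d include already picks up keeps nginx.conf untouched; the server_name is hypothetical:

        # /etc/nginx/conf.d/status.conf
        server {
            listen 80;
            server_name status.localhost;      # only the monitoring agent uses this name
            location /nginx_status {
                stub_status on;
                access_log off;
                allow 82.113.147.226;
                allow 127.0.0.1;
                deny all;
            }
        }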

    Read the article

  • ipvsadm lists a few hosts by IP only, rest by name

    - by dmourati
    We use keepalived to manage our Linux Virtual Server (LVS) load balancer. The LVS VIPs are setup to use a FWMARK as configured in iptables. virtual_server fwmark 300000 { delay_loop 10 lb_algo wrr lb_kind NAT persistence_timeout 180 protocol TCP real_server 10.10.35.31 { weight 24 MISC_CHECK { misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.31" misc_timeout 30 } } real_server 10.10.35.32 { weight 24 MISC_CHECK { misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.32" misc_timeout 30 } } real_server 10.10.35.33 { weight 24 MISC_CHECK { misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.33" misc_timeout 30 } } real_server 10.10.35.34 { weight 24 MISC_CHECK { misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.34" misc_timeout 30 } } } http://www.austintek.com/LVS/LVS-HOWTO/HOWTO/LVS-HOWTO.fwmark.html [root@lb1 ~]# iptables -L -n -v -t mangle Chain PREROUTING (policy ACCEPT 182G packets, 114T bytes) 190M 167G MARK tcp -- * * 0.0.0.0/0 w1.x1.y1.4 multiport dports 80,443 MARK set 0x493e0 62M 58G MARK tcp -- * * 0.0.0.0/0 w1.x1.y2.4 multiport dports 80,443 MARK set 0x493e0 [root@lb1 ~]# ipvsadm -L IP Virtual Server version 1.2.1 (size=4096) Prot LocalAddress:Port Scheduler Flags -> RemoteAddress:Port Forward Weight ActiveConn InActConn FWM 300000 wrr persistent 180 -> 10.10.35.31:0 Masq 24 1 0 -> dis2.domain.com:0 Masq 24 3 231 -> 10.10.35.33:0 Masq 24 0 208 -> 10.10.35.34:0 Masq 24 0 0 At the time the realservers were setup, there was a misconfigured dns for some hosts in the 10.10.35.0/24 network. Thereafter, we fixed the DNS. However, the hosts continue to show up as only their IP numbers (10.10.35.31,10.10.35.33,10.10.35.34) above. [root@lb1 ~]# host 10.10.35.31 31.35.10.10.in-addr.arpa domain name pointer dis1.domain.com. OS is CentOS 6.3. Ipvsadm is ipvsadm-1.25-10.el6.x86_64. kernel is kernel-2.6.32-71.el6.x86_64. Keepalived is keepalived-1.2.7-1.el6.x86_64. How can we get ipvsadm -L to list all realservers by their proper hostnames?
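    A few hedged checks (not from the original post): the kernel LVS table only stores IP addresses, and ipvsadm does its reverse lookups at the moment it prints the table, so if names still don't appear after the DNS fix the culprit is usually the resolver path on the director itself (a leftover /etc/hosts entry or a cached answer) rather than LVS:

        getent hosts 10.10.35.31          # what the director's resolver actually returns
        grep 10.10.35 /etc/hosts          # stale static entries?
        service nscd status && service nscd restart   # flush the cache, if nscd happens to be running

        # and when mixed output is just a nuisance, skip resolution entirely:
        ipvsadm -L -n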

    Read the article

  • How do I host multiple independent, secured SharePoint sites (WSS 3.0) without using Active Directory?

    - by Kyle Noland
    I have a SharePoint site set up on one of my networks to service Active Directory users. To be clear, this is a Windows SharePoint Services 3.0 installation running on Windows Server 2003 Standard. It is not an option to upgrade the server or SharePoint version. Management would like to create several new sites, one for each of a handful of clients. These sites will be used like "dropboxes" or FTP sites so that my company can make large files available to outside contacts, and vice versa. Here are my requirements: I do not want to have to create Active Directory accounts for each external contact. If possible, I would like to store the external usernames and passwords in a database that I can write a small GUI for so that management can handle adding their own external contacts. Each client site must be sandboxed from each other and from my main company SharePoint site. I would like to keep everything running on port 80 and be able to access the sites as either clientname.mycompany.com or www.mycompany.com/clientname If anybody has ever done this I would really appreciate hearing about any lessons you learned and suggestions for how to set this up. Kyle
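    One approach that fits the "no AD accounts, passwords in a database" requirement (a sketch, not from the original question): WSS 3.0 supports ASP.NET forms-based authentication, so each client web application can point at a SqlMembershipProvider database that a small management GUI maintains. Roughly, the provider is registered in the web.config of the client web application (and in Central Administration's web.config so people-picker lookups work); the names and connection string here are hypothetical:

        <connectionStrings>
          <add name="ExternalContactsDB"
               connectionString="Server=dbserver;Database=ExternalContacts;Integrated Security=SSPI" />
        </connectionStrings>
        <system.web>
          <membership defaultProvider="ExternalContacts">
            <providers>
              <add name="ExternalContacts"
                   type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
                   connectionStringName="ExternalContactsDB"
                   applicationName="/" />
            </providers>
          </membership>
        </system.web>

    Each client would then get its own web application (or site collection) with its own host header, which also keeps the sites sandboxed from each other and from the AD-backed company site.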

    Read the article

  • Issue configuring Oracle database for SSL

    - by Santhosha Kaldambe
    Hello, I want to set up Oracle for SSL communication. I am not using SSL authentication for the database user. As a first requirement, I generated a self-signed certificate using OpenSSL and added the certificate to a wallet. The wallet location is specified in the server configuration. I created the listener and it starts; however, it does not provide any service. The default (non-SSL) listener is working fine. When I execute LSNRCTL.EXE status SSLLISTENER it gives the output below.

        STATUS of the LISTENER
        Alias                     SSLLISTENER
        Version                   TNSLSNR for 32-bit Windows: Version 11.1.0.6.0 - Production
        Start Date                14-NOV-2009 01:47:08
        Uptime                    16 days 22 hr. 14 min. 3 sec
        Trace Level               off
        Security                  ON: Local OS Authentication
        SNMP                      OFF
        Listener Parameter File   C:\app\Administrator\product\11.1.0\db_1\network\admin\listener.ora
        Listener Log File         c:\app\administrator\diag\tnslsnr\\ssllistener\alert\log.xml
        Listening Endpoints Summary...
          (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=)(PORT=2484)))
        The listener supports no services
        The command completed successfully

    Here is the exact content of the various files after configuration.

    1) tnsnames.ora:

        ORCL =
          (DESCRIPTION =
            (ADDRESS_LIST =
              (ADDRESS = (PROTOCOL = TCP)(HOST = )(PORT = 1521))
            )
            (CONNECT_DATA =
              (SERVER = DEDICATED)
              (SERVICE_NAME = orcl)
            )
          )

    2) sqlnet.ora:

        SSL_VERSION = 0
        NAMES.DIRECTORY_PATH = (TNSNAMES, EZCONNECT)
        sqlnet.authentication_services = (NONE)
        tcp.validnode_checking = no
        tcp.invited_nodes = (PS0803.oraebs.com, PS2948, PS5098)
        SSL_CLIENT_AUTHENTICATION = FALSE
        WALLET_LOCATION =
          (SOURCE =
            (METHOD = FILE)
            (METHOD_DATA =
              (DIRECTORY = C:\app\Administrator\admin\orcl\Server_Wallet)
            )
          )

    3) listener.ora:

        SSL_CLIENT_AUTHENTICATION = FALSE
        WALLET_LOCATION =
          (SOURCE =
            (METHOD = FILE)
            (METHOD_DATA =
              (DIRECTORY = C:\app\Administrator\admin\orcl\Server_Wallet)
            )
          )
        LISTENER =
          (DESCRIPTION_LIST =
            (DESCRIPTION =
              (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
            )
            (DESCRIPTION =
              (ADDRESS = (PROTOCOL = TCP)(HOST = )(PORT = 1521))
            )
          )
        SSLLISTENER =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCPS)(HOST = )(PORT = 2484))
          )

    Thanks, Santhosh
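    "The listener supports no services" usually just means no database instance has registered with that particular listener; the TCPS endpoint itself looks fine. A sketch of static registration for the SSL listener, added to listener.ora (the SID and ORACLE_HOME are taken from the paths above - adjust if they differ):

        SID_LIST_SSLLISTENER =
          (SID_LIST =
            (SID_DESC =
              (GLOBAL_DBNAME = orcl)
              (ORACLE_HOME = C:\app\Administrator\product\11.1.0\db_1)
              (SID_NAME = orcl)
            )
          )

    Alternatively, setting the database's LOCAL_LISTENER parameter to an alias that includes the TCPS endpoint lets the instance register with the SSL listener dynamically.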

    Read the article

  • Snort/Barnyard2 Logging

    - by Eric
    I need some help with my Snort/Barnyard2 setup. My goal is to have Snort send unified2 logs to Barnyard2 and then have Barnyard2 send the data to other locations. Here is my current setup:

        OS:                  Scientific Linux 6
        Snort version:       2.9.2.3
        Barnyard2 version:   2.1.9
        Snort command:       snort -c /etc/snort/snort.conf -i eth2 &
        Barnyard2 command:   /usr/local/bin/barnyard2 -c /etc/snort/barnyard2.conf -d /var/log/snort -f snort.log -w /var/log/snort/barnyard.waldo &

    snort.conf:

        output unified2: filename snort.log, limit 128

    barnyard2.conf:

        output alert_syslog: host=127.0.0.1
        output database: log, mysql, user=snort dbname=snort password=password host=localhost

    With this setup, barnyard2 is showing all of the correct information in the database and I'm using BASE to view it in the web GUI. I was hoping to be able to send the full packet data to syslog with barnyard2, but after reading around it seems that is impossible. So I then started modifying the snort.conf file, adding lines like "output alert_full: alert.full". This definitely gave me a lot more information, but still not the full packet data like I want. So my question is: is there any way I can use barnyard2 to send the full packet data of alerts to a human-readable file? Since I can't send it directly to syslog, I can create another process to take the data from that file and ship it off to another server. If not, what flags and/or snort.conf configuration would you recommend to get the most data possible while still being able to handle quite a bit of traffic? In the end, these alerts will be shipped to a central server via an SSH tunnel. I'm trying to stay away from databases.
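    One option to experiment with (an assumption on my part, not from the original post): barnyard2 ships a log_tcpdump output plugin that writes the packet payloads from the unified2 records to a pcap file, which is easy to ship over the SSH tunnel and read with tcpdump or Wireshark:

        # barnyard2.conf -- alongside the existing outputs
        output log_tcpdump: /var/log/snort/full-packets.pcap

        # read it back on the central server
        tcpdump -nn -r /var/log/snort/full-packets.pcap

    It is not plain text, but paired with the existing alert_syslog output it covers the "full packet data outside a database" requirement.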

    Read the article

  • How do I improve my incremental-backup performance?

    - by Alistair Bell
    I'm currently using the traditional rsync+cp -al method to create incremental/snapshot backups of our server tree. The backups are going onto a pair of eight-disk towers connected to the backup machine (a Sandy Bridge machine with 16 GB of RAM, running CentOS 5.5) via four eSATA connections (four disks per connection). Each disk is a regular 2 TB disk, so we have 32 TB of disk space connected to the backup machine. We're backing up about 20 TB of data on the servers with this. The problem is that each daily backup is taking more than 24 hours, and the real time-killer isn't the actual rsync, but the time it takes to perform a cp -al of the tree locally on the backup machine. It's taking more than 12 hours just to make the shadow copy of the tree, and as far as I can tell the performance backlog is at the disk (top shows the cp using a lot of RAM but not a lot of CPU and mostly in uninterruptible-sleep state) We have the server data split into four major volumes (and a few minor ones), and each of these backups runs in parallel (with some offsets in the cron to try to get some disks' cp done first). There are two volumes on the backup drive, both striped LVM volumes of 16 TB each. So obviously I need to improve the performance because it's unusable as it stands. The first question is: when CentOS 6 comes out, with support for btrfs, will making snapshots of subvolumes with btrfs substantially increase this performance? The second is: is there a way, with ext3 or something else supported in CentOS 5 or 6, to 'encourage' it to put the directories/inodes in one part of a volume (which could happen to be the part that's on an SSD, via LVM) and the files in another? That would presumably solve the problem, but I don't know of ways to hint ext3 like that.
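    A sketch of one alternative (paths and dates are illustrative, and it assumes the snapshots are plain directories): rsync's --link-dest builds the hard-link snapshot itself during the same metadata walk it already does, which removes the separate 12-hour cp -al pass entirely - unchanged files become hard links into yesterday's snapshot, changed files are copied fresh:

        yesterday=/backup/vol1/2011-01-14
        today=/backup/vol1/2011-01-15

        # one pass per volume; no local cp -al needed afterwards
        rsync -a --delete --link-dest="$yesterday" \
              server:/vol1/ "$today"/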

    Read the article

  • Using dnsmasq for accessing multiple nameservers assigned by DHCP

    - by Ash
    At my work desktop running openSUSE 11.4, I have a local network which gets its address, domain (work.site) and nameservers (10.100.1.1, 10.100.1.2) info through DHCP - which get written into /etc/resolv.conf I get to access the internet using the work network, and these 2 nameservers end up returning the entries for any public domain name lookups on the internet. I also have a private VPN that I end up connecting. The nameserver (10.111.1.1) and domain (private.site) are rarely bound to change for this network, but currently they're pushed by the openVPN client into networkmanager, and which also gets merged with the existing /etc/resolv.conf My resolv.conf ultimately ends up looking like this: search private.site work.site nameserver 127.0.0.1 nameserver 10.111.1.1 nameserver 10.100.1.1 As you can see the 2nd nameserver from my work network was pushed out because of the max 3 entry limitations. It is fine still, but would be a problem if that nameserver goes down for maintenance or something. So I found out that dnsmasq could help me here, and hence I setup dnsmasq just as a local DNS resolver without any DHCP support. So right now this is my /etc/dnsmasq.conf: resolv-file=/etc/resolv.conf server=/private.site/10.111.1.1 server=/1.111.10.in-addr.arpa/10.111.1.1 listen-address=127.0.0.1 bind-interfaces log-queries I've made dnsmasq get the list of nameservers from /etc/resolv.conf since NetworkManager seems to be updating this list correctly (for a max of 3 nameservers). I'm able to resolve the host names in both the networks correctly. So these are the questions I have: Is there a way I can make either NetworkManager or dhclient write out the list of nameservers somewhere else which I can make dnsmasq use as resolv-file ? How do I make dnsmasq use certain nameservers as the default for all queries ? Right now I notice that lookups for public domains on the internet are usually sent to both the nameservers - the one on work.site as well as private.site. It would be good if I can limit this only to work.site.
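    On the second question, a sketch of a dnsmasq.conf that makes the work resolvers the default for everything and sends only the VPN domains to 10.111.1.1 (this is an assumption, not from the original post - it drops the resolv-file approach and hard-codes the work nameservers, which as a side effect also answers the first question, since the three-entry limit in resolv.conf stops mattering):

        no-resolv                                   # ignore /etc/resolv.conf entirely
        server=10.100.1.1                           # work.site resolvers handle everything by default
        server=10.100.1.2
        server=/private.site/10.111.1.1             # only these zones go to the VPN resolver
        server=/1.111.10.in-addr.arpa/10.111.1.1
        listen-address=127.0.0.1
        bind-interfaces
        log-queries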

    Read the article

  • Sun-JRE on CentOS-4.8 RPM error: post-install scriptlet failed, exit status 5

    - by Emyr
    I have a server with CentOS 4.8 installed. The provider is rubbish, but there's only a few months left on the contract, and they're busy being sued by Chase bank, so I doubt I can get CentOS 5. I wiped the server clean using Virtuozzo, and found that the default image is VERY empty. I even had to install yum myself. I've reached the point where I want to install Tomcat. I downloaded the Sun JRE as a .rpm.bin file, did chmod a+x and ran it. That produced a .rpm file, which I tried installing:

        [root@host java]# rpm -Uvh jre-6u20-linux-i586.rpm
        Preparing...              ########################################### [100%]
           1:jre                  ########################################### [100%]
        Unpacking JAR files...
            rt.jar...
            jsse.jar...
            charsets.jar...
            localedata.jar...
            plugin.jar...
            javaws.jar...
            deploy.jar...
        error: %post(jre-1.6.0_20-fcs.i586) scriptlet failed, exit status 5

        [root@host java]# rpm -evv jre-6u20-linux-i586.rpm
        D: opening  db environment /var/lib/rpm/Packages joinenv
        D: opening  db index       /var/lib/rpm/Packages rdonly mode=0x0
        D: locked   db index       /var/lib/rpm/Packages
        D: opening  db index       /var/lib/rpm/Name rdonly mode=0x0
        error: package jre-6u20-linux-i586.rpm is not installed
        D: closed   db index       /var/lib/rpm/Name
        D: closed   db index       /var/lib/rpm/Packages
        D: closed   db environment /var/lib/rpm/Packages

        [root@host java]# rpm -qi --scripts jre-6u20-linux-i586.rpm
        package jre-6u20-linux-i586.rpm is not installed

    I couldn't find any results on Google for any part of that error message, and I have very little experience with rpm (I usually use Debian). Is this a broken package, or am I missing something or some setting?
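    One thing visible in the transcript: the -e and -qi commands are being given the .rpm file name, but those operations expect the installed package name, which is why rpm reports "not installed" even though the package did install (only its %post failed). A quick sketch of what to try next - the .bin installer file name is an assumption:

        # query/remove by package name, not by the file it was installed from
        rpm -q jre
        rpm -e jre

        # if the %post scriptlet keeps failing, the self-extracting (non-RPM)
        # installer avoids rpm scriptlets altogether
        chmod a+x jre-6u20-linux-i586.bin
        ./jre-6u20-linux-i586.bin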

    Read the article

  • RAID 5 - DELL 2850 and others

    - by Kiara
    I have installed Ubuntu on a Dell 2850 and I have configured an array of 5 disks (SCSI, 73 GB, 10K) in RAID 5. I wanted to simulate a drive error, so in the middle of something I just took one of the drives out and put it back again after a bit. The drive then shows an orange light and seems to be rebuilding, but it has been taking hours and hours with no result. So I went to the PERC utility (Ctrl+M) and the disk shows "REBLD", but it never gets to an online state. So I went to Objects - Physical drives - Rebuilding - View rebuild process. Here I can see a bar moving from 0%... but if I reboot before it finishes and get into the PERC utility again, it seems to start rebuilding from 0% again - so it is not rebuilding automatically. My concern is: what would happen in a real situation? Do I have to switch the server off and go to the PERC utility to start the rebuild manually? I thought the whole point was to have this done automatically, without having to stop the server. Or does it perhaps rebuild automatically after all, but need enough time without a reboot, because otherwise the rebuild process starts from scratch? It seems to take more than 3 hours for a 73 GB disk! My second question is: can I then mix hard drives? If I have a RAID of 5 x 73 GB 10K disks, can I use a different size (146 GB) or speed (15K)? Apparently someone said it is OK here: Poweredge 2850: replace disk with larger in RAID?

    Read the article

  • Unknown nginx Error Messages

    - by Sparsh Gupta
    Hello, I am getting some nginx errors in my error.log which I am unable to understand. They look like this:

        2011/03/13 21:48:21 [crit] 14555#0: *323314343 open() "/usr/local/nginx/proxy_temp/0/95/0000000950" failed (13: Permission denied) while reading upstream, client: XX.XX.XX.XX, server: , request: "GET /abc.jpg 2 HTTP/1.0", upstream: "http://192.168.162.141:80/abc.jpg", host: "example.com", referrer: "http://domain.com"
        2011/03/13 22:00:07 [crit] 14552#0: *324171134 open() "/usr/local/nginx/proxy_temp/1/95/0000000951" failed (13: Permission denied) while reading upstream, client: XX.XX.XX.XY, server: , request: "GET mno.png HTTP/1.1", upstream: "http://192.168.162.141:80/mno.png", host: "example.com", referrer: "http://domain2.com"

    I also looked at those locations but found that there are no files by those names:

        root@li235-57:/var/log/nginx# ls /usr/local/nginx/proxy_temp/1/
        00/ 01/ 02/ 03/ 04/ 05/ 06/ 07/ 08/ 09/ 10/ 11/ 12/ 13/ 14/ 15/ 16/ 17/ 18/ 19/ 20/ 21/ 22/ 23/ 24/ 25/ 26/ 27/ 28/ 29/ 30/ 31/ 32/ 33/ 34/ 35/ 36/ 37/
        root@li235-57:/var/log/nginx# ls /usr/local/nginx/proxy_temp/0/
        01/ 02/ 03/ 04/ 05/ 06/ 07/ 08/ 09/ 10/ 11/ 12/ 13/ 14/ 15/ 16/ 17/ 18/ 19/ 20/ 21/ 22/ 23/ 24/ 25/ 26/ 27/ 28/ 29/ 30/ 31/ 32/ 33/ 34/ 35/ 36/ 37/

    Can someone help me figure out what's going on, and how I can debug this further and fix it? Thanks
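    A hedged guess at the cause: "(13: Permission denied)" on proxy_temp almost always means the temp tree is owned by a different user than the one the worker processes run as (easy to end up with after building from source as root or changing the user directive later). A quick check-and-fix sketch - the www-data name is an assumption, substitute whatever user the workers actually run as:

        # which user do the workers run as?
        ps -o user= -C nginx | sort -u

        # hand the proxy buffer tree to that user
        chown -R www-data /usr/local/nginx/proxy_temp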

    Read the article

  • ZFS + FreeBSD + virtualbox

    - by John
    Hi, I'm configuring a FreeBSD server hosting VirtualBox, serving half a dozen mission-critical, busy mail servers. I've just learned about ZFS and I'm quite attracted to it, but I have a few questions:

    What is the CPU overhead of ZFS? I googled and found little (or no) benchmarking on that.

    From what I learned, when ZFS updates a file it keeps the old file as a snapshot and writes only the updated part for the new version. However, that would mean each snapshot it keeps requires significant storage overhead. How much is this storage overhead? For example, suppose I have 2 TB of usable space; how much of it can actually be used for the latest version of the files one year later?

    Is FreeBSD with ZFS, hosting VirtualBox, serving half a dozen busy, mission-critical guest mail servers a reasonable combination? Anything in particular to be careful with? And can I still choose ZFS for the guest OSs? I ask because I may build another identical box for redundancy and will need to do some mirroring between each pair of guest systems across the boxes.

    I'm trying to configure a Dell R710 for this. From what I learned, I shouldn't choose any RAID at all - is that true? In that case, do the drives still arrive hot-swappable?

    This may sound a bit pathetic, but since I have no experience with ZFS at all and this is a mission-critical server, I'll ask just in case: I'm choosing twin Intel L5630 processors and 6 x 600 GB 15K RPM serial-attached SCSI drives. If I need more space in the future, I would just hot-swap some drives with larger-capacity ones to expand the storage. There is no problem with that, right?
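    On the storage-overhead question, a small sketch (pool and dataset names are hypothetical): a snapshot is only charged for blocks that have been rewritten since it was taken, and ZFS exposes that accounting per dataset, so the "one year later" figure depends entirely on your churn rate rather than on a fixed percentage:

        zfs snapshot tank/guests@before-upgrade
        zfs list -t snapshot                               # space held by each snapshot
        zfs get usedbysnapshots,usedbydataset tank/guests  # live data vs. snapshot retention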

    Read the article

  • VMWare Converter - Remote Linux Install P2V

    - by Zed Said
    I am trying to get what I thought was an easy process working. Here is my situation: I have a remote Linux server (running Debian) that I would like to turn into a VM so that I can do testing on it rather than on the production server. I downloaded VMware Converter. I went to the "Convert machine" wizard, chose "Powered-on machine", and entered my remote machine's details. Clicking Next shows that it correctly connects to my remote Linux box. Now, on the next screen, for destination type it enters "VMware Infrastructure virtual machine", and I can't change this. This is where I am stuck. Why do I need another server to convert with? Is this where my VM gets sent? And why can't I just convert the VM and save it on the computer that I am running the converter program on? If I do have to use a VMware Infrastructure destination, I am confused about what ESX is and why and how I would use it. Any help with this process would be more than appreciated!

    Read the article

  • Ubuntu Hardy : Testing for environment variables in udev rules doesn't seem to work

    - by Fred
    I have a Ubuntu 8.04 LTS (server edition), and I need to write a udev rule for it to act upon plugging a USB thumb drive. However, I need a different action depending on the filesystem of the drive. I know I can use the ID_FS_TYPE environment variable to check for the filesystem on the drive. Following instructions found here, I try a dummy udev rule as such : KERNEL!="sd[a-z][0-9]", GOTO="my_udev_rule_end" ACTION=="add", RUN+="/usr/bin/touch /tmp/test_udev_%E{ID_FS_TYPE}" ACTION=="add", ENV{ID_FS_TYPE}=="vfat", RUN+="/usr/bin/touch /tmp/test_udev_it_works" LABEL="my_udev_rule_end" However, when I plug in a thumb drive with a vfat filesystem (which should trigger both rules), I end up with a file called /tmp/test_udev_vfat, meaning the first rule was triggered successfully, and that the ID_FS_TYPE environment variable is "vfat", but I don't have the other file, meaning that although I know the ID_FS_TYPE env variable is "vfat", I can't seem to check against it for a match. I tried googling the thing, but pretty much every result seems to assume ENV{ID_FS_TYPE}=="vfat" works. I also tested the exact same udev rule on Ubuntu 10.04 LTS server, and I have the same result. I'm probably missing something very simple, but I just don't get it. Does anyone see what is wrong with my udev rule that would prevent it from matching on ENV{ID_FS_TYPE}? Thanks.
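    One possibility worth ruling out (a guess, not from the original post): the ENV{ID_FS_TYPE} match is evaluated while the rules are being processed, and on 8.04 that variable is normally imported by the persistent-storage rules via vol_id - so if the custom rules file sorts before those, the variable may not exist yet at match time, even though it is visible later when the RUN programs fire (which would explain why %E{ID_FS_TYPE} expands correctly in the file name). Importing it explicitly in the same file makes the match self-contained:

        KERNEL!="sd[a-z][0-9]", GOTO="my_udev_rule_end"
        ACTION=="add", IMPORT{program}="vol_id --export $tempnode"
        ACTION=="add", ENV{ID_FS_TYPE}=="vfat", RUN+="/usr/bin/touch /tmp/test_udev_it_works"
        LABEL="my_udev_rule_end"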

    Read the article

  • PAM Winbind Expired Password

    - by kernelpanic
    We've got Winbind/Kerberos setup on RHEL for AD authentication. Working fine however I noticed that when a password has expired, we get a warning but shell access is still granted. What's the proper way of handling this? Can we tell PAM to close the session once it sees the password has expired? Example: login as: ad-user [email protected]'s password: Warning: password has expired. [ad-user@server ~]$ Contents of /etc/pam.d/system-auth: auth required pam_env.so auth sufficient pam_unix.so nullok try_first_pass auth requisite pam_succeed_if.so uid >= 500 quiet auth sufficient pam_krb5.so use_first_pass auth sufficient pam_winbind.so use_first_pass auth required pam_deny.so account [default=2 success=ignore] pam_succeed_if.so quiet uid >= 10000000 account sufficient pam_succeed_if.so user ingroup AD_Admins debug account requisite pam_succeed_if.so user ingroup AD_Developers debug account required pam_access.so account required pam_unix.so broken_shadow account sufficient pam_localuser.so account sufficient pam_succeed_if.so uid < 500 quiet account [default=bad success=ok user_unknown=ignore] pam_krb5.so account [default=bad success=ok user_unknown=ignore] pam_winbind.so account required pam_permit.so password requisite pam_cracklib.so try_first_pass retry=3 password sufficient pam_unix.so md5 shadow nullok try_first_pass use_authtok password sufficient pam_krb5.so use_authtok password sufficient pam_winbind.so use_authtok password required pam_deny.so session [default=2 success=ignore] pam_succeed_if.so quiet uid >= 10000000 session sufficient pam_succeed_if.so user ingroup AD_Admins debug session requisite pam_succeed_if.so user ingroup AD_Developers debug session optional pam_mkhomedir.so umask=0077 skel=/etc/skel session optional pam_keyinit.so revoke session required pam_limits.so session optional pam_mkhomedir.so session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid session required pam_unix.so session optional pam_krb5.so

    Read the article

  • VSFTPD does not allow upload with virtual users

    - by Mr. Squig
    I am attempting to setup VSFTPD with virtual users on a server running Ubuntu 12.04. I have configured the server to allow for virtual users to login, but I am having trouble getting it to allow uploads. My vsftpd.conf is as follows: listen=YES anonymous_enable=NO local_enable=YES write_enable=YES local_umask=022 anon_upload_enable=YES dirmessage_enable=YES use_localtime=YES xferlog_enable=YES connect_from_port_20=YES chroot_local_user=YES virtual_use_local_privs=YES guest_enable=YES guest_username=virtual user_sub_token=$USER local_root=/var/www/$USER hide_ids=YES secure_chroot_dir=/var/run/vsftpd/empty pam_service_name=vsftpd rsa_cert_file=/etc/ssl/private/vsftpd.pem /etc/pam.d/vsftpd contains: auth required pam_pwdfile.so pwdfile /etc/vsftpd.passwd crypt=hash account required pam_permit.so crypt=hash I have two virtual users set up, one of which has the same name as a local user. They each have a directory in /var/www/ owned by 'virtual'. As I understand it, when a virtual user logs in this way they will appear to the system as the user virtual. Using this configuration user can log on, but cannot upload files. The error given in /var/log/vsftpd.log is: Tue Nov 20 19:49:00 2012 [pid 2] CONNECT: Client "96.233.116.53" Tue Nov 20 19:49:07 2012 [pid 1] [zac] OK LOGIN: Client "96.233.116.53" Tue Nov 20 19:49:11 2012 [pid 2] CONNECT: Client "96.233.116.53" Tue Nov 20 19:49:11 2012 [pid 1] [zac] OK LOGIN: Client "96.233.116.53" Tue Nov 20 19:49:11 2012 [pid 3] [zac] FAIL CHMOD: Client "96.233.116.53", "/test.ppm 644" I have tried changing the permissions of these directories in all sorts of ways, but nothing seem to work. I have a feeling that it is something simple related to permissions. Any ideas?
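    A couple of hedged checks (not from the original post): the "FAIL CHMOD" line is the client's post-upload SITE CHMOD being refused, which can happen even when the STOR itself would succeed, so it is worth testing an upload with a client that does not chmod afterwards. Also, since guest_username maps every virtual user onto the local account 'virtual', that account has to own (or at least be able to write to) each per-user root:

        # the local account 'virtual' is what actually touches the filesystem
        ls -ld /var/www/zac
        chown virtual:virtual /var/www/zac
        chmod 755 /var/www/zac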

    Read the article

  • So Close: How to get this SSH login working (.bashrc)

    - by This_Is_Fun
    Objective: SSH login (+ eliminate the warning message) / run 2 commands / stay logged in.

    EDIT: Oops, I made a mistake (see below). This code does ~95% of what I wanted to do:

        # .bashrc
        # Run two commands and stay logged in to new server.
        alias gr='ssh -t -p 5xx4x [email protected] 2> /dev/null "cd /var; ls; /bin/bash -i"'

    Now, after a successful login (verified: user logged in = root pts/0 2011-01-30 22:09), trying to 'logout' gives: bash: logout: not login shell: use `exit'. I seem to have full root access without being logged into the shell? (The "/bin/bash -i" was added to stay logged in, but doesn't work quite as expected.) FYI: the question is "How to get this SSH login working" and it is mostly solved; sorry I made a mess...

    Original question here:

        # .bashrc
        # Run two commands and stay logged in to new server.
        alias gr='ssh -t -p 5xx4x [email protected] "cd /var; ls; /bin/bash -i"'

        # (hack) Hide "map back to the address - POSSIBLE BREAK-IN ATTEMPT!" message.
        alias gr='ssh -p 5xx4x [email protected] 2> /dev/null'

    Both examples 'work' as shown. When I try to add the '2> /dev/null' to the first example, the whole thing breaks. I'm out of time trying to solve the warning message other ways, so is it possible to combine both examples to make example #1 work without the warning message? Thank you. PS. If you also know a proper way to kill the login warning message, please do tell (the 'standard' "edit host file" advice isn't working for me).
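    A sketch of one way to combine the two (an untested assumption on my part): keep -t together with the stderr redirect - with a forced TTY the remote command's output comes back over the terminal channel, so the local 2> /dev/null should only hide ssh's own reverse-DNS warning - and start the remote shell as an interactive login shell so 'logout' works:

        # .bashrc
        alias gr='ssh -t -p 5xx4x [email protected] 2> /dev/null "cd /var; ls; exec /bin/bash -il"'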

    Read the article

  • Windows Authentication behaves oddly when VPN'd

    - by Dan F
    Hi all We've got a few apps that rely on windows authentication - a couple of web apps with AD auth turned on and we usually connect to our SQL servers with windows auth. This normally runs without a hitch. It doesn't work so well if we're VPN'd to a client site though. SSMS Opening SSMS normally from the start menu, then picking a server that normally accepts windows auth, results in a message saying: Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (.Net SqlClient Data Provider) If I drop to a command prompt and use runas /user:domain\user to launch SSMS I can successfully windows auth to our SQL server instances with that ssms process. If I look in task manager, both copies of ssms.exe (start menu vs runas) have the same user, and I can see no discernible differences between the processes in procexp. AD Auth websites If I open IE and browse to any of our websites that require an authenticated windows user, I get the "who are you" prompt, and that dialog thinks I'm whoever the VPN user is. I can click "Use another account" and authenticate that way though. Outlook Even Outlook prompts for a username when we are VPN'd! It's affecting our Win7 and Vista machines. It's been a while since we had an XP box, but I don't recall having this issue on XP for what it's worth. The VPN connections are just using the built in windows VPN connections, they're not fancy cisco VPNs or anything of that nature. Does anyone know how to tell windows that I'd like to be my normal old primary domain user rather than the VPN user when authenticating to resources in our domain? Heck, I'd be happy with a solution that prompted me with the "who are you" if I was trying to access windows auth requiring resources on the client's VPN. Thanks! Apologies if this is more a superuser question, I wasn't sure which site it best suited. It's about networking and infrastructure and plagues all of our developers here, so I hope it's a serverfault Q.
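    For the SSMS case, one trick that may help (an assumption, not from the original post): runas /netonly starts the process with the given account used only for network authentication, so the local session can stay whatever the VPN made it while SQL Server still sees the work-domain user. The ssms.exe path is illustrative:

        rem use the work-domain account for network auth only
        runas /netonly /user:WORKDOMAIN\myuser ^
            "C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Ssms.exe"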

    Read the article

  • getting "No LoginModules configured" for JAAS login under WebSphere security domain

    - by user1739040
    I have a JAX-RPC web service running on WebSphere V7. It requires a UserNameToken for security. I have a custom login module (MyLoginModule) which extracts the username and password, and that module is defined as a JAAS application login in the websphere admin console. Using IBM RAD 8.0, I have bound the token consumer to the login module using the JAAS config name of the module. This all works fine and happy on my development server. Now I realize, that for deployment to another server, I am required to move the JAAS login from global security to a security domain. When I do that, it breaks my web service. I get this SOAP Fault message: com.ibm.wsspi.wssecurity.SoapSecurityException: WSEC6520E: Construction of the login context failed. The exception is : javax.security.auth.login.LoginException: No LoginModules configured for MyLoginModule According to the IBM docs: The JAAS application logins, the JAAS system logins, and the JAAS J2C authentication data aliases can all be configured at the domain level. By default, all of the applications in the system have access to the JAAS logins configured at the global level. The security runtime first checks for the JAAS logins at the domain level. If it does not find them, it then checks for them in the global security configuration. Configure any of these JAAS logins at a domain only when you need to specify a login that is used exclusively by the applications in the security domain. So I am looking to make sure my application is in the domain, and I have tried everything I can think of. (I have assigned the domain to "all scopes", to the entire cell, etc.) No luck, I keep getting the same error response to my web service client. Any help or hints are appreciated.

    Read the article

  • EWS connect to ExchangeServer authentication specifications

    - by dankyy1
    Hi all, I'm connecting to Exchange Server with username, password and domain properties (my code is below), but how do I specify whether the server uses Kerberos, NTLM or Basic authentication, for example? Thanks.

        ExchangeServiceBinding binding = new ExchangeServiceBinding();
        ServicePointManager.ServerCertificateValidationCallback = CertificateValidationCallBack;
        System.Net.WebProxy proxyObject = new System.Net.WebProxy();
        proxyObject.Credentials = System.Net.CredentialCache.DefaultCredentials;
        if (string.IsNullOrEmpty(credentials.UserName) || string.IsNullOrEmpty(credentials.Password) || string.IsNullOrEmpty(credentials.Domain))
            throw new ArgumentNullException("The credential values could not be null or empty.");
        binding.Credentials = new NetworkCredential(credentials.UserName, credentials.Password, credentials.Domain);
        if (string.IsNullOrEmpty(serverURL))
            throw new ArgumentNullException("The Exchange server URL could not be null or empty.");
        binding.Url = serverURL;
        binding.UseDefaultCredentials = true;
        binding.Proxy = proxyObject;
        // TODO: take version over parameter.. or configuration!!
        binding.RequestServerVersionValue = new RequestServerVersion();
        binding.RequestServerVersionValue.Version = (ExchangeVersionType)Enum.Parse(typeof(ExchangeVersionType), serverVersion); // ExchangeVersionType.Exchange2007_SP1; //.Exchange2010;
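    A sketch of one way to pin the scheme explicitly (an assumption, not part of the original code): put the NetworkCredential into a CredentialCache keyed by authentication type - "NTLM", "Kerberos", "Negotiate" or "Basic" - and turn off UseDefaultCredentials, since leaving it true makes the binding ignore the explicit credentials:

        // hypothetical: pin NTLM for this endpoint; swap the string for "Kerberos",
        // "Negotiate" or "Basic" as required by the server
        CredentialCache cache = new CredentialCache();
        cache.Add(new Uri(serverURL), "NTLM",
                  new NetworkCredential(credentials.UserName, credentials.Password, credentials.Domain));
        binding.Credentials = cache;
        binding.UseDefaultCredentials = false;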

    Read the article

  • nginx+passenger +static websites= problems

    - by Eugene K
    I've got a Rails app that nginx serves through passenger. I'd also like to serve some static content for a different domain name. But when I add another server block to my config, both websites become unavailable returning HTTP 204. What have I done wrong? What can I do to fix it? Here's the http block of my nginx.conf: https://gist.github.com/4243256 Here's what I added : server { listen 80; server_name website2; root /var/www/website2; location / { index index.html; } } It's going to be a Rails app as well at some point in the future (though I'm not really sure about that, maybe I'm going to use a different back-end solution.) Either way, I don't want anything dynamic to eat away the resources just yet. As of now, this website consists of nothing but index.html and stylesheet.css files in the root directory. What should I do? Thank you in advance. Sincerely yours, Eugene.

    Read the article

  • Wildcard DNS, VirtualHosts on apache2, 404 for unused subdomains

    - by niel
    On an Apache2 server linked to by a DNS zone that includes a wildcard entry, e.g. *.example.com, subdomains that are not defined as ServerNames in any VirtualHost point to the first defined VirtualHost, which in my example is 000-default.

    My question: how would one get unused subdomains (subdomains not used in any VirtualHost) to return a 404 error to the requesting client? This must preferably show up in the server logs as a 404 as well.

    I have looked into the following possibilities:

    1. Redirecting any invalid subdomain to the home page or some other page. The problem with this method is that when someone links to your site as this.company.sucks.example.com, the client will see your home page (or in my case 000-default, if I do not redirect). Thanks to Mike for pointing this out. (A regex for "suck", etc. is definitely not an option.)

    2. Letting the default VirtualHost point to a non-existent directory. Apache does not like this one bit, warning with every reload. Beyond the warning, everything seems fine. This seems like a hack. Does this seem like a problem (however small) to anyone?

    3. Pointing the default VirtualHost to a folder where the index.php is forbidden, thus creating a 403 status code. This is confusing and makes things like the following overly complicated: say, for example, you use a subdomain per user (a big reason to use wildcard DNS, apparently), and users have the ability to view each other's profiles at username.example.com. This solution is confusing to the user and completely not what I want to do.

    My ideal solution will let the user know there is nothing to view at the URL they entered - preferably with a 404 and an error log entry for the address actually entered (not some other address). Any help would be greatly appreciated!
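    A sketch of a fourth option (not from the original question; the placeholder ServerName and log path are illustrative): make the first-loaded VirtualHost a catch-all whose only job is to answer 404, using mod_alias's Redirect with a bare status code - no redirect target, no forbidden-directory tricks, and the hit is logged as a 404 for the exact hostname requested:

        # loaded first, so it is the default for any Host not claimed elsewhere
        <VirtualHost *:80>
            ServerName catchall.example.com
            Redirect 404 /
            ErrorDocument 404 "Nothing is configured at this address."
            CustomLog /var/log/apache2/unknown-subdomains.log combined
        </VirtualHost>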

    Read the article
