Search Results

Search found 24624 results on 985 pages for 'linux rrt'.

Page 309 of 985

  • Copying files SSH vs sFTP

    - by jackquack
    I'm a bit of a Unix noob, but this question seems super basic, yet I can't find an answer anywhere. Basically, to my knowledge, SFTP is just FTP over SSH. So why can't I drag and drop files from one folder to another on the server side like I can over SSH? Why, when I want to unzip a .tar in a server folder, does it first want to copy it to my machine and then back? Why can't it just unzip it like it can when I'm using the command line? I know that when I use the command line it is using the resources of the remote machine, but why can't SFTP do that too? Is there a way to execute commands which I would normally do over SSH, but in a GUI? I've tried mapping the drive to my own machine, and I've tried so many SFTP clients that it's silly. Is there another class of program that I just don't know of?
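
    For illustration, a minimal sketch (OpenSSH assumed; the host and paths are made up) of doing the work on the server itself rather than round-tripping the file over SFTP:

      # extract the archive in place, using the remote machine's CPU and disk
      ssh user@server 'tar -xf /home/user/archive.tar -C /home/user/extracted'

      # or, if sshfs is installed, mount the remote folder so a local GUI file manager can browse it
      sshfs user@server:/home/user /mnt/server

    SFTP itself has no facility for running arbitrary remote commands, which is why GUI clients fall back to download-and-reupload.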

    Read the article

  • IP tables gateway

    - by WojonsTech
    I am trying to make an iptables gateway. I ordered 3 dedicated servers from my hosting company, all with dual NICs. One server has been given all the IP addresses and is connected directly to the internet, with its other NIC connected to a switch that the other servers are all connected to as well. I want to set up iptables so that when, for example, the IP address 50.0.2.4 comes into my gateway server, it forwards all the traffic to a private IP address using the second NIC. This way the second NIC can do whatever it needs and can respond back as well. I also want it set up so that if any of the other servers needs to download anything over the internet, it is able to do so using the same IP address that is used for its incoming traffic. Lastly, I would like to be able to set up DNS and other needed networking stuff that I may not be thinking about.
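
    For illustration, a rough sketch of the kind of NAT rules such a gateway would need (50.0.2.4 is the example address from the question; the internal address 192.168.0.4 is made up):

      # forward everything arriving on the public address to the internal server on the second NIC
      iptables -t nat -A PREROUTING -d 50.0.2.4 -j DNAT --to-destination 192.168.0.4
      # let the internal server's outbound traffic leave with the same public address
      iptables -t nat -A POSTROUTING -s 192.168.0.4 -j SNAT --to-source 50.0.2.4
      # allow the gateway to route between its two NICs
      echo 1 > /proc/sys/net/ipv4/ip_forward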

    Read the article

  • Where is xorg.conf in Ubuntu 10.04?

    - by Mikey.B
    Hi guys, I'm in the middle of trying to set up dual monitors on Ubuntu and would like to back up my xorg.conf... The documentation I've read thus far says to do the following: sudo cp /etc/X11/xorg.conf /etc/X11/xorg.conf_backup But I don't see the xorg.conf file anywhere... Am I missing something? Where is it located?
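
    As an aside, on releases where no xorg.conf ships by default, a skeleton can usually be generated by hand; a sketch assuming GDM is the display manager and that the generated file lands in /root:

      sudo service gdm stop           # X must not be running
      sudo Xorg -configure            # writes a skeleton xorg.conf.new and prints its path
      sudo cp /root/xorg.conf.new /etc/X11/xorg.conf
      sudo service gdm start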

    Read the article

  • Server high memory usage at same time every day

    - by Sam Parmenter
    Right, we moved one of our main sites onto a new AWS box with plenty of grunt, as it would allow us more control than we had before and future-proof ourselves. About a month ago we started running into issues with high memory usage at the same time every day. In the morning an export is run to export data to a file, which is then FTPed to a local machine for processing. The issues were coinciding with the rough time of the export, but when we didn't run the export one day, the server still ran into the same issues. The export has since been run at other times in the day to monitor memory usage and see if it spikes. The conclusion is that the export is fine and barely touches the sides memory-wise. No noticeable change in memory usage. When the issue happens, its effect is to kill MySQL and require us to restart the process. We think it might be a MySQL memory issue, but it might just be that MySQL is the first to feel it. Looking at the logs, there is no particular query run before the memory usage hits 90%. When it strikes at about 9:20am, the memory usage spikes from a near-constant 25% to 98% and very quickly kills MySQL to save itself. It usually takes about 3-4 minutes to die. There are no cron jobs running at that time of the day and we haven't noticed a spike in traffic over the period of the issues. Any help would be massively appreciated! Thanks.
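
    One way to see what is eating the memory at 9:20 would be a temporary cron entry that snapshots the process list around that time (file and log paths are made up):

      # /etc/cron.d/mem-snapshot -- runs every minute between 09:00 and 09:59
      * 9 * * * root { date; ps aux --sort=-%mem | head -n 15; } >> /var/log/mem-snapshot.log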

    Read the article

  • Solved: puppet master REST API returns 403 when running under Passenger; works when master runs from the command line

    - by Anadi Misra
    I am using the standard auth.conf provided in puppet install for the puppet master which is running through passenger under Nginx. However for most of the catalog, files and certitifcate request I get a 403 response. ### Authenticated paths - these apply only when the client ### has a valid certificate and is thus authenticated # allow nodes to retrieve their own catalog path ~ ^/catalog/([^/]+)$ method find allow $1 # allow nodes to retrieve their own node definition path ~ ^/node/([^/]+)$ method find allow $1 # allow all nodes to access the certificates services path ~ ^/certificate_revocation_list/ca method find allow * # allow all nodes to store their reports path /report method save allow * # unconditionally allow access to all file services # which means in practice that fileserver.conf will # still be used path /file allow * ### Unauthenticated ACL, for clients for which the current master doesn't ### have a valid certificate; we allow authenticated users, too, because ### there isn't a great harm in letting that request through. # allow access to the master CA path /certificate/ca auth any method find allow * path /certificate/ auth any method find allow * path /certificate_request auth any method find, save allow * path /facts auth any method find, search allow * # this one is not stricly necessary, but it has the merit # of showing the default policy, which is deny everything else path / auth any Puppet master however does not seems to be following this as I get this error on client [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com [sudo] password for amisr1: Starting Puppet client version 3.0.1 Warning: Unable to fetch my node definition, but the agent run will continue: Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110 Info: Retrieving plugin Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110 Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110 Using cached catalog Error: Could not retrieve catalog; skipping run Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110 and the server logs show XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? 
HTTP/1.1" 403 93 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby" thefile server conf file is as follows (and goin by what they say on puppet site, It is better to regulate access in auth.conf for reaching file server and then allow file server to server all) [files] path /apps/puppet/files allow * [private] path /apps/puppet/private/%H allow * [modules] allow * I am using server and client version 3 Nginx has been compiled using the following options nginx version: nginx/1.3.9 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) TLS SNI support enabled configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/ and the standard nginx puppet master conf server { ssl on; listen 8140 ssl; server_name _; passenger_enabled on; passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn; passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify; passenger_min_instances 5; access_log logs/puppet_access.log; error_log logs/puppet_error.log; root /apps/nginx/html/rack/public; ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem; ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem; ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem; ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem; ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA; ssl_prefer_server_ciphers on; ssl_verify_client optional; ssl_verify_depth 1; ssl_session_cache shared:SSL:128m; ssl_session_timeout 5m; } Puppet is picking up the correct settings from the files mentioned because config print command points to /etc/puppet [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf async_storeconfigs = false authconfig = /etc/puppet/namespaceauth.conf autosign = /etc/puppet/autosign.conf catalog_cache_terminus = store_configs confdir = /etc/puppet config = /etc/puppet/puppet.conf config_file_name = puppet.conf config_version = "" configprint = all configtimeout = 120 dblocation = /var/lib/puppet/state/clientconfigs.sqlite3 deviceconfig = /etc/puppet/device.conf fileserverconfig = /etc/puppet/fileserver.conf genconfig = false hiera_config = /etc/puppet/hiera.yaml localconfig = /var/lib/puppet/state/localconfig name = config rest_authconfig = /etc/puppet/auth.conf storeconfigs = true storeconfigs_backend = puppetdb tagmap = /etc/puppet/tagmail.conf thin_storeconfigs = false I checked the firewall rules on this VM; 80, 443, 8140, 3000 are allowed. Do I still have to tweak any specifics to auth.conf for getting this to work? 
Update I added verbose logging to the puppet master and restarted nginx; here's the additional info I see in logs Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Could not resolve 10.209.47.31: no name for 10.209.47.31 Mon Dec 10 18:19:15 +0530 2012 access[/] (info): defaulting to no access for 10.209.47.31 Mon Dec 10 18:19:15 +0530 2012 Puppet (warning): Denying access: Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111 Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111 10.209.47.31 - - [10/Dec/2012:18:19:15 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby" On the agent machine facter fqdn and hostname both return a fully qualified host name [amisr1@blramisr195602 ~]$ sudo facter fqdn blramisr195602.XXXXXXX.com I then updated the agent configuration to add dns_alt_names = 10.209.47.31 cleaned all certificates on master and agent and regenerated the certificates and signed them on master using the option --allow-dns-alt-names [amisr1@bangvmpllDA02 ~]$ sudo puppet cert sign blramisr195602.XXXXXX.com Error: CSR 'blramisr195602.XXXXXX.com' contains subject alternative names (DNS:10.209.47.31, DNS:blramisr195602.XXXXXX.com), which are disallowed. Use `puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com` to sign this request. [amisr1@bangvmpllDA02 ~]$ sudo puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com Signed certificate request for blramisr195602.XXXXXX.com Removing file Puppet::SSL::CertificateRequest blramisr195602.XXXXXX.com at '/var/lib/puppet/ssl/ca/requests/blramisr195602.XXXXXX.com.pem' however, that doesn't help either; I get same errors as before. Not sure why in the logs it shows comparing access rules by IP and not hostname. Is there any Nginx configuration to change this behavior?

    Read the article

  • What advantage do I have if I use 64-bit libraries?

    - by RadiantHex
    Hi folks, I see many people go crazy about 64-bit libraries and prefer them in general to the 32-bit counterparts. I realise there is a lot of talk that gets lost in translation, and that 64-bit can often be over-valued. The setting is libraries called by a web application; I'm aware that a new instance of the web app is generated for each hit. Therefore I'm thinking that 64-bit is not necessary, as the instances in no way surpass 2 GB of RAM usage. Help would be much appreciated! :)

    Read the article

  • KVM/Libvirt bridged/routed networking not working on newer guest kernels

    - by SharkWipf
    I have a dedicated server running Debian 6, with Libvirt (0.9.11.3) and Qemu-KVM (qemu-kvm-1.0+dfsg-11, Debian). I am having a problem getting bridged/routed networking to work in KVM guests with newer kernels (2.6.38). NATted networking works fine though. Older kernels work perfectly fine as well. The host kernel is at version 3.2.0-2-amd64, the problem was also there on an older host kernel. The contents of the host's /etc/network/interfaces (ip removed): # Loopback device: auto lo iface lo inet loopback # bridge auto br0 iface br0 inet static address 176.9.xx.xx broadcast 176.9.xx.xx netmask 255.255.255.224 gateway 176.9.xx.xx pointopoint 176.9.xx.xx bridge_ports eth0 bridge_stp off bridge_maxwait 0 bridge_fd 0 up route add -host 176.9.xx.xx dev br0 # VM IP post-up mii-tool -F 100baseTx-FD br0 # default route to access subnet up route add -net 176.9.xx.xx netmask 255.255.255.224 gw 176.9.xx.xx br0 The output of ifconfig -a on the host: br0 Link encap:Ethernet HWaddr 54:04:a6:8a:66:13 inet addr:176.9.xx.xx Bcast:176.9.xx.xx Mask:255.255.255.224 inet6 addr: fe80::5604:a6ff:fe8a:6613/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:20216729 errors:0 dropped:0 overruns:0 frame:0 TX packets:19962220 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:14144528601 (13.1 GiB) TX bytes:7990702656 (7.4 GiB) eth0 Link encap:Ethernet HWaddr 54:04:a6:8a:66:13 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:26991788 errors:0 dropped:12066 overruns:0 frame:0 TX packets:19737261 errors:270082 dropped:0 overruns:0 carrier:270082 collisions:1686317 txqueuelen:1000 RX bytes:15459970915 (14.3 GiB) TX bytes:6661808415 (6.2 GiB) Interrupt:17 Memory:fe500000-fe520000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:6240133 errors:0 dropped:0 overruns:0 frame:0 TX packets:6240133 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:6081956230 (5.6 GiB) TX bytes:6081956230 (5.6 GiB) virbr0 Link encap:Ethernet HWaddr 52:54:00:79:e4:5a inet addr:192.168.100.1 Bcast:192.168.100.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:225016 errors:0 dropped:0 overruns:0 frame:0 TX packets:412958 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:16284276 (15.5 MiB) TX bytes:687827984 (655.9 MiB) virbr0-nic Link encap:Ethernet HWaddr 52:54:00:79:e4:5a BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) vnet0 Link encap:Ethernet HWaddr fe:54:00:93:4e:68 inet6 addr: fe80::fc54:ff:fe93:4e68/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:607670 errors:0 dropped:0 overruns:0 frame:0 TX packets:5932089 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:83574773 (79.7 MiB) TX bytes:1092482370 (1.0 GiB) vnet1 Link encap:Ethernet HWaddr fe:54:00:ed:6a:43 inet6 addr: fe80::fc54:ff:feed:6a43/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:922132 errors:0 dropped:0 overruns:0 frame:0 TX packets:6342375 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:251091242 (239.4 MiB) TX bytes:1629079567 (1.5 GiB) vnet2 Link encap:Ethernet HWaddr fe:54:00:0d:cb:3d inet6 addr: fe80::fc54:ff:fe0d:cb3d/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 
Metric:1 RX packets:9461 errors:0 dropped:0 overruns:0 frame:0 TX packets:665189 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:4990275 (4.7 MiB) TX bytes:49229647 (46.9 MiB) vnet3 Link encap:Ethernet HWaddr fe:54:cd:83:eb:aa inet6 addr: fe80::fc54:cdff:fe83:ebaa/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1649 errors:0 dropped:0 overruns:0 frame:0 TX packets:12177 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:77233 (75.4 KiB) TX bytes:2127934 (2.0 MiB) The guest's /etc/network/interfaces, in this case running Ubuntu 12.04 (ip removed): # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback auto eth0 iface eth0 inet static address 176.9.xx.xx netmask 255.255.255.248 gateway 176.9.xx.xx # Host IP pointopoint 176.9.xx.xx # Host IP dns-nameservers 8.8.8.8 8.8.4.4 The output of ifconfig -a on the guest: eth0 Link encap:Ethernet HWaddr 52:54:cd:83:eb:aa inet addr:176.9.xx.xx Bcast:0.0.0.0 Mask:255.255.255.255 inet6 addr: fe80::5054:cdff:fe83:ebaa/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:14190 errors:0 dropped:0 overruns:0 frame:0 TX packets:1768 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2614642 (2.6 MB) TX bytes:82700 (82.7 KB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:954 errors:0 dropped:0 overruns:0 frame:0 TX packets:954 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:176679 (176.6 KB) TX bytes:176679 (176.6 KB) Output of ping -c4 on the guest: PING google.nl (173.194.35.151) 56(84) bytes of data. 64 bytes from muc03s01-in-f23.1e100.net (173.194.35.151): icmp_req=1 ttl=55 time=14.7 ms From static.174.82.xx.xx.clients.your-server.de (176.9.xx.xx): icmp_seq=2 Redirect Host(New nexthop: static.161.82.9.176.clients.your-server.de (176.9.82.161)) 64 bytes from muc03s01-in-f23.1e100.net (173.194.35.151): icmp_req=2 ttl=55 time=15.1 ms From static.198.170.9.176.clients.your-server.de (176.9.170.198) icmp_seq=3 Destination Host Unreachable From static.198.170.9.176.clients.your-server.de (176.9.170.198) icmp_seq=4 Destination Host Unreachable --- google.nl ping statistics --- 4 packets transmitted, 2 received, +2 errors, 50% packet loss, time 3002ms rtt min/avg/max/mdev = 14.797/14.983/15.170/0.223 ms, pipe 2 The static.174.82.xx.xx.clients.your-server.de (176.9.xx.xx) is the host's IP. I have encountered this problem with every guest OS I've tried, that being Fedora, Ubuntu (server/desktop) and Debian with an upgraded kernel. I've also tried compiling the guest kernel myself, to no avail. I have no problem with recompiling a kernel, though the host cannot afford any downtime. Any ideas on this problem are very welcome. EDIT: I can ping the host from inside the guest.

    Read the article

  • Virtual host “Forbidden You don't have permission to access / on this server” on Debian

    - by ulduz114
    Before I created a virtual host I could see "http://localhost", but since creating the virtual host I can see neither "http://localhost" nor my virtual host "http://test". Here is my virtual host config file: <VirtualHost test:80> ServerAdmin [email protected] ServerName test ServerAlias test DocumentRoot "/home/javad/Public/test/public" <Directory "/home/javad/Public/test/public/" > Options Indexes FollowSymLinks MultiViews ExecCGI DirectoryIndex index.php AllowOverride all Order allow,deny allow from all </Directory> </VirtualHost> I then ran a2ensite test, added 127.0.0.1 test to the /etc/hosts file, and restarted apache2 fine. But after that I cannot access http://test or even http://localhost; I get: Forbidden You don't have permission to access / on this server. When I delete my virtual host settings I can access http://localhost again.
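
    A frequent cause of this particular 403 is that the Apache user cannot traverse the home directory above the DocumentRoot; a sketch of how one might check and fix that (assuming Apache runs as www-data):

      # show owner and mode of every component of the path
      namei -om /home/javad/Public/test/public
      # each directory needs the 'other' execute bit so www-data can descend into it
      chmod o+x /home/javad /home/javad/Public /home/javad/Public/test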

    Read the article

  • multiple wildcard entries

    - by Murali
    My client has around 300,000 domains, and they just have a wildcard for all of them: * A 12.12.12.12 Now they want to create a subdomain that points to a different IP and still have the flexibility of a wildcard, something like: ww1.* A 24.24.24.24 * A 12.12.12.12 It looks like in BIND the lower "*" is a catch-all and takes over every query, and hence ww1 is not working. One of the solutions offered by the IT folks was to create separate 300K zones for just "ww1" and leave the "*" wildcard. Is there any other DNS software that can achieve this task easily? Any other ways to deal with it?

    Read the article

  • configuring vsftpd anonymous upload. Creates files but freezes at 0 bytes

    - by Wayne
    vsftpd on ubuntu after sudo apt-get install vsftpd Then did configuration as in the attached /etc/vsftpd.conf file. Anonymous ftp allows cd to the upload directly and allows put myfile.txt which gets created on the server but then the client hangs and never proceeds. The file on the server remains at 0 bytes. Here's the folders and permissions: root@support:/home/ftp# ls -ld . drwxr-xr-x 3 root root 4096 Jun 22 00:00 . root@support:/home/ftp# ls -ld pub drwxr-xr-x 3 root root 4096 Jun 21 23:59 pub root@support:/home/ftp# ls -ld pub/upload drwxr-xr-x 2 ftp ftp 4096 Jun 22 00:06 pub/upload root@support:/home/ftp# Here's the vsftpd.conf file: root@support:/home/ftp# grep -v '#' /etc/vsftpd.conf listen=YES anonymous_enable=YES write_enable=YES anon_upload_enable=YES dirmessage_enable=YES xferlog_enable=YES anon_root=/home/ftp/pub/ connect_from_port_20=YES chown_uploads=YES chown_username=ftp nopriv_user=ftp secure_chroot_dir=/var/run/vsftpd pam_service_name=vsftpd rsa_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key Here's a file example that attempted to upload: root@support:/home/ftp/pub/upload# ls -l total 0 -rw------- 1 ftp nogroup 0 Jun 22 00:06 build.out This is the client attempting to upload...it is frozen at this point: $ ftp 173.203.89.78 Connected to 173.203.89.78. 220 (vsFTPd 2.0.6) User (173.203.89.78:(none)): ftp 331 Please specify the password. Password: 230 Login successful. ftp> put build.out 200 PORT command successful. Consider using PASV. 553 Could not create file. ftp> cd upload 250 Directory successfully changed. ftp> put build.out 200 PORT command successful. Consider using PASV. 150 Ok to send data.
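
    A stall right after "150 Ok to send data" often means the data connection never gets established; one hedged thing to try is pinning vsftpd to a fixed passive port range and opening it in the firewall (the port numbers below are arbitrary):

      # additions to /etc/vsftpd.conf
      pasv_enable=YES
      pasv_min_port=40000
      pasv_max_port=40100

      # matching firewall rule (port 20 is also needed for active-mode transfers)
      iptables -A INPUT -p tcp --dport 40000:40100 -j ACCEPT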

    Read the article

  • Python error after installing libboost-all-dev on debian [migrated]

    - by Cameron Metzke
    A friend of mine wanted the liboost libraries installed on our shared computer so after installing libboost-all-dev 1.49.0.1 ( A debian wheezy machine ), I get this error when using the "pydoc modules" command on the commandline. It spits out the following error -- root@debian:/usr/include/c++/4.7# pydoc modules Please wait a moment while I gather a list of all available modules... **[debian:49065] [[INVALID],INVALID] ORTE_ERROR_LOG: A system-required executable either could not be found or was not executable by this user in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 357 [debian:49065] [[INVALID],INVALID] ORTE_ERROR_LOG: A system-required executable either could not be found or was not executable by this user in file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 230 [debian:49065] [[INVALID],INVALID] ORTE_ERROR_LOG: A system-required executable either could not be found or was not executable by this user in file ../../../orte/runtime/orte_init.c at line 132 -------------------------------------------------------------------------- It looks like orte_init failed for some reason; your parallel process is likely to abort. There are many reasons that a parallel process can fail during orte_init; some of which are due to configuration or environment problems. This failure appears to be an internal failure; here's some additional information (which may only be relevant to an Open MPI developer): orte_ess_set_name failed --> Returned value A system-required executable either could not be found or was not executable by this user (-127) instead of ORTE_SUCCESS -------------------------------------------------------------------------- -------------------------------------------------------------------------- It looks like MPI_INIT failed for some reason; your parallel process is likely to abort. There are many reasons that a parallel process can fail during MPI_INIT; some of which are due to configuration or environment problems. This failure appears to be an internal failure; here's some additional information (which may only be relevant to an Open MPI developer): ompi_mpi_init: orte_init failed --> Returned "A system-required executable either could not be found or was not executable by this user" (-127) instead of "Success" (0) -------------------------------------------------------------------------- *** The MPI_Init() function was called before MPI_INIT was invoked. *** This is disallowed by the MPI standard. *** Your MPI job will now abort. [debian:49065] Abort before MPI_INIT completed successfully; not able to guarantee that all other processes were killed!** root@debian:/usr/include/c++/4.7# I tried looking into the problem and ended up uninstalling the following to get it to work again. openmpi common all 1.4.5-1 libibverbs-dev amd64 1.1.6-1 libopenmpi-dev amd64 1.4.5-1 mpi-default-dev amd64 1.0.1 libboost-mpi-python1.49.0 although pydoc works again, I'm assuming the packages I removed are gunna hurt somethiong else down the track ? As you guessed im not a c/c++ programmer. So I guess my question is, will this hurt something later ? is their a way to install those packages without hurting python ?

    Read the article

  • Network is going down once per day

    - by Charly
    Once per day the network on eth0 is going down and we need to do sudo ifdown eth0; sudo ifup eth0 to get the network up. Here is the syslog: Feb 11 12:48:01 www-tech-1 dhclient: DHCPREQUEST of address> on eth0 to 131.121.113.228 port 67 Feb 11 12:52:35 www-tech-1 dhclient: DHCPREQUEST of address> on eth0 to 131.121.113.228 port 67 Feb 11 12:56:23 www-tech-1 dhclient: DHCPREQUEST of address> on eth0 to 131.121.113.228 port 67 Feb 11 13:00:28 www-tech-1 dhclient: DHCPREQUEST of address> on eth0 to 131.121.113.228 port 67 Feb 11 13:04:29 www-tech-1 dhclient: DHCPREQUEST of address> on eth0 to 131.121.113.228 port 67 Feb 11 13:09:16 www-tech-1 dhclient: DHCPREQUEST of address> on eth0 to 131.121.113.228 port 67 Feb 11 13:13:53 www-tech-1 dhclient: DHCPREQUEST of address> on eth0 to 131.121.113.228 port 67 Feb 11 13:18:16 www-tech-1 dhclient: DHCPREQUEST of address> on eth0 to 131.121.113.228 port 67 Feb 11 13:22:25 www-tech-1 dhclient: DHCPREQUEST of address> on eth0 to 131.121.113.228 port 67 Feb 11 13:26:52 www-tech-1 dhclient: DHCPREQUEST of address> on eth0 to 131.121.113.228 port 67 Feb 11 13:30:44 www-tech-1 dhclient: DHCPREQUEST of address> on eth0 to 131.121.113.228 port 67 Feb 11 13:31:49 www-tech-1 dhclient: There is already a pid file /var/run/dhclient.eth0.pid with pid 3198 Feb 11 13:31:49 www-tech-1 dhclient: Listening on LPF/eth0/00:e0:81:49:fc:e0 Feb 11 13:31:49 www-tech-1 dhclient: Sending on LPF/eth0/00:e0:81:49:fc:e0 Feb 11 13:31:49 www-tech-1 dhclient: DHCPRELEASE on eth0 to 131.121.113.228 port 67 Feb 11 13:31:49 www-tech-1 dhclient: There is already a pid file /var/run/dhclient.eth0.pid with pid 134519072 Feb 11 13:31:50 www-tech-1 dhclient: Listening on LPF/eth0/00:e0:81:49:fc:e0 Feb 11 13:31:50 www-tech-1 dhclient: Sending on LPF/eth0/00:e0:81:49:fc:e0 Feb 11 13:31:52 www-tech-1 dhclient: DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 8 Feb 11 13:31:52 www-tech-1 dhclient: DHCPREQUEST of 131.121.14.17 on eth0 to 255.255.255.255 port 67 Feb 11 13:31:53 www-tech-1 kernel: [265383.991682] eth0: no IPv6 routers present Please check the last portion of this syslog. Can anybody help me?
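
    Given the duplicate pid-file messages in the log, one hedged step is to clear out stale dhclient instances by hand before the next drop (not a guaranteed fix):

      # look for more than one dhclient bound to eth0
      ps aux | grep '[d]hclient'
      # remove the stale pid file the log complains about, then release and renew the lease
      sudo rm -f /var/run/dhclient.eth0.pid
      sudo dhclient -r eth0 && sudo dhclient eth0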

    Read the article

  • Permissions problem on mounting afp drive

    - by Ron Gejman
    I am trying to mount a network drive via AFP on an Ubuntu 10.04 server machine. After installing AFP support, I use the following command: sudo mount_afp afp://USER:[email protected]/directory/ /media/dir This seems to work and it tells me that mounting succeeded. However, when I navigate to /media/dir I get the following error: cd: cfs: Input/output error Permissions in /media are: d????????? ? ? ? ? dir/ drwx------ 12 user 4.0K 2010-10-25 16:08 otherdisk/ So there is a permissions problem here. I eventually want to mount this drive automatically using fstab. What do I need to do to make the disk accessible?

    Read the article

  • Failed to bring up eth1 in a dual-IP solution in Ubuntu

    - by lxyu
    I'm using Ubuntu 12.04. I tried to assign two IPs to two Ethernet cards in my server. The content of /etc/network/interfaces is like this: auto lo iface lo inet loopback auto eth0 iface eth0 inet static address 114.80.156.a netmask 255.255.255.224 gateway 114.80.156.b auto eth1 iface eth1 inet static address 114.80.156.c netmask 255.255.255.240 gateway 114.80.156.d a, b, c, d have different values, which means the two IPs are in different VLANs. But I can only bring up eth0 with this command: $ /etc/init.d/networking restart RTNETLINK answers: File exists Failed to bring up eth1. ...done. I have checked the question here which shows the same problem as the one I encountered: Can only bring up one of two interfaces But it seems it's not really solved. And in my situation, I need the 2 IPs to use 2 different gateways. So how do I fix this problem? Edit1: changed the example config IP from the 192.168.0.0/16 subnet to another 'real' subnet. Edit2: the purpose of doing this is fairly simple. The IP range I was previously in doesn't have room for new servers, and I have to move to another IP range. So I want to make the public servers bind to 2 IPs for the transition period. I only have really limited knowledge about routing and subnets. @BillThor @rackandboneman, would you please give me some keywords or links on how to set up routes for 2 IPs? And @Mike Pennington, how do you know I speak Chinese?
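
    Since the main routing table can only hold one default gateway, the keyword to search for is "policy routing" with iproute2; a sketch using the question's placeholder addresses (.c is eth1's address, .d its gateway):

      # give eth1 its own routing table
      echo "200 eth1table" >> /etc/iproute2/rt_tables
      ip route add default via 114.80.156.d dev eth1 table eth1table
      # traffic sourced from eth1's address uses that table and gateway
      ip rule add from 114.80.156.c/32 table eth1table
      ip route flush cache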

    Read the article

  • Expanding globs in xargs

    - by Craig
    I have a directory like this mkdir test cd test touch file{0,1}.txt otherfile{0,1}.txt stuff{0,1}.txt I want to run some command such as ls on certain types of files in the directory and have the * (glob) expand to all possibilities for the filename. echo 'file otherfile' | tr ' ' '\n' | xargs -I % ls %*.txt This command does not expand the glob and tries to look for the literal 'file*.txt' How do I write a similar command that expands the globs? (I want to use xargs so the command can be run in parallel)
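
    For illustration, a minimal sketch of one workaround: hand each argument to a shell so the glob is expanded there (GNU xargs assumed; -P sets the parallelism):

      echo 'file otherfile' | tr ' ' '\n' | xargs -P 2 -I % sh -c 'ls %*.txt'

    For untrusted file names the safer spelling is sh -c 'ls "$1"*.txt' _ %, which avoids substituting the argument directly into the shell command text.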

    Read the article

  • Varnish "FetchError no backend connection" error

    - by clueless-anon
    Varnishlog: 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1340829925 1.0 12 SessionOpen c 79.124.74.11 3063 :80 12 SessionClose c EOF 12 StatSess c 79.124.74.11 3063 0 1 0 0 0 0 0 0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1340829928 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1340829931 1.0 12 SessionOpen c 108.62.115.226 46211 :80 12 ReqStart c 108.62.115.226 46211 467185881 12 RxRequest c GET 12 RxURL c / 12 RxProtocol c HTTP/1.0 12 RxHeader c User-Agent: Pingdom.com_bot_version_1.4_(http://www.pingdom.com/) 12 RxHeader c Host: www.mysite.com 12 VCL_call c recv lookup 12 VCL_call c hash 12 Hash c / 12 Hash c www.mysite.com 12 VCL_return c hash 12 VCL_call c miss fetch 12 FetchError c no backend connection 12 VCL_call c error deliver 12 VCL_call c deliver deliver 12 TxProtocol c HTTP/1.1 12 TxStatus c 503 12 TxResponse c Service Unavailable 12 TxHeader c Server: Varnish 12 TxHeader c Content-Type: text/html; charset=utf-8 12 TxHeader c Retry-After: 5 12 TxHeader c Content-Length: 418 12 TxHeader c Accept-Ranges: bytes 12 TxHeader c Date: Wed, 27 Jun 2012 20:45:31 GMT 12 TxHeader c X-Varnish: 467185881 12 TxHeader c Age: 1 12 TxHeader c Via: 1.1 varnish 12 TxHeader c Connection: close 12 Length c 418 12 ReqEnd c 467185881 1340829931.192433119 1340829931.891024113 0.000051022 0.698516846 0.000074035 12 SessionClose c error 12 StatSess c 108.62.115.226 46211 1 1 1 0 0 0 256 418 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1340829934 1.0 0 CLI - Rd ping 0 CLI - Wr 200 19 PONG 1340829937 1.0 netstat -tlnp Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 3086/nginx tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1915/varnishd tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1279/sshd tcp 0 0 127.0.0.2:25 0.0.0.0:* LISTEN 3195/sendmail: MTA: tcp 0 0 127.0.0.2:6082 0.0.0.0:* LISTEN 1914/varnishd tcp 0 0 127.0.0.2:9000 0.0.0.0:* LISTEN 1317/php-fpm.conf) tcp 0 0 127.0.0.2:3306 0.0.0.0:* LISTEN 1192/mysqld tcp 0 0 127.0.0.2:587 0.0.0.0:* LISTEN 3195/sendmail: MTA: tcp 0 0 127.0.0.2:11211 0.0.0.0:* LISTEN 3072/memcached tcp6 0 0 :::8080 :::* LISTEN 3086/nginx tcp6 0 0 :::80 :::* LISTEN 1915/varnishd tcp6 0 0 :::22 :::* LISTEN 1279/sshd /etc/nginx/site-enabled/default server { listen 8080; ## listen for ipv4; this line is default and implied listen [::]:8080 default ipv6only=on; ## listen for ipv6 root /usr/share/nginx/www; index index.html index.htm index.php; # Make site accessible from http://localhost/ server_name localhost; location / { # First attempt to serve request as file, then # as directory, then fall back to index.html try_files $uri $uri/ /index.html; } location /doc { root /usr/share; autoindex on; allow 127.0.0.2; deny all; } location /images { root /usr/share; autoindex off; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /usr/share/nginx/www; #} # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { fastcgi_pass 127.0.0.2:9000; fastcgi_index index.php; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } /etc/nginx/sites-enabled/www.mysite.com.vhost server { listen 8080; server_name www.mysite.com mysite.com.net; root /var/www/www.mysite.com/web; if 
($http_host != "www.mysite.com") { rewrite ^ http://www.mysite.com$request_uri permanent; } index index.php index.html; location = /favicon.ico { log_not_found off; access_log off; } location = /robots.txt { allow all; log_not_found off; access_log off; } # Deny all attempts to access hidden files such as .htaccess, .htpasswd, .DS_Store (Mac). location ~ /\. { deny all; access_log off; log_not_found off; } location / { try_files $uri $uri/ /index.php?$args; } # Add trailing slash to */wp-admin requests. rewrite /wp-admin$ $scheme://$host$uri/ permanent; location ~* \.(jpg|jpeg|png|gif|css|js|ico)$ { expires max; log_not_found off; } location ~ \.php$ { try_files $uri =404; include /etc/nginx/fastcgi_params; fastcgi_pass 127.0.0.2:9000; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } include /var/www/www.mysite.com/web/nginx.conf; location ~ /nginx.conf { deny all; access_log off; log_not_found off; } } /etc/varnish/default.vcl # This is a basic VCL configuration file for varnish. See the vcl(7) # man page for details on VCL syntax and semantics. # # Default backend definition. Set this to point to your content # server. # backend default { .host = "127.0.0.2"; .port = "8080"; # .connect_timeout = 600s; #.first_byte_timeout = 600s; # .between_bytes_timeout = 600s; # .max_connections = 800; Note: uncommenting the last four options at default.vcl made no difference. cat /etc/default/varnish # Configuration file for varnish # # /etc/init.d/varnish expects the variables $DAEMON_OPTS, $NFILES and $MEMLOCK # to be set from this shell script fragment. # # Should we start varnishd at boot? Set to "yes" to enable. START=yes # Maximum number of open files (for ulimit -n) NFILES=131072 # Maximum locked memory size (for ulimit -l) # Used for locking the shared memory log in memory. If you increase log size, # you need to increase this number as well MEMLOCK=82000 # Default varnish instance name is the local nodename. Can be overridden with # the -n switch, to have more instances on a single server. INSTANCE=$(uname -n) # This file contains 4 alternatives, please use only one. ## Alternative 1, Minimal configuration, no VCL # # Listen on port 6081, administration on localhost:6082, and forward to # content server on localhost:8080. Use a 1GB fixed-size cache file. # # DAEMON_OPTS="-a :6081 \ # -T localhost:6082 \ # -b localhost:8080 \ # -u varnish -g varnish \ # -S /etc/varnish/secret \ # -s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,1G" ## Alternative 2, Configuration with VCL # # Listen on port 6081, administration on localhost:6082, and forward to # one content server selected by the vcl file, based on the request. Use a 1GB # fixed-size cache file. # DAEMON_OPTS="-a :80 \ -T 127.0.0.2:6082 \ -f /etc/varnish/default.vcl \ -S /etc/varnish/secret \ -s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,1G" If you need any other info let me know. I am all out of clue as to whats the problem.

    Read the article

  • RAID FS detection at boot time

    - by alex
    An excerpt from dmesg: md: Autodetecting RAID arrays. md: Scanned 2 and added 2 devices. md: autorun ... md: considering sdb1 ... md: adding sdb1 ... md: adding sda1 ... md: created md1 md: bind<sda1> md: bind<sdb1> md: running: <sdb1><sda1> raid1: raid set md1 active with 2 out of 2 mirrors md1: detected capacity change from 0 to 1500299198464 md: ... autorun DONE. md1: unknown partition table EXT3-fs (md1): error: couldn't mount because of unsupported optional features (240) EXT2-fs (md1): error: couldn't mount because of unsupported optional features (240) EXT4-fs (md1): mounted filesystem with ordered data mode Is it OK that the kernel tries to mount an ext4 RAID as ext3 and ext2 first? Is there a way to tell it to skip those two steps? Just in case, the fstab entry is: /dev/md1 / ext4 noatime 0 1 TIA.

    Read the article

  • Sendmail Sends but never Delivers

    - by Jeremy
    I have tried 10 different emails hosted at Google, Yahoo!, GoDaddy, and some that are privately hosted, and each time I get the following errors. I have blocked sensitive information, but you will be able to see the errors. Feb 16 17:06:50 xxxxx sendmail[31824]: o1GM6ovJ031824: [email protected], ctladdr=www-data (33/33), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30054, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (o1GM6oJo031825 Message accepted for delivery) Feb 16 16:54:19 xxxxx sendmail[31625]: o1GLsJPP031625: [email protected], ctladdr=www-data (33/33), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30097, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (o1GLsJah031626 Message accepted for delivery) Feb 17 09:05:52 xxxxx sm-mta[10620]: o1H6Z3jM005734: to=<[email protected]>, ctladdr=<[email protected]> (33/33), delay=07:30:49, xdelay=01:15:36, mailer=esmtp, pri=571331, relay=aspmx3.googlemail.com. [209.85.222.4], dsn=4.0.0, stat=Deferred: Connection timed out with aspmx3.googlemail.com. Feb 17 10:35:23 xxxxx sm-mta[12828]: o1HEZwn8011833: to=<[email protected]>, ctladdr=<[email protected]> (33/33), delay=00:59:25, xdelay=00:12:36, mailer=esmtp, pri=300353, relay=aln-mailrelay.att.net. [12.102.252.75], dsn=4.0.0, stat=Deferred: Connection timed out with aln-mailrelay.att.net. If you take a look, they all send, but then (HOURS later) I get an error "stat=Deferred: Connection timed out with {server}". I'm at my wits end, because I use this same setup on each of my servers, and they all work.

    Read the article

  • slow pppoe connection using Ubuntu 9.10

    - by Radu
    I have a Compaq Presario CQ61 with Ubuntu 9.10 and Windows 7 installed on it. It works great except for the PPPoE connection in Ubuntu: when I dial in Windows my download speed reaches up to 91 Mb; rebooted into Ubuntu, downloaded the same file from the same server at a maximum speed of 3 Mb; checked in Windows again, a constant 80-90 Mb. I can't figure out what slows down the internet connection in Ubuntu. Does anyone have an idea about this problem? (NO iptables configured, NO HTB, CBQ, etc. configured.) Thank you
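
    One hedged guess worth ruling out is an MTU mismatch on the PPPoE link (Windows tends to sort this out on its own); a sketch of the usual checks, assuming the PPPoE interface is ppp0:

      ip link show ppp0                   # PPPoE normally needs an MTU of 1492
      sudo ip link set dev ppp0 mtu 1492
      # if the box also routes for other machines, clamp TCP MSS to the path MTU
      sudo iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu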

    Read the article

  • Wget works, Ping doesn't

    - by derty
    There are some anomalies on a Virtuozzo virtualized Debian 4 (I know, I'm going to upgrade this one asap, but there are dependencies). We run some websites on this one. A few days ago exim4 wasn't able to send mail to SOME people. I'll use live.com as an example domain! So some of these people got mail and some didn't. Some of the mails got stuck in the queue, and after 2 days they went out!! My Nagios never showed problems with the internet connection or disk space. Now I wanted to install "dig" to look at how it resolves the DNS request, and this Debian tells me it doesn't know dig.. Long story short, Debian is able to download sites by exact IP or even with wget live.com, but it is not able to ping live.com. I'm 99% sure that the networking is right and the routing too! Some examples of my attempts below: wget live.com downloads the site ping live.com ping http://www.live.com ping http://live.com returns: ping: unknown host live.com EDIT: I now use heise.de instead of live.com, and I found out I can ping the heise.de server by using its IP address. myserver:~# ping 193.99.144.85 PING 193.99.144.85 (193.99.144.85) 56(84) bytes of data. 64 bytes from 193.99.144.85: icmp_seq=1 ttl=248 time=12.7 ms 64 bytes from 193.99.144.85: icmp_seq=2 ttl=248 time=12.6 ms 64 bytes from 193.99.144.85: icmp_seq=3 ttl=248 time=12.9 ms 64 bytes from 193.99.144.85: icmp_seq=4 ttl=248 time=13.1 ms 64 bytes from 193.99.144.85: icmp_seq=5 ttl=248 time=13.1 ms --- 193.99.144.85 ping statistics --- 5 packets transmitted, 5 received, 0% packet loss, time 4001ms rtt min/avg/max/mdev = 12.671/12.924/13.163/0.238 ms EDIT 2: myserver:/etc/apt# dig heise.de ; <<>> DiG 9.3.4-P1.2 <<>> heise.de ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40551 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 5, ADDITIONAL: 3 ;; QUESTION SECTION: ;heise.de. IN A ;; ANSWER SECTION: heise.de. 2266 IN A 193.99.144.80 ;; AUTHORITY SECTION: heise.de. 1622 IN NS ns.pop-hannover.de. heise.de. 1622 IN NS ns.s.plusline.de. heise.de. 1622 IN NS ns.plusline.de. heise.de. 1622 IN NS ns2.pop-hannover.net. heise.de. 1622 IN NS ns.heise.de. ;; ADDITIONAL SECTION: ns.plusline.de. 265 IN A 212.19.48.14 ns.pop-hannover.de. 5113 IN A 193.98.1.200 ns2.pop-hannover.net. 15150 IN A 62.48.67.66 ;; Query time: 2 msec ;; SERVER: 193.200.112.80#53(193.200.112.80) ;; WHEN: Tue Oct 9 13:03:50 2012 ;; MSG SIZE rcvd: 216
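
    Since dig was not even installed here, a few hedged checks of the resolver path that ping actually uses (the libc resolver, steered by /etc/nsswitch.conf) may narrow it down:

      getent hosts heise.de              # resolves through the same path ping uses
      grep '^hosts' /etc/nsswitch.conf   # make sure 'dns' is listed
      cat /etc/resolv.conf               # make sure a reachable nameserver is configured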

    Read the article

  • What is the simplest way to build your own .deb package?

    - by Calvin Fisher
    Having used Ubuntu for several years now, I've assembled a short list of scripts and packages that I always install on my computers. I would like to pack them up into a .deb to make it easier to get set up on a fresh OS installation. I'm imagining, for instance, one package that would install all of my custom BASH scripts that I've made for common tasks, and another one that would depend on other packages (like w64codecs) that I always install but forget that I need to until I go to do something and it's not there. It doesn't even have to be by-the-book; I'm not looking to deploy these publicly. I'm just looking to roll up all these tasks into one sudo dpkg --install. To quantify "simple" or "easy," I mean to say that I'm looking for the method with the fewest steps requiring the least technical knowledge and, most importantly, taking the least time.
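
    For what it's worth, a minimal sketch of the by-hand route with dpkg-deb (the package name, version, script name and dependency list are made up):

      mkdir -p my-setup/DEBIAN my-setup/usr/local/bin
      cp mytask.sh my-setup/usr/local/bin/

    with my-setup/DEBIAN/control containing something like:

      Package: my-setup
      Version: 1.0
      Architecture: all
      Depends: w64codecs
      Maintainer: Your Name <you@example.com>
      Description: Personal scripts and the packages I always install

    and then:

      dpkg-deb --build my-setup my-setup_1.0_all.deb
      sudo dpkg --install my-setup_1.0_all.deb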

    Read the article

  • Sharing Bandwidth and Prioritizing Realtime Traffic via HTB, Which Scenario Works Better?

    - by Mecki
    I would like to add some kind of traffic management to our Internet line. After reading a lot of documentation, I think HFSC is too complicated for me (I don't understand all the curves stuff, I'm afraid I will never get it right), CBQ is not recommend, and basically HTB is the way to go for most people. Our internal network has three "segments" and I'd like to share bandwidth more or less equally between those (at least in the beginning). Further I must prioritize traffic according to at least three kinds of traffic (realtime traffic, standard traffic, and bulk traffic). The bandwidth sharing is not as important as the fact that realtime traffic should always be treated as premium traffic whenever possible, but of course no other traffic class may starve either. The question is, what makes more sense and also guarantees better realtime throughput: Creating one class per segment, each having the same rate (priority doesn't matter for classes that are no leaves according to HTB developer) and each of these classes has three sub-classes (leaves) for the 3 priority levels (with different priorities and different rates). Having one class per priority level on top, each having a different rate (again priority won't matter) and each having 3 sub-classes, one per segment, whereas all 3 in the realtime class have highest prio, lowest prio in the bulk class, and so on. I'll try to make this more clear with the following ASCII art image: Case 1: root --+--> Segment A | +--> High Prio | +--> Normal Prio | +--> Low Prio | +--> Segment B | +--> High Prio | +--> Normal Prio | +--> Low Prio | +--> Segment C +--> High Prio +--> Normal Prio +--> Low Prio Case 2: root --+--> High Prio | +--> Segment A | +--> Segment B | +--> Segment C | +--> Normal Prio | +--> Segment A | +--> Segment B | +--> Segment C | +--> Low Prio +--> Segment A +--> Segment B +--> Segment C Case 1 Seems like the way most people would do it, but unless I don't read the HTB implementation details correctly, Case 2 may offer better prioritizing. The HTB manual says, that if a class has hit its rate, it may borrow from its parent and when borrowing, classes with higher priority always get bandwidth offered first. However, it also says that classes having bandwidth available on a lower tree-level are always preferred to those on a higher tree level, regardless of priority. Let's assume the following situation: Segment C is not sending any traffic. Segment A is only sending realtime traffic, as fast as it can (enough to saturate the link alone) and Segment B is only sending bulk traffic, as fast as it can (again, enough to saturate the full link alone). What will happen? Case 1: Segment A-High Prio and Segment B-Low Prio both have packets to send, since A-High Prio has the higher priority, it will always be scheduled first, till it hits its rate. Now it tries to borrow from Segment A, but since Segment A is on a higher level and Segment B-Low Prio has not yet hit its rate, this class is now served first, till it also hits the rate and wants to borrow from Segment B. Once both have hit their rates, both are on the same level again and now Segment A-High Prio is going to win again, until it hits the rate of Segment A. Now it tries to borrow from root (which has plenty of traffic spare, as Segment C is not using any of its guaranteed traffic), but again, it has to wait for Segment B-Low Prio to also reach the root level. 
Once that happens, priority is taken into account again and this time Segment A-High Prio will get all the bandwidth left over from Segment C. Case 2: High Prio-Segment A and Low Prio-Segment B both have packets to send, again High Prio-Segment A is going to win as it has the higher priority. Once it hits its rate, it tries to borrow from High Prio, which has bandwidth spare, but being on a higher level, it has to wait for Low Prio-Segment B again to also hit its rate. Once both have hit their rate and both have to borrow, High Prio-Segment A will win again until it hits the rate of the High Prio class. Once that happens, it tries to borrow from root, which has again plenty of bandwidth left (all bandwidth of Normal Prio is unused at the moment), but it has to wait again until Low Prio-Segment B hits the rate limit of the Low Prio class and also tries to borrow from root. Finally both classes try to borrow from root, priority is taken into account, and High Prio-Segment A gets all bandwidth root has left over. Both cases seem sub-optimal, as either way realtime traffic sometimes has to wait for bulk traffic, even though there is plenty of bandwidth left it could borrow. However, in case 2 it seems like the realtime traffic has to wait less than in case 1, since it only has to wait till the bulk traffic rate is hit, which is most likely less than the rate of a whole segment (and in case 1 that is the rate it has to wait for). Or am I totally wrong here? I thought about even simpler setups, using a priority qdisc. But priority queues have the big problem that they cause starvation if they are not somehow limited. Starvation is not acceptable. Of course one can put a TBF (Token Bucket Filter) into each priority class to limit the rate and thus avoid starvation, but when doing so, a single priority class cannot saturate the link on its own any longer, even if all other priority classes are empty, the TBF will prevent that from happening. And this is also sub-optimal, since why wouldn't a class get 100% of the line's bandwidth if no other class needs any of it at the moment? Any comments or ideas regarding this setup? It seems so hard to do using standard tc qdiscs. As a programmer it was such an easy task if I could simply write my own scheduler (which I'm not allowed to do).
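
    To make the second layout concrete, a sketch of the tc commands for case 2 (device, rates and class ids are placeholders; only the first segment leaf is shown, the rest follow the same pattern):

      tc qdisc add dev eth0 root handle 1: htb default 33
      tc class add dev eth0 parent 1:  classid 1:1  htb rate 10mbit ceil 10mbit
      # the three priority tiers
      tc class add dev eth0 parent 1:1 classid 1:10 htb rate 3mbit ceil 10mbit prio 0
      tc class add dev eth0 parent 1:1 classid 1:20 htb rate 4mbit ceil 10mbit prio 1
      tc class add dev eth0 parent 1:1 classid 1:30 htb rate 3mbit ceil 10mbit prio 2
      # one leaf per segment under each tier, e.g. segment A under the realtime tier
      tc class add dev eth0 parent 1:10 classid 1:11 htb rate 1mbit ceil 10mbit
      # ... 1:12, 1:13 under 1:10; 1:21-1:23 under 1:20; 1:31-1:33 under 1:30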

    Read the article

  • Why does terminal prompt text become black when connecting to Ubuntu with the NoMachine NX client?

    - by David B
    I'm using the NoMachine NX client for Windows to remotely connect to my Ubuntu machine. Every now and then I experience a strange phenomenon: the text in the prompt of all open terminal windows becomes black, so it can't be seen against the black background. Typed commands are also black, but the results are in normal colors. So I can run stuff, but can't see what I'm typing... After I close all open terminal windows and start a new terminal window, everything goes back to normal. This is really annoying and happens quite often. Any idea why?

    Read the article

  • httpd dead but subsys locked

    - by McShark
    Hello, today I modified max_execution_time in php.ini; when I restarted the server, I got this error: Stopping httpd: [FAILED] Starting httpd: (98)Address already in use: make_sock: could not bind to address [::]:80 (98)Address already in use: make_sock: could not bind to address 0.0.0.0:80 no listening sockets available, shutting down Unable to open logs I killed the httpd processes with killall httpd and started it fine, but then I can't open any website on the server. service httpd status OUTPUT: httpd dead but subsys locked I removed the httpd file from /var/lock/subsys/ :S Same problem. Please help!
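
    A hedged sequence that sometimes clears this state on Red Hat style systems (which the /var/lock/subsys path suggests): make sure nothing is still bound to port 80 before removing the lock and starting again.

      netstat -tlnp | grep ':80 '     # note any PID still holding the port
      killall httpd && sleep 2
      rm -f /var/lock/subsys/httpd
      service httpd start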

    Read the article
