Search Results

Search found 27819 results on 1113 pages for 'linux intel'.

Page 401/1113 | < Previous Page | 397 398 399 400 401 402 403 404 405 406 407 408  | Next Page >

  • traffic shaping for certain (local) users

    - by JMW
    Hello, I'm using Ubuntu 10.10. I have a local backup user called "backup". :) I would like to limit this user to 1 Mbit of bandwidth, no matter which software connects to the network. This solution doesn't work:

      iptables -t mangle -A OUTPUT -p tcp -m owner --uid-owner 1001 -j MARK --set-mark 12
      iptables -t mangle -A POSTROUTING -p tcp -m owner --uid-owner 1001 -j MARK --set-mark 12
      tc qdisc del dev eth0 root
      tc qdisc add dev eth0 root handle 2 htb default 1
      tc filter add dev eth0 parent 2: protocol ip pref 2 handle 50 fw classid 2:6
      tc class add dev eth0 parent 2: classid 2:6 htb rate 10Kbit ceil 1Mbit
      tc qdisc show dev eth0
      tc class show dev eth0
      tc filter show dev eth0

    Does anyone know how to do it? Thanks a lot in advance.
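
    A minimal sketch of a variant that should work, assuming uid 1001 is the backup user and eth0 is the outgoing interface. Two details differ from the attempt above: the fw filter's handle has to equal the iptables mark value (12, not 50), and the class named by "default 1" has to exist, otherwise unmarked traffic has nowhere to go:

      # mark all locally generated packets owned by uid 1001
      iptables -t mangle -A OUTPUT -m owner --uid-owner 1001 -j MARK --set-mark 12

      tc qdisc add dev eth0 root handle 2: htb default 1
      # default class for unmarked traffic (rate is an assumed link speed)
      tc class add dev eth0 parent 2: classid 2:1 htb rate 100Mbit
      # capped class for the backup user
      tc class add dev eth0 parent 2: classid 2:6 htb rate 1Mbit ceil 1Mbit
      # "handle 12 fw" matches the firewall mark set above
      tc filter add dev eth0 parent 2: protocol ip pref 2 handle 12 fw classid 2:6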

  • samba4 not building in Arch

    - by kmplsv
      cp bin/tdbtool bin/tdbdump bin/tdbbackup /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/bin
      cp ./include/tdb.h /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/include
      cp tdb.pc /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/pkgconfig
      cp libtdb.a libtdb.so.1.2.4 /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib
      rm -f /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so
      ln -s libtdb.so.1.2.4 /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so
      rm -f /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so.1
      ln -s libtdb.so.1.2.4 /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so.1
      mkdir -p /tmp/yaourt-tmp-root/aur-samba4/pkg/`/tmp/yaourt-tmp-root/aur-samba4/src/bin/python -c "import distutils.sysconfig; print distutils.sysconfig.get_python_lib(1, prefix='/opt/samba4/samba')"`
      cp tdb.so /tmp/yaourt-tmp-root/aur-samba4/pkg/`/tmp/yaourt-tmp-root/aur-samba4/src/bin/python -c "import distutils.sysconfig; print distutils.sysconfig.get_python_lib(1, prefix='/opt/samba4/samba')"`
      /bin/install -c -d /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/share/man/man8
      for I in manpages/*.8; do \
        /bin/install -c -m 644 $I /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/share/man/man8; \
      done
      /bin/install: cannot stat `manpages/*.8': No such file or directory
      make: *** [installdocs] Error 1
      Aborting...
      ==> ERROR: Makepkg was unable to build samba4.
      ==> Restart building samba4 ? [y/N]

    Any ideas as to what is causing my build to fail? I assume it's an issue with man pages, but I can't figure out exactly what package it's looking for that I don't have.
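
    The direct failure is that the installdocs target expects pre-built man pages in manpages/*.8 and finds none, which usually means the documentation step never ran. A hedged guess for the missing dependency is the DocBook/XSLT toolchain used to generate man pages; on Arch that can be checked before retrying the build:

      # verify the man-page toolchain is present (package names are Arch's)
      pacman -Q libxslt docbook-xsl || sudo pacman -S libxslt docbook-xsl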

  • squid running out of sockets

    - by drscroogemcduck
    I have a setup where squid sits in front of a java server and acts as a reverse proxy. Recently I load tested the site, and if I fire 100 threads at it, each making a request using jmeter, I start getting errors in my load test tool like 'no route to host', even though the load test tool and the server are on the same machine. If I run the following command, where port 82 is the port my squid server is running on:

      netstat -ann | grep 82 | wc -l

    I get 22000 or so, and most of them are in TIME_WAIT. I'm thinking that maybe the huge number of sockets in the TIME_WAIT state is starving the box of resources.
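
    Two things worth checking (a sketch; the sysctls are standard Linux knobs, but test before trusting a benchmark run). First, count only sockets whose local port really is 82, since "grep 82" matches any line containing 82 anywhere; then relax the TIME_WAIT pressure:

      # socket states for local port 82 only
      netstat -ant | awk '$4 ~ /:82$/ {print $6}' | sort | uniq -c

      # allow reuse of TIME_WAIT sockets for new outbound connections
      sysctl -w net.ipv4.tcp_tw_reuse=1
      # widen the ephemeral port range available to clients
      sysctl -w net.ipv4.ip_local_port_range="15000 65000"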

  • Ubuntu 12, limit the resolution to 640x480

    - by TimothyP
    How can I set and limit the resolution in Ubuntu 12 to 640x480? There's not much in the xorg.conf file anymore, so I'm guessing this is no longer the place to do it. I can't do it using the GUI either, because it doesn't show me the 640x480 option. While setting the resolution the computer is connected to a normal screen, but later it will be connected to a screen that only supports 640x480 and doesn't report its supported modes to the computer. The only thing in my xorg.conf (by default) is this:

      Section "Device"
          Identifier "Default Device"
          Option "NoLogo" "True"
      EndSection
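
    Since the target screen doesn't advertise its modes, the mode has to be defined by hand. A sketch using xrandr (the output name VGA-0 is an assumption; check xrandr -q, and the modeline is the literal output of cvt 640 480 60):

      xrandr --newmode "640x480_60.00"  23.75  640 664 720 800  480 483 487 500 -hsync +vsync
      xrandr --addmode VGA-0 640x480_60.00
      xrandr --output VGA-0 --mode 640x480_60.00

    Running those three commands from a startup script limits the session to that mode; an equivalent Modeline plus a PreferredMode option in a Monitor section of xorg.conf makes it persistent.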

  • Routing RFC1918 addresses through dd-wrt via a switch

    - by espenfjo
    I am a bit stuck with an experiment of mine. I have a network looking somewhat like this:

      Internet
         |
      Switch ------ Server w/ public IP
         |
      DD-WRT router (192.168.1.1)
         |
      RFC1918 clients (192.168.1.0/24)

    What I want is for the RFC1918 clients to speak directly with each other. On the server with the public IP I have this route:

      192.168.1.0/24 dev eth0 scope link

    and I can see that packets are in fact reaching the dd-wrt router for 192.168.1.1, even though I get no answer. Trying to reach one of the RFC1918 clients from the public-IP server gets no result, as the dd-wrt router is not announcing that network on its external interface (arp who-has 192.168.1.107 tell xxx.xxx.xxx.xxx, but no answer). The router, being a WLAN dd-wrt router, has of course a load of routes, VLANs and interfaces:

      xxx.xxx.xxx.1 dev vlan2 scope link
      192.168.1.0/24 dev br0 proto kernel scope link src 192.168.1.1
      192.168.1.0/24 dev eth1 proto kernel scope link src 192.168.1.244
      84.215.64.0/18 dev vlan2 proto kernel scope link src xxx.xxx.xxx.xxx
      169.254.0.0/16 dev br0 proto kernel scope link src 169.254.255.1
      127.0.0.0/8 dev lo scope link
      0.0.0.0 via xxx.xxx.xxx.1 dev vlan2

    xxx.xxx.xxx.xxx being the public IP, and xxx.xxx.xxx.1 being the default route for the public IP. I am not sure where to continue with this. I would reckon that I need both routing on the dd-wrt router and some iptables magic? Why do something this complex? Why not ;) Also, do not mind that "Internet" can get RFC1918 traffic; it won't go outside of the walls.

    EDIT 1: Following the tip from stew I do indeed get the correct ARP flowing, and after adding an iptables rule allowing traffic from that specific public-IP machine, I get traffic between the systems! Oddly enough, though, the speed I get between the public-IP server and the RFC1918 clients is the same as if the traffic were routed out onto the Internet and back.

    Edit 2: Ok, disconnecting the external Internet connection still gives the same crappy transfer speed, so it has to be something else.

    Edit 3: Ok, I guess there are other reasons for this crappy speed. Case closed. :)
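
    For reference, the working combination from EDIT 1 amounts to something like this (a sketch; xxx.xxx.xxx.xxx stands for the public server's address, as above):

      # on the public server: reach the RFC1918 net over the local wire
      ip route add 192.168.1.0/24 dev eth0

      # on the dd-wrt router: permit forwarding for that one public host
      iptables -I FORWARD -s xxx.xxx.xxx.xxx -d 192.168.1.0/24 -j ACCEPT
      iptables -I FORWARD -s 192.168.1.0/24 -d xxx.xxx.xxx.xxx -j ACCEPT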

  • insserv: Script <SCRIPT_NAME> is broken: missing end of LSB comment.

    - by udo
    I am getting this error when running:

      insserv -r udo-startup.sh
      insserv: Script udo-startup.sh is broken: missing end of LSB comment.
      insserv: exiting now!

    The content of udo-startup.sh is this:

      #!/bin/bash
      ### BEGIN INIT INFO
      # Provides:          udo-startup.sh
      # Required-Start:    $local_fs $remote_fs $network $syslog
      # Required-Stop:     $local_fs $remote_fs $network $syslog
      # Default-Start:     2 3 4 5
      # Default-Stop:      0 1 6
      # Short-Description: -
      # Description:       -
      ### END INIT INF

      ID=$(xinput list | grep -i touchpad | sed '/TouchPad/s/^.*id=\([0-9]*\).*$/\1/')
      xinput set-prop $ID "Device Enabled" 0
      exit 0
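
    Note the closing line of the LSB block above: it reads "### END INIT INF". insserv scans for the exact closing marker, so the missing trailing "O" is almost certainly what produces "missing end of LSB comment". The fix is a single character:

      ### END INIT INFO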

  • Should I be worried about the max number of files in a folder on *NIX filesystems?

    - by ??????
    In a social networking project we want to store users' avatars in a folder. I think in one year or two it'll reach 140K files (I've seen this issue before, and it ended up around that number). I want to spread the files across folders: if a folder contains 1000 files, create another folder and store files 1001 to 2000 there. Is this a good approach, or am I just being overly cautious about the issue? (File system: ext3)
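
    ext3 handles directories of that size reasonably if the dir_index feature is enabled, but sharding still keeps directory listings, rsync runs and FTP clients fast, so the plan is sound. A sketch of both (the device name and the userid variable are placeholders); hashing by name rather than by insertion order means files never have to migrate between folders:

      # is dir_index enabled on the filesystem?
      tune2fs -l /dev/sda1 | grep dir_index

      # place an avatar under a two-level hashed path, e.g. avatars/ab/cd/12345.png
      h=$(printf '%s' "$userid" | md5sum | cut -c1-4)
      mkdir -p "avatars/${h:0:2}/${h:2:2}"
      mv avatar.png "avatars/${h:0:2}/${h:2:2}/${userid}.png"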

  • How to add message that will be read with dmesg?

    - by calandoa
    I am trying to write some custom messages into my dmesg output. I tried:

      logger "Hello"

    but this does not work. It exits without error, but no "Hello" appears in the output of:

      dmesg

    I am using Fedora 9, and it seems that there is no syslogd/klogd daemon running. However, all my kernel messages are successfully written to the dmesg buffer. Any idea?
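
    logger submits messages to syslog, not to the kernel ring buffer, so nothing can show up in dmesg, especially with no syslog daemon running. Writing to /dev/kmsg injects a line directly into the ring buffer on Linux (root required):

      echo "Hello from userspace" > /dev/kmsg
      dmesg | tail -n 1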

  • Cobalt Qube 3 Unresponsive

    - by Devin Gund
    I recently bought a used Cobalt Qube 3 server on eBay. The seller listed it as working before it was shipped. After starting it up for the first time, the "Cobalt Networks" logo scrolled across the LCD screen (this is normal), but then it stayed there. It will not go away, and the server will not respond to any of the buttons besides the power button. I read the manual, and it does not say anything about this happening in normal operation. If necessary, could anyone walk me through a reset, or provide links to a reset tutorial? I have the OS Restore CD. Please help! I'm not sure why my question is being downvoted, but if you need any more information, please comment.

  • black backgrounds appear grey on gnome-terminal

    - by Martin DeMello
    Running GNOME under Ubuntu Lucid:

      $ env | grep TERM
      TERM=xterm
      COLORTERM=gnome-terminal

    I had to edit both my .muttrc and my vim colorscheme to change the background color from black to none in order to get a proper black background (or, more accurately, to retain the terminal's default black background). Setting it to black resulted in a dark grey background. This only happens with gnome-terminal; konsole, xterm and rxvt are fine.

  • Monit can't detect MySQL, but I can

    - by Matchu
    Monit is configured to watch MySQL on localhost at port 3306:

      check process mysqld with pidfile /var/lib/mysql/li175-241.pid
        start program = "/etc/init.d/mysql start"
        stop program = "/etc/init.d/mysql stop"
        if failed port 3306 protocol mysql then restart
        if 5 restarts within 5 cycles then timeout

    My application, which is configured to connect to MySQL via localhost:3306, is running just fine and can access the database. I can even use MySQL Query Browser to connect to the database remotely via port 3306. The port is totally open and possible to connect to, so I'm pretty darn certain that MySQL is running. However, running monit -v reveals that Monit cannot detect MySQL on that port:

      'mysqld' failed, cannot open a connection to INET[localhost:3306] via TCP

    This happens consistently, until Monit decides not to track MySQL anymore, as configured. How can I begin to troubleshoot this issue?
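
    One way to split the problem is to drop the protocol-level probe and test the bare TCP port first; if the plain test passes, the failure is in Monit's mysql protocol handshake rather than in connectivity (a sketch):

      check process mysqld with pidfile /var/lib/mysql/li175-241.pid
        start program = "/etc/init.d/mysql start"
        stop program = "/etc/init.d/mysql stop"
        if failed host 127.0.0.1 port 3306 then restart
        if 5 restarts within 5 cycles then timeout

    Naming the host explicitly as 127.0.0.1 also sidesteps any localhost-resolution oddities on Monit's side.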

  • Nginx Forward SSL for single site

    - by Will.brown
    I have an nginx server set up, and it works fine for HTTP. However, I would like to bypass the proxy for HTTPS connections. I want it so that when someone goes to https://ip1 (the nginx server), nginx is bypassed and all traffic is forwarded to https://ip2 (the web server), with the replies flowing back through ip1 to the client. So: client to https://ip1, to https://ip2, back through https://ip1, back to the client. I do not need nginx to do this for every SSL website, just one particular website. I'm guessing I do this with NAT masquerading, but I'm not exactly sure how to do it, and whether I will need to tell nginx to ignore SSL as well. Can someone help me please? This has me stuck.
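
    If nginx is not listening on port 443, this can be done entirely in netfilter, below nginx: DNAT inbound HTTPS to ip2, and masquerade so the backend's replies return through ip1 (a sketch with placeholder addresses):

      # forward inbound HTTPS arriving at ip1 to the backend
      iptables -t nat -A PREROUTING -d <ip1> -p tcp --dport 443 -j DNAT --to-destination <ip2>:443
      # rewrite the source so ip2 answers back through this box
      iptables -t nat -A POSTROUTING -d <ip2> -p tcp --dport 443 -j MASQUERADE
      # packet forwarding must be enabled
      sysctl -w net.ipv4.ip_forward=1

    Nothing in nginx.conf needs to change, though the backend will see connections coming from ip1 rather than from the real clients, which is the price of masquerading.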

  • Managing an application across multiple servers, or PXE vs cfEngine/Chef/Puppet

    - by matt
    We have an application that is running on a few boxes (5 or so, and it will grow). The hardware is identical in all the machines, and ideally the software would be as well. I have been managing them by hand up until now and don't want to anymore (static IP addresses, disabling all unnecessary services, installing required packages...). Can anyone balance the pros and cons of the following options, or suggest something more intelligent?

    1. Individually install CentOS on all the boxes and manage the configs with chef/cfengine/puppet. This would be good, as I have wanted an excuse to learn to use one of those applications, but I don't know if this is actually the best solution.

    2. Make one box perfect and image it. Serve the image over PXE, and whenever I want to make modifications, I can just reboot the boxes from a new image. How do cluster guys normally handle things like having MAC addresses in the /etc/sysconfig/network-scripts/ifcfg* files? We use InfiniBand as well, and it also refuses to start if the hwaddr is wrong. Can these be correctly generated at boot? (See the sketch below.)

    I'm leaning towards the PXE solution, but I think monitoring with munin or nagios will be a little more complicated with it. Anyone have experience with this type of problem? All the servers have SSDs in them and are fast and powerful. Thanks, matt.
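
    On the "generated at boot" question: the kernel exposes each interface's MAC address under /sys/class/net, so an early init script can rewrite the ifcfg files before the network comes up. A minimal sketch (Ethernet only; InfiniBand ifcfg conventions may differ):

      # regenerate HWADDR lines to match the actual hardware
      for path in /sys/class/net/eth*; do
        dev=${path##*/}
        mac=$(cat "$path/address")
        cfg="/etc/sysconfig/network-scripts/ifcfg-$dev"
        [ -f "$cfg" ] && sed -i "s/^HWADDR=.*/HWADDR=$mac/" "$cfg"
      done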

  • Exploratory Question for Security Admins (/etc/passwd + PHP)

    - by JPerkSter
    Hi everyone, I've been seeing a few issues lately on a few of my servers where an account gets hacked via outdated scripts, and the attacker uploads a cPanel/FTP brute-forcing PHP script inside the account. The PHP file reads /etc/passwd to get the usernames, and then uses a passwd.txt file to try to brute-force its way into 127.0.0.1:2082. I'm trying to think of a way to block this. It doesn't POST anything, just "GET /path/phpfile.php", so I can't use mod_security to block it. I've been thinking of changing the permissions on /etc/passwd to 600, but I'm unsure how that would affect my users. I was also thinking of rate-limiting localhost connections to :2082, but I'm worried about mod_proxy being affected. Any suggestions?
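
    One containment that avoids touching /etc/passwd (which many legitimate tools read) is to fence each account's PHP into its own tree with open_basedir, so a hijacked script cannot open /etc/passwd in the first place. A sketch for a per-vhost Apache config, assuming PHP runs as mod_php (paths are examples):

      <VirtualHost *:80>
          DocumentRoot /home/user1/public_html
          php_admin_value open_basedir "/home/user1/public_html:/tmp"
      </VirtualHost>

    This does not block the socket connections to :2082 themselves; pairing it with cPHulk or similar login rate-limiting covers that side.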

  • How do I configure additional phone lines asterisk/trixbox?

    - by Matt
    I have a 4-port Digium card in there and have 4 lines running smoothly. Now we've added ANOTHER 4-port card, and have 4 more analog lines coming into the Trixbox server. It still runs the original 4 fine, but what do I need to do to add the additional 4 phone numbers/lines? I want them to act exactly as before; there's nothing special about the new lines. We just need more lines, so that when we have 4 out-of-state customers on calls, 4 more can call in without getting a busy signal. Trixbox CE 2.8.
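
    A hedged outline of the usual steps on a DAHDI-based Trixbox (older installs use Zaptel instead, where the rough equivalents are genzaptelconf and ztcfg):

      # detect the new card and regenerate the channel config
      dahdi_genconf
      service dahdi restart
      amportal restart
      # confirm all 8 channels are now visible
      asterisk -rx "dahdi show channels"

    The four new channels then need to be added to the trunk definition (or the existing channel group) in FreePBX, mirroring how the first card's lines are set up.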

  • Screen multiuser - Permission denied

    - by Zlug
    I'm trying to send input to a screen session from PHP. So far I have followed the steps explained here: Is running GNU Screen suid root the only way to make multiuser mode work? I have set "multiuser on" and "acladd www-data" in the screenrc file (or well, no: in another file that I use via the -c option, but still). My problem now is that whenever I try to access screen from PHP with

      exec('screen -S user/session -p 0 -X stuff "test"'."\n", $ret);

    I get the error:

      Cannot opendir /var/run/screen/S-user: Permission denied
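
    As the linked question discusses, attaching to another user's session requires a setuid-root screen, and the socket directory has to be traversable by the attaching user; without both, exactly this opendir error appears. The commonly cited fix (a sketch; a setuid screen has real security implications):

      chmod u+s /usr/bin/screen
      chmod 755 /var/run/screen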

  • Hang while starting several daemons [solved]

    - by Adrian Lang
    I'm running a Debian Squeeze AMD64 server. The target runlevel after boot is runlevel 2, which includes rsyslogd, cron, sshd and some other stuff, but not dovecot, postfix, apache2, etc. The system fails to reach runlevel 2, with several symptoms: the system hangs at trying to start rsyslogd; booting into runlevel 1 works, and login from the console then works; starting rsyslogd from runlevel 1 via /etc/init.d/rsyslog hangs; starting runlevel 2 with rsyslogd disabled works, but then logging in via console fails (I get the motd, and then nothing); starting sshd from runlevel 1 succeeds, but then I cannot log in via ssh. Sometimes password ssh login gives me the motd and then nothing, sometimes not even that; trying to offer a public key seems to annoy sshd enough to not talk to me any further. When rebooting from runlevel 1, the server hangs at trying to stop apache2 (which is not running, so this really should be trivial). Trying to stop apache2 when logged in at runlevel 1 hangs as well. And that's just the stuff that fails every time. RAM has been tested; dmesg shows no problems. I have no clue.

    Update: (shortened) output from rsyslogd -c4 -d called in runlevel 1:

      rsyslogd 4.6.4 startup, compatibility mode 4, module path ''
      caller requested object 'net', not found (iRet -3003)
      Requested to load module 'lmnet'
      loading module '/usr/lib/rsyslog/lmnet.so'
      module of type 2 being loaded
      conf.c requested ref for 'lmnet', refcount 1
      rsyslog runtime initialized, version 4.6.4, current users 1
      syslogd.c requested ref for 'lmnet', refcount now 2

    I can kill rsyslogd with Ctrl+C then. /var/log shows none of the configured log files, though.

    Update 2: Thanks to @DerfK, I still have no clue, but at least I've narrowed down the problem. I'm now testing with /etc/init.d/apache2 stop (without an apache2 running, of course), which hangs as well and looks like an even more obvious failure. After some testing I found out that a script with this one single line hangs:

      /usr/sbin/apache2ctl configtest > /dev/null 2>&1

    while the same line executed in an interactive shell works. I was not able to reduce this line further; every single part, the stream redirections and the command itself, is necessary to reproduce the hang. @DerfK also pointed me to strace, which gave a shallow hint about what kind of hang we have here:

      wait4(-1, ...                                            for the init scripts
      futex(0xsomepointer, FUTEX_WAIT_PRIVATE, 2, NULL, ...    for the rsyslogd / apache2 binaries called by the init scripts

    The system was installed as a Debian Lenny by my hoster in autumn 2011; I upgraded it to Squeeze immediately and kept it up to date with Squeeze, which then used to be testing. There were no big changes, though. I guess I had never tried to reboot the system before.

    Update 3: I found the problem. My /etc/nsswitch.conf specified ldap as a hosts lookup backend, and LDAP is not available at that point of the boot. Relying on dns solely fixes my boot problems.
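
    The fix from Update 3 is a one-line edit (the exact pre-edit line is an assumption; the point is that ldap must not appear in hosts before the LDAP server is reachable):

      # /etc/nsswitch.conf
      hosts: files dns        # was: hosts: files dns ldap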

  • have a bash script remotely shut down another computer on the LAN

    - by gletscher
    Hi, I want to write a bash script that, when called, shuts down another computer on the LAN, maybe using ssh? The other computer is an Ubuntu machine. I'm not sure how to send e.g. a sudo shutdown -h now command from within a bash script over ssh after logging in. I'm also not sure how to obtain the rights for the sudo command, hence how to handle the communication between the server and the client from within a bash script. Any suggestions are greatly appreciated.
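
    A minimal sketch, assuming key-based ssh login is already set up and the remote user has been granted passwordless sudo for exactly the shutdown command (e.g. a visudo line such as "gletscher ALL=(root) NOPASSWD: /sbin/shutdown"):

      #!/bin/bash
      # shut down a remote Ubuntu machine over ssh
      REMOTE="user@192.168.1.50"   # hypothetical user and host
      ssh "$REMOTE" 'sudo /sbin/shutdown -h now'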

  • Owner of uploads directory is `www-data` but this prevents FTP access via PHP scripts

    - by letseatfood
    To allow write access for Apache to my site's uploads folder, I needed to chown www-data:www-data /var/www/mysite/uploads. This allows me to delete files from the folder via unlink() in a PHP script. Unfortunately, it prevents another PHP script, which uses FTP functions, from working. I think it is because the FTP user is mike, and now that the uploads directory is owned by www-data, mike cannot access it. I added mike to the group www-data, but this does not fix the issue. Can somebody advise me on how to allow the PHP FTP functions to work in addition to file deletion using PHP's unlink() function?
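
    Group membership alone doesn't help if the directory isn't group-writable. A sketch that keeps www-data as the owner but opens up the group (the leading 2 in 2775 is the setgid bit, so new files inherit the www-data group):

      chmod 2775 /var/www/mysite/uploads
      usermod -a -G www-data mike
      # mike must log in again (and the FTP daemon may need a restart) to pick up the new group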

  • Give root password for maintenance

    - by Jevgeni Smirnov
    After entering shutdown now in a terminal, everything runs normally and then:

      All processes ended within 2 seconds...done
      INIT: Going single user
      INIT: Sending processes the TERM signal
      INIT: Sending processes the KILL signal
      Give root password for maintenance (or ...)

    If I press Ctrl+D, it shows me the Debian login screen. Shutdown through the GUI works properly.

    UPDATE 1: It seems some process hangs. Moreover, I've managed to power off the server after several retries. Recently I've installed only ntp and ntpdate, nothing more. I suppose ntp might be conflicting with iptables.
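
    Worth checking first: on Debian, shutdown now with no flag does not power off at all; it switches the system to single-user mode (runlevel 1), and "Give root password for maintenance" is exactly the single-user prompt shown above. Halting needs the -h flag:

      shutdown -h now   # halt / power off
      shutdown -r now   # reboot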

  • How to test server throughput

    - by embwbam
    I've always used ApacheBench (ab) to try to get a rough idea of how many requests per second my server can handle. I read that it was good, and it seemed to work well. Enter node.js, which is fully event-based, so it never blocks. If I run ab against a simple hello-world server, it can handle about 2500 requests per second. However, if I put a timeout in the hello-world function, so that it responds after 2 seconds, ab reports a dramatically reduced throughput: about 50/s. I'm running 100 concurrent connections with ab; if I increase the concurrency, the throughput goes up. This makes sense, because ab is basically sending out requests in batches of 100, which come back every 2 seconds: 100 requests / 2 seconds = 50 requests/second. If I increase the concurrency to about 400 or 500, ab starts to crash. I don't think I've hit node.js's limit; I think I'm hitting a wall in my operating system on the number of open file descriptors or sockets or something. Any way I can get a good guess about how many requests my server can handle? I want to make sure the test computer isn't the one causing the problem.
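
    The default per-process file-descriptor limit (commonly 1024) is a plausible wall at a few hundred concurrent connections, since every open socket costs a descriptor on both ab's side and the server's. Raising it for the benchmarking shell is a quick way to test that theory (a sketch; the URL is a placeholder):

      ulimit -n             # show the current limit
      ulimit -n 65536       # raise it for this shell (may need root or limits.conf changes)
      ab -n 20000 -c 500 http://localhost:8000/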

  • Trixbox CentOS Default GW Problem (Multi-homed server)

    - by slashp
    I'm having an issue with a CentOS Trixbox server which is dual-homed (one private-facing NIC [eth1], one internet-facing NIC [eth0]). I can't seem to get the default gateway to be set properly to our ISP's gateway via eth0. I've modified /etc/sysconfig/network to contain both a GATEWAY and a GATEWAYDEV line, and removed the GATEWAY line from /etc/sysconfig/network-scripts/ifcfg-eth1 (as well as from /etc/sysconfig/network-scripts/ifcfg-eth0). No default gateway shows up in the routing table unless it's specified in the ifcfg-eth1 file (which is both the wrong interface and the wrong gateway IP); otherwise the routing table simply does not contain a default gateway. Any ideas would be greatly appreciated! Thanks!

    EDIT: I just realized that when attempting to add the default gateway manually using the route add command, I receive an error stating:

      SIOCADDRT: Network is unreachable

    I know this error can occur when your default gateway and interface IP address are not on the same subnet; in this case, the public IP address of eth0 is on a /29.
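
    That SIOCADDRT error means the kernel has no route covering the gateway address, so the first thing to verify is eth0's address/netmask against the ISP gateway: with a /29 the netmask must be 255.255.255.248, and the gateway must fall inside that small block. A sketch of the relevant files using documentation addresses (203.0.113.1 and .2 sit in the same /29):

      # /etc/sysconfig/network
      NETWORKING=yes
      GATEWAY=203.0.113.1
      GATEWAYDEV=eth0

      # /etc/sysconfig/network-scripts/ifcfg-eth0
      DEVICE=eth0
      ONBOOT=yes
      BOOTPROTO=static
      IPADDR=203.0.113.2
      NETMASK=255.255.255.248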
