Search Results

Search found 24646 results on 986 pages for 'linux vserver'.


  • How to fill in the network line in the ubuntu interfaces config file?

    - by matnagel
    I have to configure the network interface of an Ubuntu Hardy server. The hosting provider told me that this is the network data for the machine:

      IP range:  111.111.200.74 to 111.111.200.78
      Netmask:   255.255.255.248
      Broadcast: 111.111.200.79
      Gateway:   111.111.200.73
      Subnet:    111.111.200.72/29

    I am only using the first IP address. I will update the /etc/hosts file with 111.111.200.74, but I am still unsure what the /etc/network/interfaces file should look like. This is my plan:

      auto lo
      iface lo inet loopback

      auto eth0
      iface eth0 inet static
          address 111.111.200.74
          netmask 255.255.255.248
          network 111.111.200.???
          broadcast 111.111.200.79
          gateway 111.111.200.73

    As you can see, I don't know how to build the network line. How do I calculate the value for the network line, and what is the result? (I changed the first two octets of the subnet; they are not "111.111" in the real setup.)
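    A quick aside (not part of the original question): the network value is simply the bitwise AND of the address and the netmask, which is why the provider's "Subnet: 111.111.200.72/29" line already contains the answer. A minimal check in a shell, using only the last octets:

      $ echo $(( 74 & 248 ))
      72

    So for this /29 the missing line would read "network 111.111.200.72".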

    Read the article

  • Get an object by its objectGUID using ldapsearch

    - by orsogufo
    If I have the objectGUID attribute as returned by the ldapsearch command, how can I search the whole directory for an object with that objectGUID? For example, if I search for a user and request its objectGUID, I get the following:

      ldapsearch -x -D $MyDn -W -h $Host -b "dc=x,dc=y" "(mail=something)" objectGUID

      # 7f435ae312a0d8197605, p, Externals, x.y
      dn: CN=7f435ae312a0d8197605,OU=p,DC=x,DC=y
      objectGUID:: b+bSezFkKkWDmbIZiyE5rg==

    Starting from the value b+bSezFkKkWDmbIZiyE5rg==, how can I build a filter string that finds that object?
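    A hedged sketch of the usual approach (assuming GNU base64 and xxd are available, and that the directory accepts the RFC 4515 hex-escaped form for binary values, which Active Directory generally does): decode the base64 value back to its 16 raw bytes and escape each byte as \XX in the filter:

      GUID_B64='b+bSezFkKkWDmbIZiyE5rg=='
      FILTER=$(printf '%s' "$GUID_B64" | base64 -d | xxd -p | tr -d '\n' | sed 's/../\\&/g')
      ldapsearch -x -D "$MyDn" -W -h "$Host" -b "dc=x,dc=y" "(objectGUID=$FILTER)" dn

    The rest of the command mirrors the one in the question; only the filter changes.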

    Read the article

  • cannot print from flash-player plugin

    - by eleven81
    I am running flash-player plug-in 10.0.32.10 inside Firefox on a SLED 11 machine. Firefox can print to the network printer without issue from File > Print. However, I cannot get the flash-player plugin to print at all. The print dialog comes up and asks which printer and which pages to use, but when I click Print nothing happens, as if I had pressed Cancel. Is this a known issue?

    Read the article

  • terminal tools and logs for debugging TCP issues

    - by kellogs
    I have a server which I am testing for functionality (not load, not stress) with tsung: 50 users per second, 100 users in total. Judging from the tsung graphs (tsung is the testing framework), the TCP connections (red line) drop to 0 while the commenced user sessions (green line) do not. The server logs give me nothing to go on, so I suspect some kind of TCP issue. Could that be the case? Where should I look further on the server, and which logs or tools should I be using? Only SSH is available, no GUI.

      root@XMPP:~# cat /etc/lsb-release
      DISTRIB_ID=Ubuntu
      DISTRIB_RELEASE=11.10
      DISTRIB_CODENAME=oneiric
      DISTRIB_DESCRIPTION="Ubuntu 11.10"

    Thank you
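    A hedged sketch of the kernel-side counters and captures that are usually worth checking over SSH in a case like this (the interface name and the XMPP port are assumptions):

      ss -s                                                  # socket summary: how many TCP connections actually exist
      netstat -s | grep -iE 'listen|overflow|reset|retrans'  # drops, accept-queue overflows, resets, retransmits
      dmesg | tail -50                                       # SYN-flood or conntrack "table full" messages, if any
      sudo tcpdump -ni eth0 -w tsung.pcap port 5222          # capture during a test run, inspect later in wireshark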

    Read the article

  • Ubuntu 9.10 X Stuck in restart loop - I think...

    - by widgisoft
    Trying out Ubuntu, and the installation went fine. I upgraded to the proprietary nVidia drivers, but on restart I get a login prompt and the screen flashes really fast, almost as if the X server is trying to start and failing. I can type when the screen isn't mid-"flash", but it's so fast and random that it's hard to even type a login name without it missing some characters, which makes typing a password (i.e. not being able to see which characters made it) very hard. I can log back into the live CD and alter my settings, but I can't even find out how to stop X from starting on boot; it looks like they've moved everything around :-p I'd like to: stop X from crashing and going insane (if it is actually the X server), and know how to stop X from starting on bootup. It looks like interactive boot is also off by default now. Update: a temporary workaround seems to be enabling ssh and just connecting to the box over the network; ssh works fine :-p Cheers, Chris
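    A hedged sketch of how the display manager can be stopped from the working SSH session on 9.10 (GDM is an Upstart job in that release; the service name is an assumption if a different login manager is installed):

      sudo service gdm stop            # or, with Upstart directly: sudo stop gdm
      sudo nano /etc/X11/xorg.conf     # fix or remove the nvidia configuration here
      sudo service gdm start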

    Read the article

  • freeradius maximum session time problem

    - by haw3d
    Hello, I'm using OpenVPN with FreeRADIUS to control user accounts. For enforcing a maximum session time per user, FreeRADIUS has sqlcounter.conf, but that only takes effect once a connection has already disconnected; it cannot tear down a live connection. To control account time dynamically I need another script, and that script should run every time a connection is established. Is there any way to fire a custom trigger or script when a connection is established, or any other way to control session time dynamically?
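    A hedged sketch (the script path and its logic are assumptions): OpenVPN itself can run a hook on every new connection via the client-connect directive, which would be one place to start a per-session timer or check the remaining account time; a non-zero exit code from the hook refuses the connection.

      # in the OpenVPN server config
      script-security 2
      client-connect /etc/openvpn/on-connect.sh

      # /etc/openvpn/on-connect.sh -- OpenVPN exports the client name as $common_name
      #!/bin/sh
      echo "$(date) connect $common_name" >> /var/log/openvpn-sessions.log
      # look up the remaining time for $common_name here; exit non-zero to reject the client
      exit 0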

    Read the article

  • Easy shorewall question: allow IPs to bypass DNAT

    - by llazzaro
    Hello, on my home network I have a transparent proxy. This is the rule that forwards all port-80 traffic to my Squid 3.1 server in the DMZ:

      DNAT    loc:!10.0.0.126    dmz:172.16.0.198:3128    tcp    80    -    !172.16.0.198

    Now I need to add more IPs that should bypass the transparent proxy. I tried loc:!10.0.0.134,!10.0.0.126 but it didn't work (I also tried similar forms like [ip0,ip1]). I tried to Google for the answer but couldn't find it (probably not searching for the right keywords), and I tried to read the docs, but they are really long (and the indexes don't help me). Thanks!
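    A hedged sketch, from memory of the Shorewall exclusion syntax (worth verifying against the shorewall-exclusion man page): the "!" is written once and the excluded addresses follow as a comma-separated list, so the rule would look something like:

      DNAT    loc:!10.0.0.126,10.0.0.134    dmz:172.16.0.198:3128    tcp    80    -    !172.16.0.198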

    Read the article

  • How to prioritize openvpn traffic?

    - by aditsu
    I have an openvpn server with one network interface. VPN traffic is extremely slow. I tried to do traffic control with this configuration (currently):

      qdisc del dev eth0 root
      qdisc add dev eth0 root handle 1: htb default 12
      class add dev eth0 parent 1: classid 1:1 htb rate 900mbit
      #vpn
      class add dev eth0 parent 1:1 classid 1:10 htb rate 1500kbit ceil 3000kbit prio 1
      #local net
      class add dev eth0 parent 1:1 classid 1:11 htb rate 10mbit ceil 900mbit prio 2
      #other
      class add dev eth0 parent 1:1 classid 1:12 htb rate 500kbit ceil 1000kbit prio 2
      filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 1194 0xffff flowid 1:10
      filter add dev eth0 protocol ip parent 1:0 prio 2 u32 match ip dst 192.168.10.0/24 flowid 1:11
      qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
      qdisc add dev eth0 parent 1:11 handle 11: sfq perturb 10
      qdisc add dev eth0 parent 1:12 handle 12: sfq perturb 10

    But it's still extremely slow. I have an imaps connection that keeps transferring data continuously (I successfully limited its rate), but with openvpn I can't seem to get more than about 100kbit/s. The internet connection speed is about 3mbit/s (symmetric). What could be the problem? Does the sport filter work for udp?
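    A hedged aside on the last question: the u32 "match ip sport" selector matches a fixed offset in the packet, so it catches UDP as well as TCP source ports (as long as the IP header carries no options). A more explicit and robust variant, sketched here on the assumption that OpenVPN runs on UDP port 1194, is to mark the packets in iptables and match the mark in tc:

      iptables -t mangle -A POSTROUTING -o eth0 -p udp --sport 1194 -j MARK --set-mark 10
      tc filter add dev eth0 parent 1:0 protocol ip prio 1 handle 10 fw flowid 1:10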

    Read the article

  • How to grant read/write to a specific user in any existing or future subdirectory of a given directory? [migrated]

    - by Samuel Rossille
    I'm a complete newbie in system administration and I'm doing this as a hobby. I host my own git repository on a VPS. Let's say my user is john. I'm using the ssh protocol to access my git repository, so my URL is something like ssh://[email protected]/path/to/git/myrepo/. Root is the owner of everything under /path/to/git. I'm attempting to give john read/write access to everything under /path/to/git/myrepo. I've tried both chmod and setfacl to control access, but both fail the same way: they apply rights recursively (with the right options) to all the currently existing subdirectories of /path/to/git/myrepo, but as soon as a new directory is created, my user cannot write in the new directory. I know that there are hooks in git that would allow me to reapply the rights after each commit, but I'm starting to think that I'm going the wrong way because this seems too complicated for a very basic purpose. Q: How should I set up the permissions to give john read/write access to anything under /path/to/git/myrepo and make it resilient to changes in the tree structure? Q2: If I should take a step back and change the general approach, please tell me.
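    A hedged sketch of the usual answer to the inheritance part: in addition to the recursive ACL on what already exists, a default ACL on the directories makes newly created subdirectories inherit the entry (assuming the filesystem is mounted with ACL support):

      setfacl -R    -m u:john:rwX /path/to/git/myrepo   # existing files and directories
      setfacl -R -d -m u:john:rwX /path/to/git/myrepo   # default ACL, inherited by anything created later
      getfacl /path/to/git/myrepo                       # verify: look for the "default:user:john" lines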

    Read the article

  • How to restore backed-up email files in qmail

    - by Maysam
    I have a problem restoring some old backed-up mail files on a mail server that uses qmail. When I copy a restored email file into the cur/ directory, the message count shown next to the inbox increases, but when I click on the inbox I don't see the newly copied email; I can only see the old emails. I also deleted the maildirsize and courierimapuiddb files and they were automatically created again, but it didn't help and I still cannot see the email in my inbox. Is there something I am missing? How can I restore the backed-up email files? Note that when I copy email files into the .sent-mail/cur directory they all show up in my Sent box, but that doesn't happen for inbox files in cur/.
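    A hedged sketch of one thing to try (paths are placeholders): deliver the restored messages through new/ with fresh, unique Maildir filenames instead of copying them straight into cur/, so the IMAP server notices and re-indexes them:

      cd /path/to/Maildir
      i=0
      for f in /path/to/backup/cur/*; do
          i=$((i+1))
          cp "$f" "new/$(date +%s).R${i}.$(hostname)"   # time.unique.host, the usual Maildir naming pattern
      done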

    Read the article

  • Simple one-way synchronisation of user password list between servers

    - by Renaud Bompuis
    Using a RedHat-derivative distro (CentOS), I'd like to keep the list of regular users (UID 500 and over), plus the group and shadow files, pushed to a backup server. The sync is only one-way, from the main server to the backup server. I don't really want to have to deal with LDAP or NIS. All I need is a simple script that can be run nightly to keep the backup server updated. The main server can SSH into the backup system. Any suggestion? Edit: Thanks for the suggestions so far, but I think I didn't make myself clear enough. I'm only looking at synchronising normal users whose UID is 500 or above. System/service users (with UID below 500) may be different on the two systems, so you can't just sync the whole files, I'm afraid.
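    A hedged sketch of the extraction side (the host name and paths are placeholders; merging the entries into the backup server's own files is deliberately left out here and is the part to handle carefully):

      # run nightly from root's crontab on the main server
      awk -F: '$3 >= 500 && $3 < 65534' /etc/passwd > /tmp/passwd.regular
      awk -F: '$3 >= 500 && $3 < 65534' /etc/group  > /tmp/group.regular    # filters on GID; adjust if needed
      awk -F: 'NR==FNR {u[$1]; next} $1 in u' /tmp/passwd.regular /etc/shadow > /tmp/shadow.regular
      scp -q /tmp/passwd.regular /tmp/group.regular /tmp/shadow.regular backup:/root/user-sync/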

    Read the article

  • Makefile fails to install file correctly, installing HPL

    - by zarose
    I started installing HPL a while ago and had a related question. I've been following along with this guide from Intel, and I figure this warrants a whole new question. When I try to build the archive, the output seems fine until the end, where it gives an error:

      make[2]: Entering directory `/hpl-2.0/src/auxil/intel64'
      Makefile:47: Make.inc: No such file or directory
      make[2]: *** No rule to make target `Make.inc'.  Stop.
      make[2]: Leaving directory `/hpl-2.0/src/auxil/intel64'
      make[1]: *** [build_src] Error 2
      make[1]: Leaving directory `/hpl-2.0'
      make: *** [build] Error 2

    Going to the directory /hpl-2.0/src/auxil/intel64 shows a file, "Make.inc", but it's highlighted in red and the white text blinks. Is there a way to create that file manually? What do I need to do to get the makefile to create it for me?
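    A hedged aside (the arch-file name below is an assumption): a red, blinking entry in ls output is typically a dangling symlink, and as far as I recall HPL's Make.inc in each build subdirectory is a symlink back to the top-level Make.<arch> file. Checking and re-pointing it would look like:

      cd /hpl-2.0/src/auxil/intel64
      ls -l Make.inc                          # shows where the dangling link points
      ln -sf /hpl-2.0/Make.intel64 Make.inc   # assumes the top-level file is named Make.intel64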

    Read the article

  • IP Blacklists and suspicious inbound and outbound traffic

    - by Pantelis Sopasakis
    I administer a web server and recently we had our IP banned (!) by our host after they received an abuse notification e-mail. In particular, our server is allegedly involved in spam attacks over HTTP. The content of the abuse report was not very informative - for example, the IP addresses our server supposedly attacked are not included - so I started a wireshark session to check for suspicious TCP/HTTP traffic while trying to locate possible security holes on the system. (Let me note that the machine runs Debian.) Here is an example of such a request:

      Source: 89.74.188.233
      Destination: 12.34.56.78 // my ip
      Protocol: HTTP
      Info: GET 'http://www.media.apniworld.com/image.php?type=hv' HTTP/1.0

    I manually blacklisted this host (as well as some others), blocking them with iptables, but I can't keep doing that manually all day long... I'm looking for an automated way to block such IPs based on:

      - statistical analysis, pattern recognition or other AI-based analysis (though I'm reluctant to trust such a solution, if one exists)
      - public blacklists

    Using a DNSBL I actually found out that 89.74.188.233 is blacklisted. However, other strongly suspicious IPs, like 93.199.112.126 (i.e. http://www.pornstarnetwork.com/account/signin), unfortunately were not blacklisted! What I would like to do is automatically connect my firewall to a DNSBL (or some other blacklist database) and block all traffic towards blacklisted IPs, or somehow have my local blacklist updated automatically.
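    A hedged sketch of one common way to wire a published blocklist into the firewall (the list URL is a placeholder, and ipset must be installed); re-running it from cron keeps the set current:

      ipset create blacklist hash:ip -exist
      curl -s https://example.org/blocklist.txt | grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' | \
          while read -r ip; do ipset add blacklist "$ip" -exist; done
      iptables -I INPUT  -m set --match-set blacklist src -j DROP
      iptables -I OUTPUT -m set --match-set blacklist dst -j DROP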

    Read the article

  • Using Gentoo's `ebegin`, `eend` etc under Ubuntu

    - by Marcus Downing
    We're quite fond of the style of the ebegin, eend, eerror, eindent etc. commands used by Portage and other tools on Gentoo. The green-yellow-red bullets and standard layout make for very quick spotting of errors in what would otherwise be very grey command-line output.

      #!/bin/sh
      source /etc/init.d/functions.sh
      ebegin "Copying data"
      rsync ....
      eend $?

    This produces output similar to:

      * Copying data...                                  [ OK ]

    As a result we're using these commands in some of our common shell scripts, which is a problem for the people using Ubuntu and other Linuxes (Linuces? Linuxen? other distros). On Gentoo these functions are provided by OpenRC and imported from the functions.sh file (whose exact location seems to vary slightly). Is there a simple way of getting these commands on Ubuntu? In theory we could replace them all with dull echos, but we'd rather not.
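    A hedged sketch of a drop-in shim, in case no packaged equivalent is available: a few POSIX sh functions that mimic the look (the exact spacing and column position are guesses, not OpenRC's real implementation):

      GOOD=$(printf '\033[32;01m'); BAD=$(printf '\033[31;01m'); NORMAL=$(printf '\033[0m')
      ebegin() { printf ' %s*%s %s ...' "$GOOD" "$NORMAL" "$*"; }
      eend() {
          if [ "${1:-0}" -eq 0 ]; then
              printf '  %s[ ok ]%s\n' "$GOOD" "$NORMAL"
          else
              printf '  %s[ !! ]%s\n' "$BAD" "$NORMAL"
          fi
          return "${1:-0}"
      }

    Debian and Ubuntu also ship their own styled helpers (log_begin_msg/log_end_msg in /lib/lsb/init-functions), which may be close enough if matching Gentoo's exact look is not required.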

    Read the article

  • Extremely high mysqld CPU usage with no active queries

    - by RadarNyan
    I have a VPS running Ubuntu 12.04 LTS with a LEMP stack, set up following the guide from the Linode Library (since I'm using a Linode), and everything worked fine until now. I don't know what's wrong, but my CPU usage has been climbing for about a week. Today things got really bad - I saw 74% CPU usage, so I went to check and found that mysqld was taking most of it (somewhere around 30% ~ 80%). So I did some searching, tried disabling InnoDB, restarting mysql, resetting ntp / the system clock (isn't that bug supposed to have happened more than a year ago?!) and rebooting my VPS; nothing helped. Even with the mysql processlist empty, mysqld CPU usage stays very high. I don't know what I missed and have totally no idea; any advice would be appreciated. Thanks in advance.

    Update: I got these from running "strace mysqld":

      write(2, "InnoDB: Unable to lock ./ibdata1"..., 44) = 44
      write(2, "InnoDB: Check that you do not al"..., 115) = 115
      select(0, NULL, NULL, NULL, {1, 0}) = 0 (Timeout)
      fcntl64(3, F_SETLK64, {type=F_WRLCK, whence=SEEK_SET, start=0, len=0}, 0xbfa496f8) = -1 EAGAIN (Resource temporarily unavailable)

    hum... I did try to disable InnoDB and it didn't fix this problem. Any idea?

    Update 2:

      # ps -e | grep mysqld
      13099 ?        00:00:20 mysqld

    Then with "strace -p 13099" the following lines appear repeatedly:

      fcntl64(12, F_GETFL) = 0x2 (flags O_RDWR)
      fcntl64(12, F_SETFL, O_RDWR|O_NONBLOCK) = 0
      accept(12, {sa_family=AF_FILE, NULL}, [2]) = 14
      fcntl64(12, F_SETFL, O_RDWR) = 0
      getsockname(14, {sa_family=AF_FILE, path="/var/run/mysqld/mysqld.sock"}, [30]) = 0
      fcntl64(14, F_SETFL, O_RDONLY) = 0
      fcntl64(14, F_GETFL) = 0x2 (flags O_RDWR)
      setsockopt(14, SOL_SOCKET, SO_RCVTIMEO, "\36\0\0\0\0\0\0\0", 8) = 0
      setsockopt(14, SOL_SOCKET, SO_SNDTIMEO, "<\0\0\0\0\0\0\0", 8) = 0
      fcntl64(14, F_SETFL, O_RDWR|O_NONBLOCK) = 0
      setsockopt(14, SOL_IP, IP_TOS, [8], 4) = -1 EOPNOTSUPP (Operation not supported)
      futex(0xb786a584, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0xb786a580, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
      futex(0xb7869998, FUTEX_WAKE_PRIVATE, 1) = 1
      poll([{fd=10, events=POLLIN}, {fd=12, events=POLLIN}], 2, -1) = 1 ([{fd=12, revents=POLLIN}])

    er... now I totally don't get it x_x Help!
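    A hedged aside on the first strace: "InnoDB: Unable to lock ./ibdata1" usually means another mysqld already holds the data directory, so running "strace mysqld" by hand starts a second, competing server rather than tracing the busy one (attaching with "strace -p <pid>", as in Update 2, is the right approach). Two quick checks, with the Ubuntu default datadir path assumed:

      ps aux | grep '[m]ysqld'           # how many server processes are actually running?
      sudo lsof /var/lib/mysql/ibdata1   # which PID holds the InnoDB lock?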

    Read the article

  • htaccess rewrite different folder url, two index files

    - by Andrew
    I've been searching for a while now and haven't found anything that comes close to what I'm trying to accomplish. Right now my URLs look like this: www.website.com/something, which is handled by the root /index.php. Now I have created plugins inside folders: /plugins/PLUGINNAME/index.php. I want to be able to have URLs like www.website.com/plugins/PLUGINNAME/anything/iwant/here that are all handled by /plugins/PLUGINNAME/index.php and not the root index.php. Currently www.website.com/plugins/PLUGINNAME/ works, but anything after /PLUGINNAME/xxx falls back to the root /index.php.
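    A hedged sketch of the usual front-controller rules, placed in /plugins/PLUGINNAME/.htaccess (PLUGINNAME is the placeholder from the question; the existing root rules may also need an exception for /plugins/):

      RewriteEngine On
      RewriteBase /plugins/PLUGINNAME/
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^ index.php [L]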

    Read the article

  • Screen multiuser - Permission denied

    - by Zlug
    I'm trying to send input to a screen session from PHP. So far I have followed the steps explained in "Is running GNU Screen suid root the only way to make multiuser mode work?", and I have set "multiuser on" and "acladd www-data" in the screenrc file (well, actually in another file that I pass with the -c option, but still). My problem now is that whenever I try to access screen from PHP with

      exec('screen -S user/session -p 0 -X stuff "test"'."\n", $ret);

    I get the error:

      Cannot opendir /var/run/screen/S-user: Permission denied
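    A hedged sketch of the usual checks for this error (paths are the Debian/Ubuntu defaults): the per-user session directory under /var/run/screen is mode 700, so another account such as www-data only gets past it when screen itself runs setuid root, which is what the linked question is about:

      ls -l  /usr/bin/screen           # multiuser access from another account needs the setuid bit (-rwsr-xr-x)
      ls -ld /var/run/screen/S-user    # 700 and owned by "user" -- hence the opendir failure for www-data
      sudo chmod u+s /usr/bin/screen   # only if the security trade-off of a suid screen is acceptable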

    Read the article

  • Copying a large directory tree locally? cp or rsync?

    - by Rory
    I have to copy a large directory tree, about 1.8 TB. It's all local. Out of habit I'd use rsync, however I wonder if there's much point, and whether I should rather use cp. I'm worried about permissions and uid/gid, since they have to be preserved in the copy (I know rsync does this), as well as things like symlinks. The destination is empty, so I don't have to worry about conditionally updating some files. It's all local disk access, so I don't have to worry about ssh or a network. The reason I'd be tempted away from rsync is that rsync might do more than I need. rsync checksums files. I don't need that, and am concerned that it might take longer than cp. So what do you reckon, rsync or cp?
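    A hedged comparison of the two invocations typically used for a copy like this (flags are GNU cp and rsync; --info=progress2 needs rsync 3.1 or later):

      cp -a /src/. /dest/                       # -a = -dR --preserve=all: permissions, ownership, timestamps, symlinks
      rsync -aH --info=progress2 /src/ /dest/   # -a like cp -a, plus -H to preserve hard links; restartable if interrupted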

    Read the article

  • Kerberos: Running an app with a parameter using krenew

    - by Mihai Todor
    I need to run an application with krenew, but the application also needs to receive a parameter via the command line, and I need to send its output to a file. From the documentation, it looks like this should do the trick:

      krenew -t -- sh -c 'compute-job > /afs/local/data/output'

    but, unfortunately, when I run the command below:

      krenew -s -- sh -c './my_app config.xml > results/test.txt &'

    the application just dies after a while, and I can see from the output of ps aux that krenew is not running along with my_app. I am not sure what the parameter -t does, and as far as I can see, if I run krenew -s ./my_app, it works properly. I hope someone can clarify this.
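    A hedged guess at what's going on (worth verifying against the kstart man page): the trailing "&" inside sh -c backgrounds my_app and lets the sh exit immediately, and krenew exits along with the command it is watching, leaving my_app with nobody renewing its tickets. Keeping the job in the foreground of that sh, and backgrounding the whole krenew invocation instead, would look like:

      krenew -t -- sh -c './my_app config.xml > results/test.txt 2>&1' &

    As far as I recall, -t asks krenew to run aklog after each renewal so the AFS tokens stay valid, which is probably why the documented example uses it for an /afs output path.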

    Read the article

  • Problems getting Cron to run processes tagged @reboot for LDAP users

    - by Ben Torell
    I have a lab of computers running Ubuntu 9.10. Most of the people who log on to these computers are users from an LDAP server, not local users. We discovered that if an LDAP user has a crontab with an entry marked to run @reboot, the command will not actually run when the machine reboots. I'm pretty sure this is because the cron daemon starts before networking is fully up, so the crontabs of LDAP users aren't loaded and run or checked for @reboot. In fact, cron will ignore LDAP users' crontabs entirely after a reboot until that user runs crontab -e again and saves, or until the cron daemon is restarted. We were able to fix one part of this problem by adding the following line to /etc/crontab:

      @reboot root /bin/sleep 45 && /etc/init.d/cron restart

    Thus, when cron starts back up after a reboot, it waits for networking to come up, then restarts the cron daemon. That fixes the problem of crontabs not being read at all for LDAP users. However, since it's the cron daemon being restarted and not the computer, @reboot entries are still ignored. Is there a way for a user to make a command run when the daemon restarts, rather than on a reboot? Or is there a better solution to this overall problem? Thanks.
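    A hedged sketch of one workaround (the script path and the ".on-boot" convention are inventions for illustration, and it assumes the LDAP users' home directories live under /home): since the root @reboot job already runs after networking is up, it could also execute an opt-in per-user boot script instead of relying on the users' own @reboot lines:

      #!/bin/sh
      # /usr/local/sbin/run-user-boot-jobs -- called from the existing root @reboot entry, e.g.:
      #   @reboot root /bin/sleep 45 && /etc/init.d/cron restart && /usr/local/sbin/run-user-boot-jobs
      for home in /home/*; do
          user=$(basename "$home")
          [ -x "$home/.on-boot" ] && su - "$user" -c "$home/.on-boot" &
      done
      wait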

    Read the article

  • UAE and the mysteries of unreachable websites

    - by 0plus1
    I write here because I'm really lost; please stay with me because it's not easy to explain. A company asked me to set up a private server. Now, I'm a programmer, so I got a solution with technical support and cPanel, which helped me set up everything, and it works smoothly. I'm by no means a professional sysadmin; I have a fair knowledge of server configuration, but this problem is way over my head, and apparently over the heads of most sysadmins. I really hope that here I'll find someone with enough experience to help me or at least give me more insight. This company I'm consulting for operates in the UAE (United Arab Emirates), and from there the server is almost unreachable. It started with the nameservers not resolving in the UAE; after a week that sorted itself out, and now the site is indeed reachable, but it takes almost 2 minutes to load a web page with one line of text. Emails time out. The domain currently parked there was bought specifically for testing; the main one that was supposed to go there was, after a catastrophic week, transferred to a shared hosting solution in the UK, and from there it works like a charm. After doing some research I discovered that I'm not alone in this: there are several reports of webmasters discovering that their website is not reachable inside the UAE, and mind you, this has nothing to do with the state-wide block of questionable sites, because in that case an error message appears. This seems to be related to the infrastructure of the UAE, which apparently reroutes everything through their own "fake" internet. Apparently new servers with their own IP are not recognized (yet?) by the UAE infrastructure, while shared hosting solutions, since they host tons of other websites, are more likely to already be known to the UAE network. Now my questions are: 1) Does someone have a real explanation for this? The only thing I can think of is that the server is on a new IP that is not yet recognized by the UAE, but that doesn't explain why it loads (even if after 2 minutes). I have no help from within the UAE, as the only "experts" there are questionable companies that simply try to sell their own services. 2) If there really is some kind of block of new servers, is it possible to know in advance whether a server is reachable from within the UAE? Currently this is not a nameserver problem, as even accessing the server by its IP results in a 2-minute wait. 3) Could the problem lie somewhere else? Are there some tests I can perform? I'm not physically in the UAE, but I can ask the people there, or use TeamViewer. Could it be some misconfiguration on the server (mind that the site works EVERYWHERE else in the world)? Thank you for ANY kind of help.
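    A hedged sketch of tests that could be run from a machine inside the UAE (over TeamViewer), assuming a Linux host there and with SERVER_IP standing in for the real address; comparing the results against the same commands run from outside the UAE would show where the path degrades:

      mtr --report --report-cycles 50 SERVER_IP    # per-hop latency and loss along the route
      ping -M do -s 1472 SERVER_IP                 # does a full-size (1500-byte) packet get through? (path-MTU check)
      curl -o /dev/null -s -w 'connect: %{time_connect}s  first byte: %{time_starttransfer}s\n' http://SERVER_IP/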

    Read the article

  • Triple monitor setting in Linux with USB-HDMI adapter

    - by Oscar Carballal
    I'm trying to set up a triple-monitor desktop at my office using Fedora 17, but it seems impossible. Let me explain the setup:

      - Laptop ASUS K53SD with 2 graphics cards, Intel and nVidia (laptop screen controlled by the Intel card)
      - 24" Full HD monitor connected to the HDMI output (controlled by the Intel card)
      - 23" Full HD monitor connected to a USB-HDMI adapter (via framebuffer in /dev/fb2, apparently)
      - VGA output (not used), controlled by the nVidia card

    First of all, the USB-HDMI adapter works perfectly: it gives me a green screen (which means the communication is OK), and I can make it work if I set up a single-monitor configuration via framebuffer in Xorg. Here is the page where I got the instructions: http://plugable.com/2011/12/23/usb-graphics-and-linux Now I'm trying to set up the two main monitors (laptop and 24") with the intel driver and the 23" with the framebuffer, but the most successful configuration I get is the two main monitors working and the third disconnected. Do you have any idea what I can do to make this work? Here are my xrandr output and my Xorg conf:

      -> xrandr
      Screen 0: minimum 320 x 200, current 3286 x 1080, maximum 8192 x 8192
      LVDS1 connected 1366x768+0+0 (normal left inverted right x axis y axis) 344mm x 193mm
         1366x768       60.0*+
         1024x768       60.0
         800x600        60.3     56.2
         640x480        59.9
      VGA2 disconnected (normal left inverted right x axis y axis)
      HDMI1 connected 1920x1080+1366+0 (normal left inverted right x axis y axis) 531mm x 299mm
         1920x1080      60.0*+   50.0     25.0     30.0
         1680x1050      59.9
         1680x945       60.0
         1400x1050      74.9     59.9
         1600x900       60.0
         1280x1024      75.0     60.0
         1440x900       75.0     59.9
         1280x960       60.0
         1366x768       60.0
         1360x768       60.0
         1280x800       74.9     59.9
         1152x864       75.0
         1280x768       74.9     60.0
         1280x720       50.0     60.0
         1440x576       25.0
         1024x768       75.1     70.1     60.0
         1440x480       30.0
         1024x576       60.0
         832x624        74.6
         800x600        72.2     75.0     60.3     56.2
         720x576        50.0
         848x480        60.0
         720x480        59.9
         640x480        72.8     75.0     66.7     60.0     59.9
         720x400        70.1
      DP1 disconnected (normal left inverted right x axis y axis)
         1920x1080_60.00   60.0

    The Xorg file:

      # Xorg configuration file for using a tri-head display
      Section "ServerLayout"
          Identifier "Layout0"
          Screen 0 "HDMI" 0 0
          Screen 1 "USB" RightOf "HDMI"
          Option "Xinerama" "on"
      EndSection

      ########### MONITORS ################
      Section "Monitor"
          Identifier "USB1"
          VendorName "Unknown"
          ModelName  "Acer 24as"
          Option     "DPMS"
      EndSection

      Section "Monitor"
          Identifier "HDMI1"
          VendorName "Unknown"
          ModelName  "Acer 23SH"
          Option     "DPMS"
      EndSection

      ########### DEVICES ##################
      Section "Device"
          Identifier "Device 0"
          Driver     "intel"
          BoardName  "GeForce"
          BusID      "PCI:0:02:0"
          Screen     0
      EndSection

      Section "Device"
          Identifier "USB Device 0"
          Driver     "fbdev"
          Option     "fbdev" "/dev/fb2"
          Option     "ShadowFB" "off"
      EndSection

      ############## SCREENS ######################
      Section "Screen"
          Identifier "HDMI"
          Device     "Device 0"
          Monitor    "HDMI1"
          DefaultDepth 24
          SubSection "Display"
              Depth 24
          EndSubSection
      EndSection

      Section "Screen"
          Identifier "USB"
          Device     "USB Device 0"
          Monitor    "USB1"
          DefaultDepth 24
          SubSection "Display"
              Depth 24
          EndSubSection
      EndSection
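    A hedged first diagnostic step (the log path is the X default): the server log usually states explicitly why a configured screen was dropped, which would narrow down whether the fbdev screen is even being attempted:

      grep -iE 'fbdev|fb2|\(EE\)|\(WW\)' /var/log/Xorg.0.log | less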

    Read the article

  • how to disable these logs on the screen?

    - by user62367
    I'm using Fedora 14 with this script: http://pastebin.com/raw.php?i=jUvcfugw. It mounts an anonymous Samba share (and checks it every 5 seconds), and it works fine. But when I shut down my Fedora box, I can see the lines this script prints, many times, about ~50x on the screen. How can I disable these lines during shutdown? I (and other people) don't want to see them for the ~5 seconds they stay up. Thank you!

    Read the article

  • View hidden contents of usb device

    - by Srikanth Suresh
    I have a USB drive with the following contents according to ls -lah:

      total 8.0K
      drwx------ 1 srikanth srikanth 4.0K May 27 22:54 .
      drwxr-xr-x 4 root     root     4.0K May 28 19:37 ..
      -rw------- 2 srikanth srikanth    0 May 27 22:52 Files.az3w

    Viewing the properties of the folder gives me this information: 90.4 MB used and 16.1 GB free. There is data on this pen drive that I am currently unable to view, and it is sensitive. After reading about ways of hiding contents on a USB drive, I think there is a hidden partition here that I can't access. How should I proceed to view the contents without damaging the files already present?
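    A hedged sketch of read-only checks that would show whether there really is a second partition or an unusual filesystem (the device name /dev/sdb is a guess; confirm it first with lsblk or dmesg):

      lsblk                         # list block devices and their partitions
      sudo fdisk -l /dev/sdb        # partition table of the stick (read-only)
      sudo file -s /dev/sdb1        # filesystem signature of the first partition
      du -sh .[!.]* * 2>/dev/null   # run inside the mount point: any hidden dot-files accounting for the 90 MB?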

    Read the article
