Search Results

Search found 26179 results on 1048 pages for 'linux from scratch'.

  • How can I make NetworkManager work?

    - by Yang Jy
    I am running RHEL 6 on my laptop, and lately I've been trying various network configuration tasks from the command line. Last night I removed NetworkManager with "yum remove NetworkManager" so that I could have more control over the network from the command line. The result is that I didn't manage to configure the wireless connection through wpa_supplicant, and I need a wireless connection while travelling. So I need the wireless function back as soon as possible. I typed "yum install NetworkManager" and some version installed, but no icon appears on the taskbar, and of course the network doesn't work. The package I previously removed (about 24MB) was much larger than the one I just installed (about 2MB), so I think some dependencies must be missing. How can I install all these dependencies? Please help!
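
    A minimal reinstall sketch for RHEL 6, assuming the missing piece is the applet package (the tray icon ships in NetworkManager-gnome, separately from the daemon):

        # pull the daemon plus the GNOME applet and their dependencies
        yum install NetworkManager NetworkManager-gnome
        service NetworkManager start
        chkconfig NetworkManager on    # start on boot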

  • Setting per-directory umask using ACLs

    - by Yarin
    We want to mimic the behavior of a system-wide 002 umask on a certain directory foo, in order to ensure the following result:
    1. All sub-directories created underneath foo will have 775 permissions.
    2. All files created underneath foo and its subdirectories will have 664 permissions.
    3. Both 1 and 2 will happen for files/dirs created by all users, including root, and all daemons.
    Assuming that ACL is enabled on our partition, this is the command we've come up with:

        setfacl -R -d -m mask:002 foo

    This seems to be working; I'm basically just looking for confirmation. Is this the most effective way to apply a per-directory umask with an ACL?
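
    For comparison, a sketch of the more common default-ACL form (assumes an ACL-enabled mount; note that "-d -m mask:002" sets the ACL mask entry itself to -w-, which is probably not the intent):

        # default entries override the creating process's umask: files are
        # requested as 666 and become 664, directories as 777 become 775
        setfacl -R -d -m u::rwx,g::rwx,o::rx foo
        getfacl foo    # verify the default: entries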

  • GitLab on a fresh Ubuntu 13 EC2 instance

    - by Polly
    I've spun up a fresh Amazon EC2 instance for a micro Ubuntu 13 server to be used as a GitLab server. I know the specs are a little low, but it should serve well for my purposes. It has an elastic (static) IP address for which I have created an A record, git.mydomain.com. The first thing I did to the instance was add 1GB of swap to keep it happy from a memory perspective. I then set the hostname of the box to git.mydomain.com and followed https://github.com/gitlabhq/gitlabhq/blob/6-2-stable/doc/install/installation.md to the letter. Everything seems to have worked except for the web server side of things. Doing a gitlab:check shows the following:

        Checking Environment ...
        Git configured for git user? ... yes
        Has python2? ... yes
        python2 is supported version? ... yes
        Checking Environment ... Finished

        Checking GitLab Shell ...
        GitLab Shell version >= 1.7.4 ? ... OK (1.7.4)
        Repo base directory exists? ... yes
        Repo base directory is a symlink? ... no
        Repo base owned by git:git? ... yes
        Repo base access is drwxrws---? ... yes
        update hook up-to-date? ... yes
        update hooks in repos are links: ... can't check, you have no projects
        Running /home/git/gitlab-shell/bin/check
        Check GitLab API access: /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `initialize': Connection refused - connect(2) (Errno::ECONNREFUSED)
            from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `open'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `block in connect'
            from /usr/local/lib/ruby/2.0.0/timeout.rb:52:in `timeout'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:877:in `connect'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:862:in `do_start'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:851:in `start'
            from /home/git/gitlab-shell/lib/gitlab_net.rb:62:in `get'
            from /home/git/gitlab-shell/lib/gitlab_net.rb:29:in `check'
            from /home/git/gitlab-shell/bin/check:11:in `<main>'
        gitlab-shell self-check failed
          Try fixing it:
          Make sure GitLab is running;
          Check the gitlab-shell configuration file:
          sudo -u git -H editor /home/git/gitlab-shell/config.yml
          Please fix the error above and rerun the checks.
        Checking GitLab Shell ... Finished

        Checking Sidekiq ...
        Running? ... yes
        Number of Sidekiq processes ... 1
        Checking Sidekiq ... Finished

        Checking GitLab ...
        Database config exists? ... yes
        Database is SQLite ... no
        All migrations up? ... yes
        GitLab config exists? ... yes
        GitLab config outdated? ... no
        Log directory writable? ... yes
        Tmp directory writable? ... yes
        Init script exists? ... yes
        Init script up-to-date? ... yes
        projects have namespace: ... can't check, you have no projects
        Projects have satellites? ... can't check, you have no projects
        Redis version >= 2.0.0? ... yes
        Your git bin path is "/usr/bin/git"
        Git version >= 1.7.10 ? ... yes (1.8.3)
        Checking GitLab ... Finished

    It seems like I'm very nearly there. Searching on this error I have only found advice that unfortunately hasn't helped. I'm not using any kind of SSL setup, which a lot of the posts I found were about. I have tried appending "127.0.0.1 git.mydomain.com" to /etc/hosts and giving the instance a reboot, but there was no change. My config/gitlab.yml file has host: git.mydomain.com in it, and my gitlab-shell/config.yml has gitlab_url: "http://git.mydomain.com/" in it. I'm sure I'm missing something simple, but I've been through every relevant link I can find and have had no positive results; thank you in advance for any help!
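
    A minimal diagnostic sketch, on the assumption that the ECONNREFUSED simply means nothing is answering HTTP on git.mydomain.com from the box itself (service names as in the 6-2-stable source install):

        # is anything listening on nginx's port 80 or unicorn's 8080?
        sudo netstat -tlnp | grep -E ':(80|8080)'
        # can the box reach its own GitLab URL?
        curl -v http://git.mydomain.com/
        # (re)start the app and web servers, then re-run the self-check
        sudo service gitlab restart
        sudo service nginx restart
        cd /home/git/gitlab && sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production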

  • Routing with VPN and asymmetric communication

    - by Louis
    I'm stumbling on a problem that requires your advice. Keywords: networking, route, OpenVPN.

    Problem: I have a local network with several physical servers and VMs. These machines have IPs in the range 10.10.x.x. I can access these machines from the Internet with the help of OpenVPN. These machines can:
    - access each other within the local 10.10.x.x subnet
    - access the Internet via the VPN
    - themselves be accessed (via SSH) from the Internet via the VPN.
    There is one machine, however, that behaves strangely and I don't know why. I can SSH into this machine from anywhere and I can also ping it from anywhere (including the Internet). However, from this machine (i.e. when logged into it) I cannot access the Internet or ping machines outside the local network. In other words it will not go beyond the VPN. My question is why? Here are some technical details.

    The machine's network config (running Debian 6.0.3):

        allow-hotplug eth0
        iface eth0 inet static
            address 10.10.10.200
            netmask 255.255.0.0
            network 10.10.10.0
            broadcast 10.10.10.255
            gateway 10.10.10.200

    The machine's routing table:

        Destination  Gateway       Genmask          Flags  MSS  Window  irtt  Iface
        127.0.0.1    0.0.0.0       255.255.255.255  UH     0    0       0     lo
        10.10.0.0    10.10.10.250  255.255.0.0      UG     0    0       0     eth0
        10.10.0.0    0.0.0.0       255.255.0.0      U      0    0       0     eth0
        0.0.0.0      10.10.10.250  0.0.0.0          UG     0    0       0     eth0
        0.0.0.0      10.10.10.200  0.0.0.0          UG     0    0       0     eth0

    The VPN server's network config (running Debian 6.0.3):

        # This is the local network interface
        auto eth1
        allow-hotplug eth1
        iface eth1 inet static
            address 10.10.10.250
            netmask 255.255.0.0
            broadcast 10.10.10.255
            gateway 10.10.10.250

    The VPN server's routing table:

        Destination  Gateway       Genmask        Flags  MSS  Window  irtt  Iface
        10.10.0.0    0.0.0.0       255.255.255.0  U      0    0       0     tun0
        private      0.0.0.0       255.255.255.0  U      0    0       0     eth0
        10.10.0.0    0.0.0.0       255.255.0.0    U      0    0       0     eth1
        0.0.0.0      10.10.10.250  0.0.0.0        UG     0    0       0     eth1
        0.0.0.0      private       0.0.0.0        UG     0    0       0     eth0

    net.ipv4.ip_forward = 1 on both machines, and there are no iptables rules set anywhere. Thanks in advance for any feedback.
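
    A hedged first step, given that the problem machine's routing table above carries two default routes and the second one names the machine's own address as its gateway:

        # drop the self-referencing default route so 10.10.10.250 (the VPN
        # box) is the only gateway, then retest
        route del default gw 10.10.10.200
        ping -c 3 8.8.8.8
        # if that helps, change "gateway 10.10.10.200" to 10.10.10.250 in
        # /etc/network/interfaces to make it permanent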

  • Amazon Web Services: Linux environment updated with the latest versions of MySQL, Python, Ruby and the Linux 3.2 kernel

    Amazon Web Services: Linux environment updated with the latest versions of MySQL, Python, Ruby and the Linux 3.2 kernel. Amazon Web Services (AWS) has just carried out a major update of the Amazon Linux AMI. The Linux operating system image that runs on the platform now includes the most recent versions of Tomcat, MySQL, Python, GCC, Ruby, etc. This release was built with the main goal of allowing companies to migrate to, or stay on, older versions of the tools. Organizations can thus run different major versions of applications and programming languages side by side. This allows code that relies on...

  • How do you backup your own files? [on hold]

    - by Antonis Christofides
    I'm a system administrator and I use rsnapshot to back up some servers, duplicity for some others. Both work fine, each one with advantages and disadvantages. Despite that, I am at a loss on how to back up my own private files. I'd use duplicity to automatically back up my files to a remote server, but the problem is that once in a while I must do a full backup. My emails and important files are 9GB, and I expect this to increase. Uploading through ADSL at 1Mbit would take 20 hours. Too much. rsnapshot doesn't require periodic full backups (only the first time), but it must be running on the remote server and have a means to connect to my computer; if the server is compromised (or simply if the NSA decides to use it), my own machine is also compromised. Not good. The only solution I've come up with is to use encfs, use unison to synchronize the files to a remote server, and use duplicity or rsnapshot on the remote server to back up these files. In that case, the question is whether I can sync the files on many computers; is it possible for encfs to be used with the same key on many computers? I also think that if I append one character to the unencrypted file, its encrypted encfs counterpart might change a lot, so that incrementals with duplicity would be less efficient; but that is not a big deal. Also, when I need to restore a file, finding the correct file to restore could be a pain because of filename encryption. I wonder whether there is any other possibility that I've overlooked. Maybe I'm asking too much for my personal use, and I should settle for an external disk?
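
    For what it's worth, a minimal sketch of the encfs + unison idea (host and paths hypothetical). As far as I know, encfs keeps its key material, protected by the passphrase, in .encfs6.xml inside the ciphertext directory, so a synced tree can be mounted on a second computer with the same passphrase:

        # ~/.Private holds ciphertext, ~/private is the cleartext view
        encfs ~/.Private ~/private
        # sync only the ciphertext; the remote end never sees cleartext
        unison ~/.Private ssh://backuphost//srv/backup/Private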

  • How do I start mysqld with options?

    - by xiankai
    I need to start up mysqld with command-line options as described here: http://dev.mysql.com/doc/refman/5.1/en/server-options.html#option_mysqld_skip-grant-tables I normally do sudo service mysqld start, but passing the option as sudo service mysqld start --skip-grant-tables does not seem to work. Alternatively, I have tried starting it as a daemon, sudo mysqld_safe --skip-grant-tables &, but it seems to terminate too soon:

        131101 04:59:57 mysqld_safe Logging to '/var/lib/mysql/vagrant.example.com.err'.
        131101 04:59:57 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        131101 05:00:03 mysqld_safe mysqld from pid file /var/lib/mysql/vagrant.example.com.pid ended

    My last resort seems to be specifying the option in /etc/my.cnf instead, but is there any way to do it via the command line?
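
    A sketch of the usual sequence; a second instance fighting the running service over the socket and pid file is a common reason mysqld_safe exits right away, and the .err log it names will say so:

        sudo service mysqld stop
        sudo mysqld_safe --skip-grant-tables --skip-networking &
        # if it still ends early, read the log it mentioned
        sudo tail -n 50 /var/lib/mysql/vagrant.example.com.err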

  • How do you create virtual folders from saved searches?

    - by Jérôme Radix
    I would like to have, on unix-like platforms, the same functionality as Windows 7 Library folders (aka virtual folders) in Windows Explorer. Gnome Nautilus does that kind of virtual folder through saved searches, but I want a system-wide solution, not a Gnome-wide solution. Is there a tool that creates virtual folders from the concatenation of multiple search queries (the result of multiple find commands?). The solution should index files for better performance, and you should be able to define the default folder for copy operations. I assume the solution to this kind of problem certainly uses FUSE, but I can't see a complete solution to this kind of task among FUSE applications.

  • Ubuntu 12.04 - keep getting "Server not found" for some websites

    - by android developer
    Ever since last week, I've noticed that many websites cannot be accessed, and it doesn't matter whether I use Firefox or Chromium as the web browser. An example of such a website is: http://tutorials-android.blogspot.co.il/2011/05/layout-animation-in-android.html All I get is a "Server not found" error page; sometimes after a few refreshes it works just fine. I've checked on a Windows machine that is connected to the exact same LAN, and the website is shown just fine. I've also checked the /etc/hosts file and it doesn't contain anything suspicious. What is going on? How can I fix it?
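
    "Server not found" is browser-speak for a DNS failure, so a minimal narrowing-down sketch (8.8.8.8 is just an arbitrary public resolver):

        nslookup tutorials-android.blogspot.co.il          # configured resolver
        nslookup tutorials-android.blogspot.co.il 8.8.8.8  # public resolver
        cat /etc/resolv.conf   # on 12.04 this usually points at a local dnsmasq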

  • rsync --files-from (find + cat)

    - by Edward
    I try the command:

        rsync -v --files-from=/path/to/list.lst /home/user /path/to/backup

    list.lst contains, for example:

        .gnupg/
        .pki/
        .gnome2/keyrings/
        .mozilla/firefox/*.default/bookmarkbackups/
        .mozilla/firefox/*.default/bookmarks.html
        .mozilla/firefox/*.default/*.db
        .mozilla/firefox/*.default/*.sqlite

    and I get an error on all lines containing *: "failed: No such file or directory". Can anybody help me? Or, as a variant, can I combine find `cat /path/to/list.lst` with rsync?
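
    A sketch of the find-based workaround; entries in --files-from are read as literal names (shell-style globs are not expanded there), so the list has to be pre-expanded, with paths kept relative to the source directory:

        cd /home/user
        {
          find .gnupg .pki .gnome2/keyrings
          find .mozilla/firefox -path '*.default/bookmarkbackups/*'
          find .mozilla/firefox \( -name bookmarks.html -o -name '*.db' -o -name '*.sqlite' \)
        } > /tmp/list.lst
        rsync -av --files-from=/tmp/list.lst /home/user /path/to/backup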

  • How to secure postfix to find out whether emails really come from the sender?

    - by codeworxx
    Is it possible to secure postfix in such a way that incoming emails are checked as to whether they really come from their claimed sender? It is possible to write a PHP script and choose an arbitrary sender, so that the mail looks as if it really comes from that address; what possibilities does postfix have to find out that such a mail is not actually coming from the real sender? What I have found and activated so far are the options:

        smtpd_sender_restrictions = reject_unknown_sender_domain
        unknown_address_reject_code = 554
        smtpd_client_restrictions = reject_unknown_client
        unknown_client_reject_code = 554

    Please mention whether I have missed any points!
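
    Those checks only reject senders whose domain doesn't resolve; a forged but existing address passes them. SPF/DKIM policy checks are the usual next step. A sketch of an SPF hook, assuming a policyd-spf package is installed (exact package name and binary path vary by distro):

        # main.cf (sketch)
        smtpd_recipient_restrictions =
            permit_mynetworks,
            reject_unauth_destination,
            check_policy_service unix:private/policyd-spf

        # master.cf (sketch)
        policyd-spf  unix  -  n  n  -  0  spawn
            user=policyd-spf argv=/usr/bin/policyd-spf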

  • Command to determine whether ZooKeeper Server is Leader or Follower

    - by utrecht
    Introduction: A ZooKeeper quorum consisting of three ZooKeeper servers has been created. The zoo.cfg located on all three ZooKeeper servers looks as follows:

        maxClientCnxns=50
        # The number of milliseconds of each tick
        tickTime=2000
        # The number of ticks that the initial
        # synchronization phase can take
        initLimit=10
        # The number of ticks that can pass between
        # sending a request and getting an acknowledgement
        syncLimit=5
        # the directory where the snapshot is stored.
        dataDir=/var/lib/zookeeper
        # the port at which the clients will connect
        clientPort=2181
        server.1=ip1:2888:3888
        server.2=ip2:2888:3888
        server.3=ip3:2888:3888

    It is clear that one of the three ZooKeeper servers will become the Leader and the others Followers. If the Leader ZooKeeper server is shut down, the Leader election will start again. The aim is to check whether another ZooKeeper server becomes the Leader once the current Leader has been shut down.

    Question: Which command needs to be issued to check whether a ZooKeeper server is a Leader or a Follower?
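
    A sketch of the two usual checks (the zkServer.sh path is an assumption and depends on how ZooKeeper was installed):

        # the four-letter-word 'stat' command on the client port reports the mode
        echo stat | nc ip1 2181 | grep Mode
        # or ask the bundled control script on the server itself
        /usr/lib/zookeeper/bin/zkServer.sh status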

  • Redirect all outgoing traffic on port 80 to a different IP on the same server

    - by Spacedust
    I have multiple IP addresses on the same server and I would like to redirect all outgoing traffic on port 80 to a different IP on the same server, so as not to always use the main IP. Currently I'm using this:

        /sbin/iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source IP

    It works well, but it redirects everything, and when I make backups over SSH the backup fails. System: CentOS 5.8 64-bit.
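
    A sketch of a narrower rule; restricting the SNAT match to TCP destination port 80 should leave SSH (port 22) on the main IP ("IP" stays a placeholder, as above):

        /sbin/iptables -t nat -A POSTROUTING -o eth0 -p tcp --dport 80 -j SNAT --to-source IP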

  • moving files and directories between two machines, via a third, preserving permissions and usernames

    - by Jarmund
    The situation is as follows:
    - Machine A has a file repository accessible via rsync.
    - Machine B needs the above-mentioned files with all permissions and ownerships intact (including groups etc.).
    - Machine C has access to both A and B, but has a completely different set of users.
    Normally I would just rsync everything over directly between A and B, but due to severely limited bandwidth at the moment I need something different, as rsync times out after building the list of the 430 files (49MB uncompressed; can be compressed down to ~7MB). What I've tried so far: rsync everything over from A to C, tar it, copy the tarball over, and then untar it; however, this messes up the ownership and/or the permissions. To rsync from A to C, I run this command:

        rsync --numeric-ids --password-file=/root/rsync_pwd_file -oaPvu rsync://[email protected]/portal_2/ ./portal_2/

    From the looks of things, the files do end up on C with the correct ownerships/permissions/flags/everything (not 100% sure, though; are there any more switches I can throw in there? Did I miss something?). Copying the tarball over is simple enough (slow as a one-legged turtle due to the bandwidth, but it checksums out alright). What I'm unsure of are the flags and switches for creating and extracting the tarball, so could someone please provide the full commands for creating a tarball from /root/portal_2 on machine C (with everything intact) and extracting the tarball into /var/ex/portal_2 on machine B? Also, are there any other approaches worth mentioning that could allow me to perform this? I have root access to A and C, whereas I only have rsync access to B. PS: I'm running rsync v2.6.9 on machine B, and unfortunately I do not have the opportunity to upgrade to v3.
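
    A sketch of the two tar legs under the stated constraints; --numeric-owner keeps raw uid/gid numbers instead of translating names through machine C's unrelated user set, and ownership can only be restored when the extracting side runs as root:

        # on machine C
        tar --numeric-owner -cpzf portal_2.tar.gz -C /root portal_2
        # on machine B, after copying the tarball over
        tar --numeric-owner -xpzf portal_2.tar.gz -C /var/ex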

  • Unable to set initcwnd on a Hetzner server

    - by Sergi
    We just ordered a bunch of Hetzner EX40SSD servers with the minimal Debian install image that they provide, and everything is just fine except that, looking at tcpdumps from various locations while fine-tuning the network, the initcwnd parameter seems to be stuck at 6 no matter how we change it. By default, Debian 3.2 kernels should have that setting at 10, so it's pretty strange. Is it possible that the NIC driver or a custom setting in the Hetzner Debian image is limiting this parameter? Even if we set it to 4, like the old kernel default, it doesn't work. Any ideas would be much appreciated! Does anyone know whether the NIC drivers provided by default by Debian have some kind of limitation? In a long thread at http://www.webhostingtalk.com/showthread.php?t=1200617&highlight=hetzner they talk about a page http://wiki.hetzner.de/index.php/Installation_des_r8168-Treibers/en where Hetzner states that the included Realtek r8168 driver is not working properly, but nowhere do they say that the initcwnd could be affected. Tomorrow I will try to install a CentOS image and see if Debian is the problem... The last resort would be to install a custom Debian image, but that is a pain in the ass! Thanks!
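
    For reference, a sketch of how initcwnd is normally set and verified per route with iproute2 (gateway and interface are placeholders):

        ip route show                                   # note the default route
        ip route change default via <gw> dev eth0 initcwnd 10
        ip route show                                   # should now list 'initcwnd 10'
        # then capture a handshake and count the first flight of segments
        tcpdump -ni eth0 port 80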

  • How to find the proper codecs for Xubuntu?

    - by smwikipedia
    I have just installed Xubuntu, and I feel that getting it to play an mp3 is like killing myself twice. I tried to play one with Exaile, the player bundled with Xubuntu, but it says I need to install some MPEG codecs. I found a long list of dependencies with sudo apt-cache depends. How do I install them? One by one?! Many thanks.
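
    A sketch, assuming a stock Xubuntu of that era: apt-get resolves the dependency chain by itself, and the restricted-extras metapackage pulls in the common mp3/MPEG codecs (Exaile plays through GStreamer):

        sudo apt-get update
        sudo apt-get install xubuntu-restricted-extras
        # or just the GStreamer mp3 decoder
        sudo apt-get install gstreamer0.10-plugins-ugly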

  • VSFTPD: Cannot figure this thing out...

    - by A Wizard Did It
    Alright, I've been giving this the best that I can, reading through various tutorials on Google, but I cannot seem to get vsftpd running the way I want. For a short while I had it working with one account, but then that stopped and I haven't been able to get it to work since. I've since reformatted and reinstalled Ubuntu 10.04 LTS, ran apt-get install vsftpd, and that's where I am now. I'd really appreciate it if anyone could help me understand exactly how this is supposed to work: how do I add FTP accounts and set their home directory to something like /var/www/public_html?
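
    A minimal local-user sketch (settings go in /etc/vsftpd.conf; the username is hypothetical). Accounts are ordinary system users, and local_root decides where they land after login:

        # /etc/vsftpd.conf essentials
        local_enable=YES
        write_enable=YES
        chroot_local_user=YES
        local_root=/var/www/public_html

        # create an account and restart the daemon
        sudo useradd -d /var/www/public_html -s /bin/sh ftpuser
        sudo passwd ftpuser
        sudo service vsftpd restart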

  • remastering Knoppix: which version and what method should I use?

    - by Stan_
    I'd like to remaster Knoppix (mainly add and configure some software). I downloaded the newest version (KNOPPIX-ADRIANE_V6.2CD-2009-11-18-EN.iso), but later I read that it has some other window manager as the default, not KDE, and I want to have KDE on my remaster. Is KDE included on that ISO but not the default, or is it not included at all? If it's not there, which Knoppix version should I get for my remaster? My other question: I've seen some remastering scripts (with menus, etc.) on the Knoppix forums; do any of these work with the version I have? Or with the version I should have if I need KDE?

  • IP to IP forwarding with iptables [centos]

    - by FunkyChicken
    I have 2 servers: server 1 with IP 1.1.1.1 and server 2 with IP 2.2.2.2. My domain example.com points to 1.1.1.1 at the moment, but very soon I'm going to switch to IP 2.2.2.2. I have already set up a low TTL for example.com, but some people will still hit the old IP after I change the domain's address. Both machines run CentOS 5.8 with iptables, and nginx as the web server. I want to forward all traffic that still hits server 1.1.1.1 to 2.2.2.2 so there won't be any downtime. I found this tutorial: http://www.debuntu.org/how-to-redirecting-network-traffic-a-new-ip-using-iptables but I cannot seem to get it working. I have enabled IP forwarding:

        echo "1" > /proc/sys/net/ipv4/ip_forward

    After that I ran these 2 commands:

        /sbin/iptables -t nat -A PREROUTING -s 1.1.1.1 -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2:80
        /sbin/iptables -t nat -A POSTROUTING -j MASQUERADE

    But when I load http://1.1.1.1 in my browser, I still get the pages hosted on 1.1.1.1 and not the content from 2.2.2.2. What am I doing wrong?
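
    One plausible culprit, offered as a sketch: the PREROUTING rule above matches on the source address (-s 1.1.1.1), while the traffic to be redirected is addressed to 1.1.1.1; matching on the destination is the usual form, with the MASQUERADE narrowed to the forwarded connections:

        /sbin/iptables -t nat -A PREROUTING -d 1.1.1.1 -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2:80
        /sbin/iptables -t nat -A POSTROUTING -d 2.2.2.2 -p tcp --dport 80 -j MASQUERADE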

  • clocksource tsc unstable

    - by amorfis
    OK, now I have a real server fault ;) Some time after booting (about one minute) my server hangs. All I can do is a hard reset. Then, after restart, in /var/log/kern.log I find:

        Jul 29 22:38:57 leonidas kernel: [   90.729598] longhaul: Failed to set requested frequency!
        Jul 29 22:38:57 leonidas kernel: [   90.731252] longhaul: Enabling "Ignore Revision ID" option.
        Jul 29 22:38:57 leonidas kernel: [   91.201461] longhaul: Failed to set requested frequency!
        Jul 29 22:38:57 leonidas kernel: [   91.201482] longhaul: Disabling ACPI C3 support.
        Jul 29 22:38:57 leonidas kernel: [   91.204230] longhaul: Disabling "Ignore Revision ID" option.
        Jul 29 22:38:58 leonidas kernel: [   91.416133] longhaul: Failed to set requested frequency!
        Jul 29 22:38:58 leonidas kernel: [   91.416152] longhaul: Enabling "Ignore Revision ID" option.
        Jul 29 22:38:58 leonidas kernel: [   91.960048] Clocksource tsc unstable (delta = -105611479 ns)

    I found some resources on the net that said to change the clocksource or to disable ACPI. I tried disabling ACPI but it didn't help (though I noticed it took longer before hanging). I can't change the clock to hpet, because my system doesn't have one. Output of cat /sys/devices/system/clocksource/clocksource0/available_clocksource:

        acpi_pm jiffies tsc

    My system is Ubuntu Server on VIA Epia hardware.
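
    A sketch of switching to the acpi_pm clocksource, which is present in the available_clocksource list above (runtime first, then persistently via the kernel command line):

        echo acpi_pm > /sys/devices/system/clocksource/clocksource0/current_clocksource
        # to persist, append to the kernel line in the bootloader config:
        #   clocksource=acpi_pm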

  • Running out of LowMem with Ubuntu PAE Kernel and 32GB of RAM

    - by magneticMonster
    I'm running a Java data import process on a 32-bit Ubuntu 10 PAE kernel machine. After running the process for a while, the oom-killer zaps my Java process. After some Googling and digging through docs, it looks like the system is running out of LowMem. I started the process for the third time and am watching free -lm show me

        Low:  464  386  77

    with the free value (77MB) slowly decreasing. Why am I running out of lowmem and how do I increase it? Some details:

        $ cat /proc/sys/vm/lowmem_reserve_ratio
        256 256 32

        $ free -lm
                     total       used       free     shared    buffers     cached
        Mem:         32086      24611       7475          0          0      24012
        Low:           464        407         57
        High:        31621      24204       7417
        -/+ buffers/cache:        598      31487
        Swap:         2047          0       2047
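
    For context, a sketch of watching the low zone directly; on a 32-bit highmem kernel the kernel's own allocations must fit in the small low zone (464MB here), which is why 32GB of HighMem doesn't help and why the usual fix is a 64-bit kernel rather than a tunable:

        grep -iE 'low(total|free)' /proc/meminfo   # zone totals in kB
        cat /proc/buddyinfo                        # free pages per zone
        watch -n 5 free -lm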

  • pdflush hanging on Amazon EBS drives when using multi-GB files - any workaround?

    - by rhh
    Hello. When I run gunzip on a 1.7GB file (which generates an 8GB file) on an EBS volume, pdflush freezes after gunzip runs and the CPU hangs indefinitely at 100% I/O wait. Here's the output of 'ps aux | grep pdflush'; note the D status:

        root  87  0.0  0.0  0  0  ?  D  06:18  0:00  pdflush
        root  88  0.0  0.0  0  0  ?  D  06:18  0:00  pdflush

    The only solution is to kill the pdflush processes, and even they don't die immediately. This problem is repeatable and happens with new instances. I'm running 2xlarge instances and I have far more RAM free than is being used (i.e. /proc/meminfo shows 20+GB MemFree). Has anyone found a workaround to this problem in the past? Thanks for any thoughts. Robert
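
    A hedged mitigation sketch: shrinking the dirty-page thresholds makes the kernel flush smaller batches sooner, a common workaround when a single multi-GB stream stalls writeback on a slow volume (values illustrative):

        sudo sysctl -w vm.dirty_ratio=5
        sudo sysctl -w vm.dirty_background_ratio=2
        # persist in /etc/sysctl.conf if it helps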
