Search Results

Search found 24646 results on 986 pages for 'linux vserver'.


  • Explanation of nodev and nosuid in fstab

    - by Ivan Kovacevic
    I see those two options constantly suggested on the web when someone describes how to mount a tmpfs or ramfs, often together with noexec, but I'm specifically interested in nodev and nosuid. I hate blindly repeating what somebody suggested without real understanding, and since I only find copy/paste instructions on the net regarding this, I'm asking here. This is from the documentation:

        nodev - Don't interpret block special devices on the filesystem.
        nosuid - Block the operation of suid and sgid bits.

    But I would like a practical explanation of what could happen if I leave those two out. Let's say I have configured a tmpfs or ramfs (without these two options set) that is readable and writable by a specific non-root user on the system. What can that user do to harm the system? (Excluding the case of consuming all available system memory with ramfs.)
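    A minimal sketch of the two attack avenues these options close off (the mount point /mnt/scratch is hypothetical; note that planting either file requires root once, which is why the mount options matter as defense in depth):

        # suid: a root-owned setuid shell planted on the mount; without
        # nosuid the kernel honors the bit, handing the user a root shell
        cp /bin/sh /mnt/scratch/sh && chmod u+s /mnt/scratch/sh   # done as root
        /mnt/scratch/sh -p                                        # done as the user

        # dev: a device node for the raw system disk; without nodev the
        # user can read (or write) the disk, bypassing all file permissions
        mknod /mnt/scratch/disk b 8 0                             # done as root
        dd if=/mnt/scratch/disk of=stolen.img bs=1M               # done as the user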


  • Unable to start Apache after changes to rc.conf and resolv.conf

    - by shupru
    I had a working configuration this morning with the following simple /etc/rc.conf:

        ifconfig_rl0="DHCP"
        ifconfig_xl="inet 192.168.1.11 netmask 255.255.255."
        defaultrouter="192.168.1.1"

    I added the following lines:

        firewall_enable="YES"
        firewall_type="SIMPLE"
        firewall_logging="YES"
        sshd_enable="YES"
        apache_enable="YES"
        mysql_enable="YES"

    My httpd.conf includes:

        NameVirtualHost 192.168.1.11
        <VirtualHost 192.168.1.11>
        ...
        </VirtualHost>

    Now the Apache and SSH servers are down. I changed rc.conf back to the last working configuration and there is still no SSH or Apache:

        apachectl start    # --> /usr/local/sbin/apachectl start: httpd could not be started
        apachectl status   # --> Looking up localhost
                           #     Making http connection to localhost
                           #     Alert!: Unable to connect to remote host.
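    A hedged first diagnosis, assuming this is FreeBSD (rc.conf, /usr/local/sbin/apachectl) and that the newly enabled ipfw firewall is what now blocks connections even after rolling rc.conf back:

        ipfw list                        # is a restrictive ruleset still loaded?
        ipfw -f flush                    # flush it as a test (from the console, not over SSH!)
        apachectl configtest             # does httpd's config still parse?
        tail /var/log/httpd-error.log    # default error log for Apache from ports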


  • VSFTPD: Cannot figure this thing out...

    - by A Wizard Did It
    Alright, I've been giving this the best that I can, reading through various tutorials on Google, but I cannot seem to get vsftpd running the way I want. For a short while I had it working with one account, but then that stopped and I haven't been able to get it to work since. I've since reformatted and reinstalled Ubuntu 10.04 LTS. I used apt-get install vsftpd and that's where I am now. I'd really appreciate it if anyone could help me understand exactly how this is supposed to work: how do I add FTP accounts and set their home directory to something like /var/www/public_html?
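    A hedged sketch of the local-users setup, one of several ways to run vsftpd (all directives are standard vsftpd.conf options):

        # /etc/vsftpd.conf
        local_enable=YES         # FTP logins map to system accounts
        write_enable=YES
        chroot_local_user=YES    # jail each user in their home directory

        # create an FTP-only account whose home is the web root
        sudo useradd -d /var/www/public_html -s /usr/sbin/nologin ftpuser
        sudo passwd ftpuser
        sudo service vsftpd restart
        # if PAM rejects the login, the nologin shell may need adding to /etc/shells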


  • How to find proper codec for Xubuntu?

    - by smwikipedia
    I have just installed Xubuntu, and getting it to play an mp3 feels like killing myself twice. I tried to play one with Exaile, the player bundled with Xubuntu, but it says I need to install some MPEG codecs. I found so many dependencies with sudo apt-cache depends. How do I install them? One by one?! Many thanks.
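    A hedged pointer: apt-get resolves dependencies itself, so one package pulls in the rest (names as in the Ubuntu repositories of that era):

        # MP3 support via the GStreamer plugins Exaile uses
        sudo apt-get install gstreamer0.10-plugins-ugly
        # or the broader codec/font bundle
        sudo apt-get install xubuntu-restricted-extras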


  • Fastest reliable way to open the terminal?

    - by meder
    I had my Super_L key (the left Windows key) bound to gnome-terminal, but for whatever reason, ever since upgrading from Ubuntu 8.10 (Intrepid) to 9.04, the key binding seems to be broken. It was very handy because I could throw open a terminal with one key (sorry, but Alt-F2 and typing gnome-terminal isn't practical for me). Or perhaps the upgrade reset all the keybindings? I remember using xev and some GUI tool that was akin to the Win32 registry editor. Anyway, I'm curious what you all use to open a terminal.
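    A hedged way to restore the binding from a shell on GNOME 2 (the registry-editor-like GUI is gconf-editor; these are the standard Metacity gconf keys, and Super_L works as a bare binding only if Metacity accepts it, as it evidently did before the upgrade):

        gconftool-2 --type string --set \
            /apps/metacity/keybinding_commands/command_1 "gnome-terminal"
        gconftool-2 --type string --set \
            /apps/metacity/global_keybindings/run_command_1 "Super_L"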


  • Apache worker is crashing at 3,000 users

    - by user1618606
    I activated the Apache worker MPM on my VPS and I'm having problems: the website crashes when 3,000 users are accessing it. I'm using http://whos.amung.us/stats/2jzwlvbhvpft/ as a counter. My worker configuration:

        KeepAlive On
        MaxKeepAliveRequests 0
        KeepAliveTimeout 1
        <IfModule mpm_worker_module>
            ServerLimit 20000
            StartServer 8000
            MinSpareThreads 10400
            MaxSpareThreads 14200
            ThreadLimit 5
            ThreadsPerChild 5
            MaxClients 20000
            MaxRequestsPerChild 0
        </IfModule>

    The VPS runs Debian 64-bit (LAMP) with 14 GB of memory and 24 GHz of CPU. What can I do to get better performance?
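    A hedged rework, based on stock worker-MPM semantics rather than this VPS specifically: StartServer is not a valid directive (StartServers is), ThreadLimit must be >= ThreadsPerChild, and MaxClients must equal ServerLimit x ThreadsPerChild (the posted values imply 20000 x 5 = 100000, not 20000). Something more conventional for a few thousand concurrent clients might look like:

        <IfModule mpm_worker_module>
            ServerLimit          128
            StartServers           8
            ThreadLimit           64
            ThreadsPerChild       64
            MinSpareThreads       64
            MaxSpareThreads      256
            MaxClients          8192    # 128 servers x 64 threads
            MaxRequestsPerChild 10000
        </IfModule>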


  • Route eth0 to internet traffic and eth1 to local traffic

    - by Romain Caire
    How can I route all my internet traffic through eth0 (everything except 192.168.1.0/24) and route my local traffic through eth1 (192.168.1.0/24)? Here is my attempt:

        # Flush ALL THE THINGS.
        ip route flush table main
        # Restore the main table. I flushed it because OpenVPN does weird things to it.
        ip route add 127.0.0.0/8 via 127.0.0.1 dev lo
        ip route add 0.0.0.0/0 via 164.67.195.1
        ip route add 192.168.1.0/24 via 192.168.1.1
        ip route flush cache
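    A hedged variant that pins each route to its interface (assuming 164.67.195.1 is the gateway reachable via eth0, and eth1 sits directly on 192.168.1.0/24, which makes a link-scope route more appropriate than one via a gateway):

        ip route add 192.168.1.0/24 dev eth1 scope link
        ip route add default via 164.67.195.1 dev eth0
        ip route flush cache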


  • Nothing is written in php5-fpm.log

    - by jaypabs
    I have two servers, one running Ubuntu 12.04 and one running Ubuntu 14.04. On the new 14.04 server I enabled the php-fpm log in /etc/php5/fpm/php-fpm.conf:

        error_log = /var/log/php5-fpm.log

    I noticed that most of what gets logged on Ubuntu 12.04 is not written on 14.04. For example, if I restart php5-fpm on Ubuntu 12.04, a restart entry is written to the log; this does not happen on 14.04. Other entries I miss on 14.04 are the following:

        [23-Aug-2014 16:23:03] NOTICE: [pool web42] child 118098 exited with code 0 after 12983.480191 seconds from start
        [23-Aug-2014 16:23:03] NOTICE: [pool web42] child 147653 started
        [23-Aug-2014 17:27:31] WARNING: [pool web8] child 76743, script '/var/www/mysite.com/web/wp-comments-post.php' (request: "POST /wp-comments-post.php") executing too slow (12.923022 sec), logging

    I really want this kind of log so I will know how long a slow script has been executing. Does anyone know if there are other settings in Ubuntu 14.04 that I need to change in addition to /etc/php5/fpm/php-fpm.conf?
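    A hedged checklist of the stock php-fpm directives that gate exactly these entries; the NOTICE lines depend on the global log_level, and the "executing too slow" WARNING only appears for pools with a slowlog configured:

        ; /etc/php5/fpm/php-fpm.conf (global section)
        error_log = /var/log/php5-fpm.log
        log_level = notice

        ; per pool, e.g. /etc/php5/fpm/pool.d/www.conf
        slowlog = /var/log/php5-fpm.slow.log
        request_slowlog_timeout = 10s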


  • Tar dereference only 1 level

    - by Bart van Heukelom
    I use the following pseudo-script to create a tar of my installed software:

        mkdir tmp
        ln -s /path/to/app1/bin tmp/app1
        ln -s /and/path/going/to/the-app-2 tmp/app2
        tar -c --dereference -f apps.tar tmp

    I need the --dereference option here to follow the links I just made in tmp. The reason I make the links in the first place is to store the directories with a different name in the archive than they have on the filesystem. Until now it has worked fine. However, I now have the situation that /path/to/app1 also contains links, and those I don't want to follow. Is this possible with some changes to the tar command? Or do I need to completely switch around the way I build the archive?
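    A hedged alternative that drops the symlink layer entirely: GNU tar's --transform renames members as they are archived, so the real paths can be stored under the tmp/appN names with no dereferencing at all (paths are the question's own examples; tar strips the leading '/' from member names before the transform applies):

        tar -c -f apps.tar \
            --transform 's|^path/to/app1/bin|tmp/app1|' \
            --transform 's|^and/path/going/to/the-app-2|tmp/app2|' \
            /path/to/app1/bin /and/path/going/to/the-app-2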


  • How can I set deadline as the I/O scheduler for USB Flash devices by using udev rules?

    - by ????
    I have set CFQ as the default I/O scheduler. I often get bad performance when I write data to a flash device; this is resolved if I use deadline as the I/O scheduler for USB flash devices. I can't always change the scheduler manually, right? I think writing udev rules is a good idea. Can someone please write the rules for me? I want this behavior: when I plug in a USB device, detect the type of the device. If it is a portable USB hard disk, do nothing (I think a device with more than one partition is always a portable hard disk). If it is a USB flash device, set deadline as its scheduler.
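    A hedged sketch: flash stick versus portable hard disk is not reliably detectable from udev properties, so this rule simply targets every USB-attached whole disk (save as e.g. /etc/udev/rules.d/60-usb-scheduler.rules):

        ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sd[a-z]", ENV{ID_BUS}=="usb", ATTR{queue/scheduler}="deadline"

    Implementing the partition-count heuristic would need a RUN+= helper script, since rules alone cannot count a disk's partitions.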


  • Co-worker told me Ubuntu Server LTS is not built for production environments

    - by Sandro Dzneladze
    I was confronted with a very vague statement today, but unfortunately I had no clue how to respond to or analyse it. A co-worker told me Ubuntu Server LTS is not built for production environments. What is there that makes it not built for production? (Obviously no answer followed, so this is my question to you.) Is it packages? Ease of use? Supported systems? Stability? Support? Popularity?


  • Merging and sorting multiple files with "sort"

    - by NewbiZ
    Hello, I have a bunch of text log files in the following format:

        ID          (17 characters)
        Timestamp   (14 characters, YYYYmmddHHMMSS, e.g. "20060210100040" -> 2006/02/10 10:00:40)
        Random data (? characters)
        end of line

    The files are already sorted by timestamp. I need to produce one log file with all the logs from the multiple log files, sorted by timestamp. Note that the log files are really huge, around 3-4 GB each (and there are dozens of them). I tried the following command:

        sort -s -m -t '|' -k1n,1n +17 -o data_sort.txt *.TXT

    Here is how I ended up with it:

        -s      : don't bother with tie results
        -m      : merge all log files
        -t '|'  : there is no | in my logs, so the whole line should be field 1
        -k1n,1n : sort on the first field as a numeric value
        +17     : the timestamp starts at index 17
        -o      : output file

    Actually... it fails miserably. The output file data_sort.txt is just the concatenation of all the files, not sorted at all :( I would greatly appreciate it if anyone could provide any help on this problem! Thanks
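    A hedged correction: the obsolete +17 syntax skips 17 fields, not 17 characters, and -k1n,1n sorts numerically on the ID rather than the timestamp. With a 17-character ID, the timestamp occupies characters 18-31 of field 1, so (keeping -t '|' so the whole line stays one field):

        sort -m -s -t '|' -k 1.18,1.31 -o data_sort.txt *.TXT

    No n flag is needed: fixed-width digit strings order the same lexically as numerically.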


  • PHP 5 will not work on CentOS 6 for me

    - by LaserBeak
    I just created a new install of CentOS 6.0 64-bit on a virtual machine, running under VMware Workstation 8 on a Windows host, then:

        yum install php
        service httpd restart

    When I try to run an HTML file from the /var/www/html dir which just has <?php phpinfo(); ?> in it, or point the browser at localhost, nothing comes up. I also opened httpd.conf and added:

        AddType application/x-httpd-php .php
        AddType application/x-httpd-php-source .phps

    I tried reinstalling, installing php-common, then php, etc., to no avail. Otherwise it's the typical LAMP setup. Installed:

        php.x86_64         5.3.2-6.el6_0.1  @updates
        php-cli.x86_64     5.3.2-6.el6_0.1  @updates
        php-common.x86_64  5.3.2-6.el6_0.1  @updates

    I have yet to update to CentOS 6.1. PHP 5 probably comes installed by default with CentOS, and maybe I messed it up by running yum install php?
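    One hedged observation: mod_php only handles files ending in .php by default, so phpinfo() inside an .html file is served as plain text or not at all. A quick probe:

        rpm -q php httpd                                      # both installed?
        service httpd restart                                 # restart so mod_php is active
        echo '<?php phpinfo(); ?>' > /var/www/html/info.php
        # then browse to http://localhost/info.php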


  • Rails /tmp/cache/assets permissions issue using Debian virtual machine hosted on OS X Lion

    - by Jim
    I am running Parallels Desktop 7 on OS X Lion. I have a VM with Debian installed, and inside that VM I set up a Rails development environment. I am using Parallels Tools to share my OS X home directory with the VM; the goal here is to run the Rails server on the VM but host the files on OS X (so they are automatically backed up, and so I can use tools like TextMate to develop with). Everything seems to work with the shared directory: my Debian user can read, write, and execute files.

    However, when I cloned a recent Rails project from Git, I got an error message when it tried to compile the CSS assets. My symptoms are exactly the same as in this question: http://stackoverflow.com/questions/7556774/rails-sprocket-error-compiling-css-assest-chown-issue

    I believe this is permissions-based, but it is really weird. My entire Rails project directory has permissions set to 777 and my Debian user owns it. If I navigate into /tmp/cache/assets, those permissions are the same. However, the three-character directories Rails is creating (DCE, DA1, D05, etc.) are being created without write permissions! If I refresh the Rails page a few times, about 4 or 5 (with Rails creating new three-character directories every time), eventually it will create one of the directories with the proper 777 permissions and everything will work. This persists until I make a change to the CSS files and it has to recompile.

    Does anyone have any idea what might be going on here? I can't fathom why it is creating temp directories with incorrect permissions, or why after a few refreshes the good permissions kick in and it works. It definitely seems to be an issue with the share, since if I move the project into a different directory on the VM, it seems to work fine. On the OS X side, I've given the shared folder 777 permissions as well, but no dice. Any ideas?

    Update: I've found that the number of times I need to refresh before it works is not random; it depends on how many assets are being compiled. For example, if I edit one of my CSS files and there are four CSS files in the app/assets/stylesheets directory, I have to refresh four times before the app finally works without the "operation not permitted" error.


  • Force ntpd to make changes in smaller steps

    - by David Wolever
    The NTP documentation says:

        Under ordinary conditions, ntpd adjusts the clock in small steps so that the
        timescale is effectively continuous and without discontinuities.
        - http://doc.ntp.org/4.1.0/ntpd.htm

    However, this is not at all what I have noticed in practice. If I manually change the system time backwards or forwards 5 or 10 seconds and then start ntpd, I notice that it adjusts the clock in one shot. For example, with this code:

        #!/usr/bin/env python
        import time

        last = time.time()
        while True:
            time.sleep(1)
            print time.time() - last
            last = time.time()

    When I first change the time, I'll notice something like:

        1.00194311142
        8.29711604118
        1.0010509491

    Then when I start ntpd, I'll see something like:

        1.00194311142
        -8.117301941
        1.0010509491

    Is there any way to force ntpd to make its adjustments in smaller steps?
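    A hedged pointer at the relevant knobs: ntpd slews only offsets below its step threshold (128 ms by default) and steps anything larger, which matches the one-shot jumps observed above. Two standard ways to change that:

        # /etc/ntp.conf -- never step, always slew (large offsets then take
        # a long time to amortize at the ~0.5 ms/s slew rate)
        tinker step 0

        # or raise the step threshold to 600 s at startup
        ntpd -x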


  • clocksource tsc unstable

    - by amorfis
    Ok, now I have a real server fault ;) Some time after booting (about one minute), my server hangs; all I can do is a hard reset. After the restart, in /var/log/kern.log I find:

        Jul 29 22:38:57 leonidas kernel: [   90.729598] longhaul: Failed to set requested frequency!
        Jul 29 22:38:57 leonidas kernel: [   90.731252] longhaul: Enabling "Ignore Revision ID" option.
        Jul 29 22:38:57 leonidas kernel: [   91.201461] longhaul: Failed to set requested frequency!
        Jul 29 22:38:57 leonidas kernel: [   91.201482] longhaul: Disabling ACPI C3 support.
        Jul 29 22:38:57 leonidas kernel: [   91.204230] longhaul: Disabling "Ignore Revision ID" option.
        Jul 29 22:38:58 leonidas kernel: [   91.416133] longhaul: Failed to set requested frequency!
        Jul 29 22:38:58 leonidas kernel: [   91.416152] longhaul: Enabling "Ignore Revision ID" option.
        Jul 29 22:38:58 leonidas kernel: [   91.960048] Clocksource tsc unstable (delta = -105611479 ns)

    I found some resources on the net saying to change the clocksource or to disable ACPI. I tried disabling ACPI but it didn't help (though I noticed it took longer before hanging). I can't change the clocksource to hpet, because my system doesn't have one. The output of cat /sys/devices/system/clocksource/clocksource0/available_clocksource is:

        acpi_pm jiffies tsc

    My system is Ubuntu Server on VIA Epia hardware.
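    A hedged workaround given those messages: pin a clocksource the box actually offers, and keep longhaul (the VIA frequency-scaling module logging the failures) from loading at all:

        # kernel command line in GRUB
        clocksource=acpi_pm

        # /etc/modprobe.d/blacklist.conf
        blacklist longhaul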


  • Subversion error: Repository moved permanently; please relocate

    - by Bart S.
    I've set up Subversion and Apache on my server. If I browse to it through my web browser it works fine (http://svn.host.com/reposname). However, if I do a checkout on my machine I get the following error:

        Command: Checkout from http://svn.host.com/reposname, revision HEAD, Fully recursive, Externals included
        Error: Repository moved permanently to 'http://svn.host.com/reposname/'; please relocate

    I checked Apache's error log, but it doesn't say anything. (It does now; see edit.) My repositories are stored under /var/www/svn/repos/ and my website is stored under /var/www/vhosts/x/... Here's the conf file for the subdomain:

        <Location />
            DAV svn
            SVNParentPath /var/www/svn/repos/
            AuthType Basic
            AuthName "Authorization Realm"
            AuthUserFile /var/www/svn/auth/svn.htpasswd
            Require valid-user
        </Location>

    Authentication works fine. Does anyone know what might be causing this?

    Edit: I restarted Apache (again) and tried it again, and now it gives me an error message, but it doesn't really help. Does anyone have an idea what it means?

        [Wed Mar 31 23:41:55 2010] [error] [client my.ip.he.re] Could not fetch resource information.  [403, #0]
        [Wed Mar 31 23:41:55 2010] [error] [client my.ip.he.re] (2)No such file or directory: The URI does not contain the name of a repository.  [403, #190001]

    Edit 2: svn info doesn't give anything useful either:

        [root@eduro eduro.nl]# svn info http://svn.domain.com/repos/
        Username: username
        Password for 'username':
        svn: Repository moved permanently to 'http://svn.domain.com/repos/'; please relocate

    I also tried doing a local checkout (svn checkout file:///var/www/svn/repos/reposname) and that works fine (adding and committing also work fine). So it seems to have something to do with Apache. Some other information: I'm running CentOS 5.3, Plesk 9.3, and Subversion 1.6.9 (r901367).

    Edit 3: I tried moving the repositories, but it didn't make any difference. SELinux is disabled, so that isn't it either.

    Edit 4: Really? Nobody :(?
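    One hedged observation: "Repository moved permanently" is the classic symptom of the SVN <Location> overlapping a path Apache also serves as a plain directory, so the request never reaches mod_dav_svn. Mounting SVN somewhere other than / may avoid the clash with the Plesk-managed vhosts:

        <Location /svn>
            DAV svn
            SVNParentPath /var/www/svn/repos/
            AuthType Basic
            AuthName "Authorization Realm"
            AuthUserFile /var/www/svn/auth/svn.htpasswd
            Require valid-user
        </Location>

        # the checkout URL then names the repository under the mount point:
        # svn checkout http://svn.host.com/svn/reposname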


  • CentOS install / partitioning

    - by ServerSideX
    I'm using NOC-PS to remotely install CentOS 6.2 via KVM / IPMI. I'm going to install cPanel as well, and they recommend this layout:

        /boot (99MB)
        swap  (2x server RAM)
        /     (remainder)

    In the OS install profile within the NOC-PS software, it shows as this:

        part /boot --fstype ext2 --size 250
        part pv.01 --size 1 --grow
        volgroup vg pv.01
        logvol / --vgname=vg --size=1 --grow --fstype ext4 --fsoptions=discard,noatime --name=root
        logvol /tmp --vgname=vg --size=1024 --fstype ext4 --fsoptions=discard,noatime --name=tmp
        logvol swap --vgname=vg --recommended --name=swap

    By the time the default partition setup was done installing CentOS, I got this:

        [root@server005 ~]# df -h
        Filesystem           Size  Used Avail Use% Mounted on
        /dev/mapper/vg-root  532G  907M  504G   1% /
        tmpfs                7.8G     0  7.8G   0% /dev/shm
        /dev/sda1            243M   28M  202M  13% /boot
        /dev/mapper/vg-tmp  1008M   34M  924M   4% /tmp

        [root@server005 ~]# cat /etc/fstab
        #
        # /etc/fstab
        # Created by anaconda on Fri Dec  7 18:47:24 2012
        #
        # Accessible filesystems, by reference, are maintained under '/dev/disk'
        # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
        #
        /dev/mapper/vg-root   /         ext4    discard,noatime  1 1
        UUID=58b31aaf-5072-4fb1-a858-33bc316fa793 /boot ext2 defaults 1 2
        /dev/mapper/vg-tmp    /tmp      ext4    discard,noatime  1 2
        /dev/mapper/vg-swap   swap      swap    defaults         0 0
        tmpfs                 /dev/shm  tmpfs   defaults         0 0
        devpts                /dev/pts  devpts  gid=5,mode=620   0 0
        sysfs                 /sys      sysfs   defaults         0 0
        proc                  /proc     proc    defaults         0 0

    My question is: how should the NOC-PS install profile look to get the recommended cPanel partitioning? The server has 16 GB RAM, dual 600 GB SAS drives, and will be used for cPanel shared hosting.
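    A hedged translation of cPanel's recommendation into the kickstart dialect the profile already uses (16 GB RAM means 32 GB swap; the LVM scheme is kept, though cPanel's own docs describe plain partitions):

        part /boot --fstype ext2 --size 250
        part pv.01 --size 1 --grow
        volgroup vg pv.01
        logvol swap --vgname=vg --size=32768 --name=swap
        logvol / --vgname=vg --size=1 --grow --fstype ext4 --name=root

    Note that the original profile's "logvol swap --recommended" sizes swap by an Anaconda heuristic rather than cPanel's 2x RAM rule, and its separate 1 GB /tmp is a NOC-PS choice, not a cPanel one.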


  • Can't mount hard drive. Ubuntu 12.04

    - by Sam
    I am trying to recover some pictures from my 320 GB hard disk, so I put in a live Ubuntu CD, which I'm running right now. In the devices list it shows my USB drive, but not the 320 GB hard disk. I can see the disk in Disk Utility (it says it's /dev/sda), but it's not mounted, and it says the disk has a few bad sectors but is OK. In Disk Usage Analyzer it says my maximum capacity is 13.4 GB, so it's definitely not using the 320 GB hard disk. I tried the following:

        sudo mkdir /media/newhd    (worked)
        sudo mount /dev/sda /media/newhd    (didn't work; it says I must specify the filesystem type)

    I then tried:

        fsck.ext4 -f /dev/sda

    That didn't work either. It said:

        Superblock invalid, trying backup blocks...
        Bad magic number in super-block while trying to open /dev/sda
        The superblock could not be read or does not describe a correct ext2
        filesystem. If the device is valid and it contains an ext2 filesystem
        (and not swap or ufs or something else), then the superblock is corrupt,
        and you might try running e2fsck with an alternate superblock

    Does anyone have any ideas? The whole problem started when my Windows Vista said "Can't find operating system". Any ideas on how I can get onto my hard drive at /dev/sda?
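    A hedged first step: /dev/sda is the whole disk, while a Vista installation lives in an NTFS partition such as /dev/sda1, which is what mount (and any fsck) should be pointed at:

        sudo fdisk -l /dev/sda                                  # list the partitions
        sudo mount -t ntfs-3g -o ro /dev/sda1 /media/newhd      # read-only; NTFS assumed

    If the partition table itself is gone, testdisk can usually recover it, and ddrescue can image a disk with bad sectors before any further poking.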


  • How can I measure actual memory usage from my running processes?

    - by NullUser
    I have two servers, server1 and server2. Both of them are identical HP blades, running the exact same OS (RHEL 5.5). Here's the output of free for both of them:

        ### server1:
                     total       used       free     shared    buffers     cached
        Mem:       8017848    2746596    5271252          0     212772    1768800
        -/+ buffers/cache:     765024    7252824
        Swap:     14188536          0   14188536

        ### server2:
                     total       used       free     shared    buffers     cached
        Mem:       8017848    4494836    3523012          0     212724    3136568
        -/+ buffers/cache:    1145544    6872304
        Swap:     14188536          0   14188536

    If I understand correctly, server2 is using significantly more memory for disk I/O caching, which still counts as memory used. But both are running the same OS, and if I remember correctly I configured both with the same parameters when they were installed. I did a diff on /etc/sysctl.conf and they are identical.

    The problem is, I am collecting memory usage and other metrics over a period of time (e.g. vmstat, iostat, etc.) while a load is generated on the system. The memory used for caching is throwing off my calculations of the results. How can I measure the actual memory usage of my running processes, rather than system usage? Is used - (buffers + cached) a valid way to measure this?
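    A hedged pair of measurements matching the formula in the question (on RHEL 5's procps, the -/+ buffers/cache row is exactly used - (buffers + cached)):

        # system-wide, cache excluded -- handy for periodic logging
        free | awk '/buffers\/cache/ {print $3}'

        # resident set sizes of all processes, summed (over-counts shared pages)
        ps -eo rss= | awk '{sum += $1} END {print sum " kB"}'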


  • Open file in local text editor from within an SSH connection

    - by Sam
    I'm not a vim guy. I'd like to be able to open log files in Sublime Text when in an SSH connection from within Terminal. Is there a way I could do this? I'm thinking there must be a command or something that could copy the file over to a temporary directory in OS X, open it in Sublime Text, and then copy it back to the original location through SSH when I save it, similar to how FileZilla does it. I'm on Mac OS X MT. The server I SSH into is running Ubuntu.
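    A hedged sketch of one common approach from the OS X side: mount the remote directory with sshfs (via OSXFUSE; both assumed installed) so Sublime Text edits remote files directly and saves travel back over SSH:

        mkdir -p ~/remote-logs
        sshfs user@server:/var/log ~/remote-logs
        subl ~/remote-logs/syslog      # subl is Sublime Text's CLI helper
        umount ~/remote-logs           # when finished

    rmate/rsub is another route with the same feel; it tunnels individual files back through the SSH connection itself.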


  • Is there a DBus command to set the position of the KDE panel?

    - by Liss
    I have a single vertical KDE panel. When I switch from a single-monitor X screen back to a triple-monitor X screen (with xrandr), my KDE panel ends up on the right edge of the middle monitor instead of the right edge of the right monitor. Many windows are also in the wrong place after the switch, but I have a script which restores the geometry of all windows (as it used to be in the triple-monitor state before I switched to single-monitor mode), so that is not a problem. Unfortunately, it does not work for the KDE panel; it stays in the wrong place. I guess I need to use DBus to restore the KDE panel's position (programmatically move it to the right edge of the right screen). I tried googling, but it seems this is not very well documented. Is there a DBus command to set the position of the KDE panel?


  • Unable to use cloned VM, OpenSUSE, VirtualBox

    - by Kremchik
    I've cloned a VM and now while booting it I see this message:

        Trying manual resume from /dev/sda1
        Invoking userspace resume from /dev/sda1
        resume: libgcrypt version: 1.5.0
        Trying manual resume from /dev/sda1
        Invoking in-kernel resume from /dev/sda1
        Waiting for device /dev/disk/by-id/ata-VBOX_HARDDISK_.....-part2 to appear: ...
        Could not find /dev/disk/...-part2
        Want me to fall back to /dev/disk/...-part2 (Y/n)

    If I press 'Y' it tries to boot again, fails, and then exits to /bin/sh. If I press 'n' it exits to /bin/sh immediately. I've read a solution here: http://diggerpage.blogspot.com/2011/11/cannot-boot-opensuse-12-after-cloning.html but I don't understand how to access the files on the disk to edit /etc/fstab and /boot/grub/menu.lst.
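    A hedged sketch of reaching those files: boot the VM from a live/rescue ISO so the broken root isn't in the way, mount the root partition, and replace the stale by-id names with plain device paths (the partition number below is an assumption):

        mount /dev/sda2 /mnt
        vi /mnt/etc/fstab            # swap /dev/disk/by-id/ata-VBOX_... for /dev/sda2
        vi /mnt/boot/grub/menu.lst   # same for the root= and resume= kernel arguments
        umount /mnt

    Cloning gives the virtual disk a new serial number, which is why every by-id path from the original VM dangles.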


  • Formatting pwd/ls for use with scp

    - by eumiro
    I have two terminal windows with bash. One is a local shell on the client computer; the other holds an SSH session on the server. On the server, I am in a directory, looking at a file I would like to copy to my client using scp from the client. On the server I see:

        user@server:/path$ ls filename
        filename

    I can now type scp in the client shell, select and copy user@server:/path from the server shell and paste it into the client shell, then type a slash, copy and paste the filename, and append a dot to get:

        user@client:~$ scp user@server:/path/filename .

    which copies the file from the server to the client. Now I am searching for a command on the server that would work like this:

        user@server:/path$ special_ls filename
        user@server:/path/filename

    It would give me the complete scp-ready string to copy & paste into the client shell. Something in the form echo $USER@$HOSTNAME:$(pwd)/$filename, working with relative/absolute paths. Is there any such command/switch combination, or do I have to hack it myself? Thank you very much.
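    A hedged hack along exactly those lines, for the server's ~/.bashrc (readlink -f, from GNU coreutils, resolves relative paths, and also symlinks, which may or may not be wanted):

        scppath() {
            local f
            for f in "$@"; do
                printf '%s@%s:%s\n' "$USER" "$HOSTNAME" "$(readlink -f "$f")"
            done
        }

        # usage, matching the question's example:
        #   user@server:/path$ scppath filename
        #   user@server:/path/filename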


  • GitLab on a fresh Ubuntu 13 EC2 instance

    - by Polly
    I've spun up a fresh Amazon EC2 instance for a micro Ubuntu 13 server to be used as a GitLab server. I know the specs are a little low, but it should serve well for my purposes. It has an elastic (static) IP address that I have created an A record for: git.mydomain.com. The first thing I did to the instance was add 1 GB of swap to keep it happy from a memory perspective. I then set the hostname of the box to git.mydomain.com and followed https://github.com/gitlabhq/gitlabhq/blob/6-2-stable/doc/install/installation.md to the letter. Everything seems to have worked, except for the web server side of things. Doing a gitlab:check shows the following:

        Checking Environment ...

        Git configured for git user? ... yes
        Has python2? ... yes
        python2 is supported version? ... yes

        Checking Environment ... Finished

        Checking GitLab Shell ...

        GitLab Shell version >= 1.7.4 ? ... OK (1.7.4)
        Repo base directory exists? ... yes
        Repo base directory is a symlink? ... no
        Repo base owned by git:git? ... yes
        Repo base access is drwxrws---? ... yes
        update hook up-to-date? ... yes
        update hooks in repos are links: ... can't check, you have no projects
        Running /home/git/gitlab-shell/bin/check
        Check GitLab API access: /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `initialize': Connection refused - connect(2) (Errno::ECONNREFUSED)
                from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `open'
                from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `block in connect'
                from /usr/local/lib/ruby/2.0.0/timeout.rb:52:in `timeout'
                from /usr/local/lib/ruby/2.0.0/net/http.rb:877:in `connect'
                from /usr/local/lib/ruby/2.0.0/net/http.rb:862:in `do_start'
                from /usr/local/lib/ruby/2.0.0/net/http.rb:851:in `start'
                from /home/git/gitlab-shell/lib/gitlab_net.rb:62:in `get'
                from /home/git/gitlab-shell/lib/gitlab_net.rb:29:in `check'
                from /home/git/gitlab-shell/bin/check:11:in `<main>'
        gitlab-shell self-check failed
        Try fixing it:
        Make sure GitLab is running;
        Check the gitlab-shell configuration file:
        sudo -u git -H editor /home/git/gitlab-shell/config.yml
        Please fix the error above and rerun the checks.

        Checking GitLab Shell ... Finished

        Checking Sidekiq ...

        Running? ... yes
        Number of Sidekiq processes ... 1

        Checking Sidekiq ... Finished

        Checking GitLab ...

        Database config exists? ... yes
        Database is SQLite ... no
        All migrations up? ... yes
        GitLab config exists? ... yes
        GitLab config outdated? ... no
        Log directory writable? ... yes
        Tmp directory writable? ... yes
        Init script exists? ... yes
        Init script up-to-date? ... yes
        projects have namespace: ... can't check, you have no projects
        Projects have satellites? ... can't check, you have no projects
        Redis version >= 2.0.0? ... yes
        Your git bin path is "/usr/bin/git"
        Git version >= 1.7.10 ? ... yes (1.8.3)

        Checking GitLab ... Finished

    It seems like I'm very nearly there. Searching on this error I have only found advice that unfortunately hasn't helped. I'm not using any kind of SSL setup, which a lot of the posts I found were about. I have tried appending 127.0.0.1 git.mydomain.com to /etc/hosts and giving the instance a reboot, but there was no change. My config/gitlab.yml file has host: git.mydomain.com in it, and my gitlab-shell/config.yml has gitlab_url: "http://git.mydomain.com/" in it. I'm sure I'm missing something simple, but I've been through every relevant link I can find and have had no positive results; thank you in advance for any help!
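    A hedged reading of the failure: the gitlab-shell self-check calls gitlab_url (http://git.mydomain.com/) and gets Errno::ECONNREFUSED, meaning nothing answered on port 80 at whatever address that name resolved to from the instance. Assuming the 6-2-stable manual install's nginx/unicorn pair, worth checking:

        sudo netstat -tlnp | grep -E ':(80|8080) '   # nginx on 80? unicorn on 8080?
        sudo service nginx status                    # the guide's nginx step is easy to miss
        sudo service gitlab status
        curl -I http://127.0.0.1/                    # bypasses DNS and the elastic IP

    The /etc/hosts entry pointing git.mydomain.com at 127.0.0.1 is a reasonable idea on EC2 (an instance can't always reach its own elastic IP from inside), but it only helps once something is actually listening locally.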

