Search Results

Search found 26179 results on 1048 pages for 'linux from scratch'.

  • Change permission to /proc/net/ip_conntrack on Ubuntu server 9.10

    - by bjarkef
    Hi, I have a script that needs to extract certain information from the /proc/net/ip_conntrack file once in a while. I do not wish to run this script as the root user. The default permissions for the file are:

        $ ls -lah /proc/net/ip_conntrack
        -r--r----- 1 root root 0 2010-03-28 12:18 /proc/net/ip_conntrack

    I can change them with:

        sudo chmod o+r /proc/net/ip_conntrack

    But that does not stick after a reboot. Is there some configuration file for file permissions in the /proc directory in Ubuntu Server 9.10? Or do I just have to stick a chmod line in some startup script?
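
    Since /proc is a virtual filesystem rebuilt on every boot, the usual answer is a startup hook rather than a configuration file. A minimal sketch, assuming the stock /etc/rc.local is still executed at the end of boot:

        # /etc/rc.local -- re-apply the permission on every boot (sketch)
        chmod o+r /proc/net/ip_conntrack
        exit 0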

  • startx error no desktop manager

    - by WikiWitz
    I have Backtrack 5 R2 with KDE. I started recovery mode and did a failsafe xorg configuration. After that, I cannot load the KDE desktop when I enter the startx command after logging in. Whenever I run startx (as root), the result resembles the following mock-up (not the actual output; I drew it in MS Paint because I cannot take a screenshot): the screen is just black with an icon in the upper left corner, and a pop-up menu appears when left-clicking the mouse. I tried the cp xorg.conf.failsafe xorg.conf advice from other websites with no luck. I have also tried the 'reconfigure' option from the recovery mode with no success.
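
    One avenue the post doesn't mention (a hedged suggestion, not a guaranteed fix): let the X server probe the hardware and write a fresh configuration, then test it in place of the failsafe one:

        # Run as root with X stopped; writes xorg.conf.new to the current directory
        Xorg -configure
        cp ~/xorg.conf.new /etc/X11/xorg.conf
        startx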

  • Error headers: ap_headers_output_filter() after putting cache header in htaccess file

    - by Brad
    Receiving this error:

        [debug] mod_headers.c(663): headers: ap_headers_output_filter()

    after I included this within the .htaccess file:

        # 6 DAYS
        <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
          Header set Cache-Control "max-age=518400, public"
        </FilesMatch>
        # 2 DAYS
        <FilesMatch "\.(xml|txt)$">
          Header set Cache-Control "max-age=172800, public, must-revalidate"
        </FilesMatch>
        # 2 HOURS
        <FilesMatch "\.(html|htm)$">
          Header set Cache-Control "max-age=7200, must-revalidate"
        </FilesMatch>

    Any help is appreciated as to what I could do to fix this?
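
    Worth noting (a hedged observation): the [debug] tag marks this as a trace message from mod_headers, not a failure; it shows up because the server's LogLevel is set to debug. If the headers are actually being sent, raising the log level silences the message:

        # In the main Apache config (not .htaccess) -- sketch
        LogLevel warn

    A quick check that the headers work: curl -I http://example.com/some.css and look for the Cache-Control line.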

  • Improving performance by using an additional static file server

    - by Max
    Hello there, I'm planning a large website that includes many static assets (js, css, images and thumbnails) in the generated pages. That website will use TYPO3 as CMS (it is a customer requirement). I guess I could seriously improve performance / page load times by using a two-server setup: one server where the main application (PHP) runs, and another one where the static files sit, served by a trimmed-down version of Apache or something like lighttpd. Including e.g. js or css files from the file server is of course no big deal: just use an absolute URL like http://static.example.com/js/main.js and be done with it.

    But: that website will have pages with MANY thumbnails of e.g. product images on them. So I see two problems when the main application tries to create a thumbnail of some image: the original image like products/some.jpg is uploaded to the static file server and is therefore not on the same server as the PHP application that tries to create the thumbnail; and TYPO3 writes created thumbnails to a temp directory which is expected to be on the same server. Therefore, hundreds of thumbnails will be written to and served from that temp directory, which is on the same server as the main application. The static file server is in that case basically useless; all thumbnails will be requested from the server of the main application.

    So, my question is: how to overcome these shortcomings? Is it possible to "symlink" some directories to another server? For example, if PHP tries to open the original product image for thumbnail creation with imagecreate("products/some.jpg"), could the products folder actually "point" to the products folder on the static image server? I know something like this can be done with .htaccess, but is it possible on the file system level?
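
    On the file-system-level question: yes, a network mount can make a remote directory appear local. A minimal sketch assuming the static server exports the directory over NFS (hostnames and paths are illustrative):

        # On the application server: mount the static server's products
        # directory where PHP expects to find it
        mount -t nfs static.example.com:/var/www/products /var/www/products

    sshfs is a common alternative when setting up an NFS export is not an option.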

  • Can I automatically add a new host to known_hosts?

    - by gareth_bowles
    Here's my situation: I'm setting up a test harness that will, from a central client, launch a number of virtual machine instances and then execute commands on them via SSH. The virtual machines will have previously unused hostnames and IP addresses, so they won't be in the ~/.ssh/known_hosts file on the central client. The problem I'm having is that the first SSH command run against a new virtual instance always comes up with an interactive prompt:

        The authenticity of host '[hostname] ([IP address])' can't be established.
        RSA key fingerprint is [key fingerprint].
        Are you sure you want to continue connecting (yes/no)?

    Is there a way that I can bypass this and get the new host to be already known to the client machine, maybe by using a public key that's already baked into the virtual machine image? I'd really like to avoid having to use Expect or whatever to answer the interactive prompt if I can.
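
    Two standard mechanisms, sketched with a hypothetical hostname; pick whichever fits the trust model:

        # 1) Pre-seed known_hosts by fetching the key at provision time
        ssh-keyscan -H vm-host-01 >> ~/.ssh/known_hosts

        # 2) Or disable the check for these hosts only, in ~/.ssh/config
        Host vm-*
            StrictHostKeyChecking no
            UserKnownHostsFile /dev/null

    Note that ssh-keyscan is vulnerable to a man-in-the-middle at scan time; baking a fixed host key into the VM image and appending its public half to known_hosts avoids that.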

  • File permission woes on an Ubuntu ec2 instance

    - by Pardoner
    I've set up an Amazon ec2 instance and I'm having some file permission issues. I've created myself a new user and added myself to the following groups:

        adm:x:4:me,ubuntu
        sudo:x:27:me
        www-data:x:33:me,www-data
        ssh:x:108:me
        admin:x:111:me
        ubuntu:x:1000:www-data,me
        me:x:1001:me

    But when I cd /var/www I can't do simple commands without using sudo. So I ran chown -R www-data:www-data /var/www to ensure that I'm in the owning group, but I still have to type sudo for everything. If I sudo su www-data it works fine. Since I'm in the www-data group, shouldn't I have the same privileges as www-data? One strange thing I'm noticing is that when I ls -l it lists the owner but not the group names. Could this possibly be part of the issue? Is it possible for a directory to not be part of a group?

        drwxr-xr-x  4 www-data 4.0K Oct 24 16:39 .
        drwxr-xr-x 14 root     4.0K Oct 10 16:58 ..
        drwxrwxr-x  9 www-data 4.0K Oct 23 04:03 admin.mywebsite.com
        drwxrwxr-x  2 www-data 4.0K Oct  4 00:29 mywebsite.com
        drwxrwxr-x  9 www-data 4.0K Oct 23 04:03 staging.mywebsite.com

    Edit: It appears I had some alias messing with my ls command. By calling \ls -l I can see that all my files are in the correct group.
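
    A detail that often bites here (hedged, but standard Unix behavior): supplementary group membership is read at login, so changes to /etc/group do not affect sessions that were already open. Either log out and back in, or start a shell under the new group:

        # Start a subshell whose primary group is www-data (no re-login needed)
        newgrp www-data
        # Confirm which groups the current session actually has
        id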

  • How do I limit concurrent sftp / port forwarding logins

    - by Kyoku
    I have ssh set up so my users can only access sftp and port forwarding. How can I limit the number of concurrent logins on a per-user basis? In my sshd_config I have UsePAM set to yes, and in /etc/security/limits.conf I have:

        username - maxlogins 1

    I also tried:

        username hard maxlogins 1

    Neither of these works, and the users can still log in multiple times.
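
    A hedged pointer: limits.conf is only consulted if pam_limits runs for the sshd session, so it is worth confirming the module is in sshd's PAM stack (in addition to UsePAM yes, which is already set):

        # /etc/pam.d/sshd -- pam_limits must appear in the session phase
        session    required     pam_limits.so

    One known caveat: maxlogins counts entries in utmp, and sftp-only or port-forwarding sessions that never allocate a TTY may not be recorded there, in which case PAM has nothing to count.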

  • How to start a service at boot time in Ubuntu 12.04, run as a different user?

    - by Alex
    I have a server, ClueReleaseManager, which I have installed on an Ubuntu 12.04 system under a separate user (named pypi), and I want to be able to start this server at startup. I have already tried to create a simple bash script with some commands (log in as user pypi, use a virtual Python environment, start the server), but this does not work properly: either the terminal crashes, or when I ask for the status of the service, it is started but I am logged in as user pypi...? So, here is the question: what are the steps to take to make sure the ClueReleaseManager service properly starts up at boot time, and can be controlled (start/stop/...) during runtime, while the service runs as the user pypi? Additional information and constraints:

    - I want to do this as simply as possible, without any other packages/programs to be installed.
    - I am not familiar with the Ubuntu 12.04 init structure.
    - All the information I found on the web is very sparse, confusing, incorrect, or does not apply to my case of running a service as a user other than root.
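
    Ubuntu 12.04 uses Upstart for system services, so a small job file is the no-extra-packages route. A sketch; the virtualenv path and command name are assumptions to adapt:

        # /etc/init/cluereleasemanager.conf
        description "ClueReleaseManager PyPI server"
        start on runlevel [2345]
        stop on runlevel [!2345]
        respawn
        # Run the daemon as the unprivileged user "pypi"
        exec su -s /bin/sh -c 'exec /home/pypi/env/bin/cluereleasemanager' pypi

    Afterwards it can be controlled with start cluereleasemanager, stop cluereleasemanager, and status cluereleasemanager.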

  • High fan speed for no apparent reason

    - by Klaus
    For a few weeks, the fans of my Lenovo B590 laptop, running Xubuntu 14, have been switching to high speed a few minutes after the machine is turned on. The fans won't slow down until I turn the computer off. This is quite strange, since:

    - This didn't happen before.
    - The temperatures are quite low (are they?):

        $ sensors
        Adapter: Virtual device
        temp1:         +36.0°C  (crit = +88.0°C)
        temp2:         +30.0°C  (crit = +126.0°C)

        coretemp-isa-0000
        Adapter: ISA adapter
        Physical id 0:  +37.0°C  (high = +72.0°C, crit = +90.0°C)
        Core 0:         +34.0°C  (high = +72.0°C, crit = +90.0°C)
        Core 1:         +31.0°C  (high = +72.0°C, crit = +90.0°C)

        thinkpad-isa-0000
        Adapter: ISA adapter
        fan1:           0 RPM

        pkg-temp-0-virtual-0
        Adapter: Virtual device
        temp1:         +37.0°C

        $ sudo hddtemp /dev/sda
        /dev/sda: ST500LT012-9WS142: 33°C

    - The computer is under low load:

        top - 08:30:15 up 16 min,  2 users,  load average: 0.28, 0.23, 0.23
        Tasks: 197 total,   1 running, 196 sleeping,   0 stopped,   0 zombie
        %Cpu(s):  0.8 us,  0.5 sy,  0.0 ni, 98.7 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
        KiB Mem:   3607944 total,  1973956 used,  1633988 free,    99660 buffers
        KiB Swap:  3744764 total,        0 used,  3744764 free.   789936 cached Mem

    - The BIOS is up to date (and there are no fan settings in it).
    - The fan is clean and dust-free.

    Why would the BIOS turn the fans to high speed when there seems to be no reason for it? It seems the fan cannot be controlled manually on this model, so I guess the only solution is to understand why this happens.
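
    A hedged diagnostic: the thinkpad-isa-0000 fan1: 0 RPM reading suggests the thinkpad_acpi driver cannot actually read this fan, so the embedded controller may be ramping it over a sensor the OS never sees. If the module cooperates on this machine, its manual override can at least be tested:

        # Allow manual fan control (sketch; thinkpad_acpi may not support the B590)
        echo "options thinkpad_acpi fan_control=1" | sudo tee /etc/modprobe.d/thinkpad.conf
        sudo modprobe -r thinkpad_acpi && sudo modprobe thinkpad_acpi
        # Then try forcing automatic regulation
        echo level auto | sudo tee /proc/acpi/ibm/fan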

  • How can I turn off font-antialiasing only for gnome-terminal, but not for other applications?

    - by dan
    I'm running GNOME (gnome-session under xmonad). I want to turn off antialiasing (i.e. use monochrome mode) for fonts in gnome-terminal. But I want to retain antialiasing for other applications, like Firefox. Is this possible? Antialiasing is great and almost necessary for using Firefox or Chrome. But it makes the fonts in gnome-terminal blurry at sizes around 12 or smaller. Otherwise, I'll just have to use xterm, which seems not to anti-alias its fonts under any circumstances.
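
    Fontconfig settings are normally global, but an individual process can be pointed at its own configuration through the FONTCONFIG_FILE environment variable. A sketch; the file name is arbitrary:

        # ~/.fonts-mono.conf -- fontconfig file that disables antialiasing
        <?xml version="1.0"?>
        <!DOCTYPE fontconfig SYSTEM "fonts.dtd">
        <fontconfig>
          <match target="font">
            <edit name="antialias" mode="assign"><bool>false</bool></edit>
          </match>
        </fontconfig>

        # Launch only gnome-terminal with that configuration
        FONTCONFIG_FILE=~/.fonts-mono.conf gnome-terminal

    One caveat: if a gnome-terminal process is already running, the new window may be served by the existing process and ignore the environment, so test from a session with no terminals open.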

  • Partition Label Problem after DD Image

    - by bobby
    After imaging a 100GB hard drive into an image file with dd, I dd'd the image onto a larger HDD. After booting I get:

        mkrootdev: label / not found

    I have gone in with Finnix and relabeled the partition to the same label with e2label, and I still have problems. Has anyone resolved this before?
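
    A hedged checklist for this situation; the device name is illustrative:

        # What does the filesystem actually carry?
        e2label /dev/sda1
        blkid /dev/sda1
        # Set the label the initrd is looking for ("/" in this case)
        e2label /dev/sda1 /

    If the label checks out, the other usual suspect is the initrd itself: it may have been built for the old disk layout, in which case rebuilding it (e.g. with mkinitrd on the distribution in question) is the fix.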

  • Make `mount` choose a device

    - by o_O Tync
    I have a mount point (let it be /media/question) and two possible devices: a physical HDD and a remote NFS folder. Sometimes I plug the device in physically; in other cases I mount it via NFS. Is there a way to specify both of them in fstab so that executing mount /media/question will prefer the physical volume, and fall back to NFS when it's not available?
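
    To my knowledge fstab has no fallback logic of its own, so the usual workaround is to mark both entries noauto and let a small wrapper script choose. A sketch with illustrative device and export names:

        # /etc/fstab -- neither entry mounts automatically
        /dev/disk/by-label/question  /media/question  ext4  noauto  0 0
        server:/export/question      /media/question  nfs   noauto  0 0

    and the wrapper:

        #!/bin/sh
        # mount-question: prefer the physical disk, fall back to NFS
        if [ -b /dev/disk/by-label/question ]; then
            mount /dev/disk/by-label/question
        else
            mount -t nfs server:/export/question /media/question
        fi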

  • Compressing a dd backup on the fly

    - by Phil
    Maybe this will sound like a dumb question, but the way I'm trying to do it doesn't work. I'm on a live CD, the drive is unmounted, etc. When I do the backup this way:

        sudo dd if=/dev/sda2 of=/media/disk/sda2-backup-10august09.ext3 bs=64k

    ...it would normally work, but I don't have enough space on the external HD I'm copying to (it ALMOST fits). So I wanted to compress it this way:

        sudo dd if=/dev/sda2 | gzip > /media/disk/sda2-backup-10august09.gz

    ...but I got permission denied. I don't understand.
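
    The likely explanation (hedged, but this is the classic pipeline pitfall): sudo elevates only dd; the > redirection is performed by the non-root shell, which apparently cannot write to /media/disk. Two common fixes:

        # Let a root shell own the whole pipeline, including the redirect
        sudo sh -c 'dd if=/dev/sda2 bs=64k | gzip > /media/disk/sda2-backup-10august09.gz'

        # Or keep the pipe and write through tee running as root
        sudo dd if=/dev/sda2 bs=64k | gzip | sudo tee /media/disk/sda2-backup-10august09.gz > /dev/null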

  • How to jump to a particular flag in a Unix manpage?

    - by dotancohen
    When reading a Unix manpage in the terminal, how can I jump straight to the description of a particular flag? For instance, I need to know the meaning of the -o flag for mount: I run man mount and want to jump to the place where -o is described. Currently I search with /-o, but that option is mentioned in several places before the section that actually describes it, so I must jump around quite a bit. Thanks.
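
    A trick that often narrows the search (hedged; it relies on the manpage's layout): option descriptions usually begin at the start of an indented line, so anchor the pattern there. man uses less as its pager by default, and less accepts regular expressions:

        man mount
        # inside the pager:
        #   /^ *-o      search for "-o" only at the beginning of a line
        #   n           repeat the search to reach the next match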

  • How can I avoid SSH's host verification for known hosts?

    - by shantanuo
    I get the following prompt every time I try to connect to a server using SSH. I type "yes", but is there a way to avoid this?

        The authenticity of host '111.222.333.444 (111.222.333.444)' can't be established.
        RSA key fingerprint is f3:cf:58:ae:71:0b:c8:04:6f:34:a3:b2:e4:1e:0c:8b.
        Are you sure you want to continue connecting (yes/no)?
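
    If the prompt reappears for a host that should already be known, the key is not making it into known_hosts (or the file is unreadable). A sketch of the usual remedies; note that weakening the check trades away man-in-the-middle protection:

        # Accept and store the key non-interactively, once
        ssh-keyscan -H 111.222.333.444 >> ~/.ssh/known_hosts

        # Or relax checking for this host only, in ~/.ssh/config
        Host 111.222.333.444
            StrictHostKeyChecking no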

  • How can I register a custom protocol with xdg?

    - by julien
    I've been struggling this morning trying to associate an application with a custom protocol, namely emacsclient and org-protocol. I'm calling this protocol from a web browser bookmarklet, and I get the following behaviour: in Chromium, the "Launch Application" dialog comes up and calls xdg-open org-protocol://..., which ends up firing a new Chromium frame. In Firefox, I've tried setting network.protocol-handler.app.org-protocol to an empty string or to my emacsclient path; either way I get the error message "Firefox doesn't know how to open this address, because the protocol (org-protocol) isn't associated with any program", without even showing any external application selection dialog. I'm not using any desktop environment, so I need to make this work strictly with xdg. However, despite reading the shared MIME info spec etc., I still can't fathom a working configuration.
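
    The xdg mechanism for custom schemes is a desktop entry that declares an x-scheme-handler MIME type and is then registered as the default handler. A sketch:

        # ~/.local/share/applications/org-protocol.desktop
        [Desktop Entry]
        Name=org-protocol handler
        Exec=emacsclient %u
        Type=Application
        Terminal=false
        MimeType=x-scheme-handler/org-protocol;

        # Register it as the default handler for the scheme
        update-desktop-database ~/.local/share/applications
        xdg-mime default org-protocol.desktop x-scheme-handler/org-protocol

    After this, xdg-open org-protocol://... should invoke emacsclient, and browsers that defer to xdg-open will follow.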

  • Rebuilding LVM after RAID recovery

    - by Xiong Chiamiov
    I have 4 disks RAID-5ed to create md0, and another 4 disks RAID-5ed to create md1. These are then combined via LVM to create one partition. There was a power outage while I was gone, and when I got back, it looked like one of the disks in md1 was out of sync: mdadm kept claiming that it could only find 3 of the 4 drives. The only thing I could do to get anything to happen was to use mdadm --create on those four disks, then let it rebuild the array. This seemed like a bad idea to me, but none of the stuff I had was critical (although it'd take a while to get it all back), and a thread somewhere claimed that this would fix things. If this trashed all of my data, then I suppose you can stop reading and just tell me that.

    After waiting four hours for the array to rebuild, md1 looked fine (I guess), but LVM was complaining about not being able to find a device with the correct UUID, presumably because md1 changed UUIDs. I used the pvcreate and vgcfgrestore commands as documented here. Attempting to run lvchange -a y on it, however, gives me a "resume ioctl failed" message. Is there any hope of recovering my data, or have I completely mucked it up?
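
    For reference, the usual sequence when a PV's underlying device changes UUID (a hedged sketch; the UUID and VG name are placeholders to take from the files under /etc/lvm/backup):

        # Recreate the PV with its old UUID, from the saved metadata
        pvcreate --uuid "<old-pv-uuid>" --restorefile /etc/lvm/backup/<vgname> /dev/md1
        vgcfgrestore <vgname>
        vgchange -ay <vgname>

    Whether the data survives, though, depends on the earlier mdadm --create having used the same disk order and chunk size as the original array; if it did not, the LVM layer is restoring metadata on top of scrambled blocks.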

  • Users in Ubuntu; can't figure it out

    - by Camran
    I am the only one who will have access to my website. I just installed my VPS and managed to get most stuff working. However, I am stuck on the "members" part. Currently, everything has been done as "root". I have read posts saying that I should create a user, because root isn't ideal. I have found a thousand guides on how to create a user, but not what to do next.

    1. Should I create a user with adduser username and then add the user to a group? But which group?
    2. And will the user then be able to do everything as I have done logged on as "root"?
    3. And could somebody please explain what "sudo" has to do with this (if anything at all)?

    Thanks
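
    A sketch of the common Ubuntu convention (the username is illustrative): members of the sudo group may run individual commands as root by prefixing them with sudo, which addresses all three points at once.

        adduser camran               # create the user, home directory and password
        usermod -aG sudo camran      # -a appends; members of "sudo" get admin rights

    On older Ubuntu releases the administrative group was called admin rather than sudo.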

  • Adding user to chroot environment

    - by Neo
    I've created a chroot system on my Ubuntu using schroot and debootstrap, based on minimal Ubuntu. However, I can't seem to add a new user to this chroot environment. Here is what happens: I enter schroot as root and add a new user (tried both the adduser and useradd commands). The username shows up in the /etc/passwd file and I can su into the new user. So far so good. But when I log out of schroot and re-enter schroot, the user I created has vanished! There is no mention of that user in /etc/passwd either. How do I make the new user permanent?
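
    The behaviour described matches schroot's session-type chroots, which copy or union-mount the base chroot and throw changes away on exit (hedged; the exact behaviour depends on schroot.conf). Two things worth checking:

        # /etc/schroot/schroot.conf -- a plain directory chroot persists changes
        [mychroot]
        type=directory
        directory=/srv/chroot/mychroot
        users=neo

        # For snapshot/source-based chroot types, enter the source chroot
        # to make permanent changes:
        schroot -c source:mychroot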

  • crontab: question about a special case of the dash character in the time field spec

    - by mdpc
    In the SuSE /etc/crontab, the entry that runs the cron.{hourly,daily,monthly,weekly} scripts is coded as:

        -*/15 * * * *   root  test -x /usr/lib/cron/run-crons && /usr/lib/cron/run-crons >/dev/null 2>&1

    Notice that the very first character of the specification is a dash character (-), and this is NOT a typo. Can somebody explain what the time spec '-*/15' means? BTW, the stuff seems to be running fine. Thanks
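
    A hedged note: as I understand SuSE's crontab(5) (worth verifying there), the dash is an extension prefix rather than part of the time field; it tells cron not to log the job's execution to syslog. The entry would then behave exactly like:

        */15 * * * *   root  test -x /usr/lib/cron/run-crons && /usr/lib/cron/run-crons >/dev/null 2>&1

    i.e. run every 15 minutes, just without the log noise.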

  • How to route broadcast packets from machine with two network interfaces on same subnet

    - by Syam
    I run RHEL 5 and have two NICs on one machine connected to the same subnet:

        eth0  192.168.100.10
        eth1  192.168.100.11

    My application needs to receive and transmit UDP packets (both unicast & broadcast) via these interfaces. I've found the way to handle the ARP problem, and I've added routes to handle the routing problem:

        ip rule add from 192.168.100.10 lookup 10
        ip route add table 10 default src 192.168.100.10 dev eth0

    (and similarly, table 11 for eth1). The problem is that only unicast packets get routed properly; broadcast packets always go out through eth0. I tried removing the rules for 192.168.100.0 & 192.168.100.255 from table 255 and adding them to my tables, but then I see ARP requests being given out for packets to 192.168.100.255 (obviously, no nodes respond and nobody gets any data). Due to several techno-political issues, I'm stuck with this configuration and can't change subnets or try something different. I've tried SO_BINDTODEVICE and it works, but I'd prefer a solution that doesn't need my application to run as root. Is there a way to get this working? Any help is highly appreciated.
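
    One hedged idea: the ARP requests for 192.168.100.255 suggest the per-interface tables lack routes of type broadcast, so the kernel treats the broadcast address as an ordinary next hop. Adding explicit broadcast routes to each table may fix it:

        ip route add broadcast 192.168.100.255 dev eth0 scope link src 192.168.100.10 table 10
        ip route add broadcast 192.168.100.255 dev eth1 scope link src 192.168.100.11 table 11

    Combined with the existing from rules, an application that bind()s to one interface's address (no root needed, unlike SO_BINDTODEVICE) should then have its broadcasts leave through that interface.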

  • The best way to hide data: encryption, connection, hardware

    - by Tico Raaphorst
    So to say, if I have a VPS which I own now, and I wanted to make the most secure and stable system that I can, how would I do that? Just to try, I installed Debian 7 with LVM encryption via the installer: you get two partitions, a /boot and an encrypted partition. When booting, you are prompted for the password to unlock the encrypted partition, which then holds further partitions like /home, /usr and swap space, which mount automatically. Now, I do need to fill in that password over a VNC-SSL connection via the control panel website of the VPS hoster, so they can see my disk encryption password if they want to; they have the option to look at my data, right? (See: "Data encryption on VPS: is it possible to have a 100% secure virtual private server?")

    So let's say instead that I have my server sitting well locked next to me, with the following layers covered:

    - BIOS (you have to replace the BIOS)
    - RAID (you have to unlock the RAID config)
    - disk (you have to unlock the disk encryption)
    - files (files are stored in encrypted zip/tar archives, which sit inside some other encrypted file mounted as a partition)

    all on the same system. It would be slow, but the encryption would be extremely difficult to crack, so to say, if you stole the server. Then I would only need to make connections like SSH safer with single-use passwords, and block all incoming and outgoing connections except one "exception" for myself, and maybe a second one in case I somehow lose my identity for the first. What other overkill-but-realistic security options are available? I have heard about SELinux?
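
    On the "block everything except one exception" idea, a minimal default-deny firewall sketch (the trusted admin address is hypothetical):

        # iptables: drop everything inbound except SSH from one trusted address
        iptables -P INPUT DROP
        iptables -P FORWARD DROP
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -p tcp -s 203.0.113.7 --dport 22 -j ACCEPT

    For the single-use passwords, OTPW or a PAM one-time-password module is the usual route; SELinux addresses a different layer entirely (mandatory access control once someone is already on the system).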
