Search Results

Search found 26179 results on 1048 pages for 'linux from scratch'.

Page 446/1048

  • How to serve a .php file locally?

    - by isomorphismes
    This part of the PHP documentation says that I should be able to make a small, fake server to serve up some local .php files in a folder using:

      php -S localhost:8000

    But when I try that I get the following error:

      Usage: php [options] [-f] <file> [--] [args...]
             php [options] -r <code> [--] [args...]
             php [options] [-B <begin_code>] -R <code> [-E <end_code>] [--] [args...]
             php [options] [-B <begin_code>] -F <file> [-E <end_code>] [--] [args...]
             php [options] -- [args...]
             php [options] -a

    What am I doing wrong?
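
    Likely relevant here (though not confirmed in the question): the usage banner above has no -S option, which is what the PHP CLI prints before 5.4, the release that introduced the built-in web server. A minimal sketch of checking, with the docroot path as a placeholder:

      # confirm which binary the shell runs and its version; -S requires PHP >= 5.4
      which php
      php -v

      # with a new enough CLI, serve the chosen folder on port 8000
      cd /path/to/site && php -S localhost:8000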

    Read the article

  • ZFS & Deduplicating FLAC Data

    - by jasongullickson
    I'm experimenting with using ZFS to deduplicate a large library of FLAC files. The purpose of this is twofold:

      1. Reduce storage utilization
      2. Reduce the bandwidth needed to sync the library with cloud storage

    Many of these files are of the same music tracks but from different physical media. This means that for the most part they are the same and usually close to the same size, which makes me think that they should benefit from block-level deduplication. However, in my testing I'm not seeing good results. When I create a pool and add three of these tracks (identical songs from different source media), zpool list reports a dedup ratio of 1.00. If I copy all of the files (making exact duplicates of the three), the ratio climbs, so I know dedup is enabled and functioning; it's just not finding any duplication in the original collection of files. My first thought was that perhaps some of the variable header data (metadata tags, etc.) might be mis-aligning the bulk of the data in these files (the audio frames), but even making the header data consistent across the three files doesn't seem to have any impact on deduplication. I'm considering taking alternate routes (testing other dedup filesystems as well as some custom code), but since we're already using ZFS and I like the ZFS replication options, I'd prefer to use ZFS dedup for this project; perhaps it's simply not capable of working well with this sort of data. Any feedback regarding tuning that might improve dedup performance for this sort of dataset, or confirmation that ZFS dedup is not the right tool for this job, is appreciated.
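
    For reference, a rough sketch of how dedup ratios are usually inspected and tuned on ZFS; the pool/dataset names and the 16K recordsize are placeholders, and dedup only applies to data written after it is enabled:

      # estimate how well dedup would do on existing data, without enabling it
      zdb -S tank

      # dedup matches whole records, so record size affects how often blocks line up
      zfs set recordsize=16K tank/flac
      zfs set dedup=on tank/flac

      # ratio actually achieved for data written since dedup was turned on
      zpool list -o name,size,alloc,dedupratio tank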

    Read the article

  • Choose source interface for PPTP VPN on Ubuntu

    - by Emyl
    I have an Ubuntu VirtualBox guest with two network interfaces, eth0 (NAT) and eth1 (bridged). I want to connect to a PPTP VPN using eth1, but I don't know how to specify which interface to use. If I just try:

      sudo pon myvpn nodetach

    it fails with:

      Using interface ppp0
      Connect: ppp0 <--> /dev/pts/1
      Modem hangup
      Connection terminated.

    Looking at the routes with route seems to indicate that eth0 is being used:

      x.x.x.x.no   10.0.2.2   255.255.255.255   UGH   0   0   0   eth0
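
    One common workaround (an assumption on my part, not something from the question) is to pin a host route to the VPN server through eth1 before dialling, so the PPTP control connection leaves the bridged interface; the addresses below are placeholders:

      # send only the VPN endpoint via eth1 (203.0.113.10 / 192.168.1.1 are placeholders)
      sudo ip route add 203.0.113.10/32 via 192.168.1.1 dev eth1

      # dial as before, then confirm which interface reaches the endpoint
      sudo pon myvpn nodetach
      ip route get 203.0.113.10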

    Read the article

  • Tail and wildcard characters

    - by Mitch
    I want to get the last 10 lines of multiple files. I know they all end with "-access_log", so I tried:

      tail -10 *-access_log

    But this gives me an error, whereas:

      tail -10 file-*

    gives me the output I'd expect. I would think this probably has more to do with Bash than with tail. However, commands like:

      cat *-access_log

    work fine. Any suggestions?
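
    The exact error isn't quoted, but one frequent culprit is the obsolescent -NUM option form, which not every tail accepts with multiple file operands; a sketch using the portable spelling, plus a way to see what the glob expands to:

      # portable form: -n with an explicit count
      tail -n 10 *-access_log

      # show exactly what the shell hands to tail
      printf '%s\n' *-access_log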

    Read the article

  • udev rule not being executed

    - by jyavenard
    I have the following device, which udevadm lists as:

      looking at device '/devices/pci0000:00/0000:00:1c.7/0000:09:00.0/usb6/6-2/6-2:1.0/ttyUSB0/tty/ttyUSB0':
        KERNEL=="ttyUSB0"
        SUBSYSTEM=="tty"
        DRIVER==""

      looking at parent device '/devices/pci0000:00/0000:00:1c.7/0000:09:00.0/usb6/6-2/6-2:1.0/ttyUSB0':
        KERNELS=="ttyUSB0"
        SUBSYSTEMS=="usb-serial"
        DRIVERS=="pl2303"
        ATTRS{port_number}=="0"

      looking at parent device '/devices/pci0000:00/0000:00:1c.7/0000:09:00.0/usb6/6-2/6-2:1.0':
        KERNELS=="6-2:1.0"
        SUBSYSTEMS=="usb"
        DRIVERS=="pl2303"
        ATTRS{bInterfaceNumber}=="00"
        ATTRS{bAlternateSetting}==" 0"
        ATTRS{bNumEndpoints}=="03"
        ATTRS{bInterfaceClass}=="ff"
        ATTRS{bInterfaceSubClass}=="00"
        ATTRS{bInterfaceProtocol}=="00"
        ATTRS{supports_autosuspend}=="1"

    So I created this rule:

      KERNEL=="ttyUSB0", SUBSYSTEM=="tty", SUBSYSTEMS=="usb-serial", DRIVERS=="pl2303", KERNELS=="6-2:1.0", SYMLINK+="cc128serial"

    This doesn't work. However, if I use:

      KERNEL=="ttyUSB0", SUBSYSTEM=="tty", SUBSYSTEMS=="usb-serial", DRIVERS=="pl2303", SYMLINK+="cc128serial"

    then it works. I tried with KERNELS=="6*" etc. to no avail. Any ideas? Thanks.
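
    A plausible explanation (udev matching semantics, not something stated in the question) is that all of the SUBSYSTEMS/KERNELS/DRIVERS/ATTRS keys in one rule must match a single parent device, and KERNELS=="6-2:1.0" belongs to a different parent than SUBSYSTEMS=="usb-serial". A sketch of how such a rule is usually debugged:

      # list every parent device and the keys available to match against
      udevadm info -a -n /dev/ttyUSB0

      # dry-run the rules against the device and look for the symlink assignment
      udevadm test "$(udevadm info -q path -n /dev/ttyUSB0)" 2>&1 | grep -i cc128serial

      # reload rules and replay the event after editing the rule file
      sudo udevadm control --reload-rules
      sudo udevadm trigger --subsystem-match=tty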

    Read the article

  • Debian lenny: problem modifying a static IP

    - by supertiti
    Hello all, I'm trying to change the static IP assigned to a Debian VM. I modified the /etc/network/interfaces file, but Debian doesn't seem to like the new settings. Currently the machine's IP is set to 192.168.1.136, and I want it to be 192.168.1.8. Here's my modified /etc/network/interfaces:

      auto lo
      iface lo inet loopback

      allow-hotplug eth0
      auto eth0
      iface eth0 inet static
          address 192.168.1.8
          gateway 192.168.1.1
          netmask 255.255.255.0
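
    For what it's worth, edits to /etc/network/interfaces only take effect once the interface is cycled (or networking is restarted); a minimal sketch:

      # re-read the edited stanza for eth0
      sudo ifdown eth0 && sudo ifup eth0

      # or, on lenny-era Debian, restart networking outright
      sudo /etc/init.d/networking restart

      # confirm the address actually changed
      ifconfig eth0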

    Read the article

  • Routing Traffic With OpenVPN

    - by user224277
    A few minutes ago I configured my VPN server, and I can now connect to the VPN, but all traffic still goes through my normal home network. In my OpenVPN application I've got this information:

      Server IP: **.185.***.*10
      Client IP: 10.8.0.6
      Traffic: 7.3 KB in, 5.6 KB out
      Connected: 10 June 2014 19:21:59

    So everything is connected, but how can I set things up on Windows 7 so that all traffic goes through the OpenVPN network card?

    Client settings:

      client
      dev tun
      proto udp
      # enter the server's hostname
      # or IP address here, and port number
      remote **.185.***.*10 1194
      resolv-retry infinite
      nobind
      persist-key
      persist-tun
      # Use the full filepaths to your
      # certificates and keys
      ca ca.crt
      cert user1.crt
      key user1.key
      ns-cert-type server
      comp-lzo
      verb 6

    Server settings:

      port 1194
      proto udp
      dev tun
      # the full paths to your server keys and certs
      ca /etc/openvpn/keys/ca.crt
      cert /etc/openvpn/keys/server.crt
      key /etc/openvpn/keys/server.key
      dh /etc/openvpn/keys/dh2048.pem
      cipher BF-CBC
      # Set server mode, and define a virtual pool of IP
      # addresses for clients to use. Use any subnet
      # that does not collide with your existing subnets.
      # In this example, the server can be pinged at 10.8.0.1
      server 10.8.0.0 255.255.255.0
      # Set up route(s) to subnet(s) behind
      # OpenVPN server
      push "dhcp-option DNS 8.8.8.8"
      push "dhcp-option DNS 8.8.4.4"
      ifconfig-pool-persist /etc/openvpn/ipp.txt
      keepalive 10 120
      status openvpn-status.log
      verb 6

    And sysctl:

      net.ipv4.ip_forward=1

    Thank you for your time and help.
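
    Routing all client traffic through the tunnel is normally done from the server config rather than from Windows; a hedged sketch of the two pieces usually involved (eth0 as the server's outbound interface is an assumption):

      # add to the server config above: push a default route to clients,
      # keeping their locally learned route to the VPN server itself
      push "redirect-gateway def1 bypass-dhcp"

      # on the server's shell: NAT the tunnel subnet out of the public interface
      iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE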

    Read the article

  • Is there a BSD equivalent to "!!"?

    - by CT
    I often find myself issuing a command that I do not have the proper elevated privileges for. On Ubuntu I could use:

      sudo !!

    This reissues the previous command with sudo privileges. Is there an equivalent on OpenBSD?

    Edit: I should have been more specific about which version of OpenBSD. I am using OpenBSD 4.8, where sudo seems to be installed by default. I have already created a user besides root and edited my sudoers file to allow that user to use sudo. My question is: is there already a built-in shortcut like "!!" for reusing the previous command?

    Read the article

  • "No such file or directory"?

    - by user1509541
    OK, so I have a VDS lying around, and I thought I would turn it into a TF2 game server. I connect to my server through PuTTY and use wget to download the package hldsupdatetool.bin from steampowered.com. When I go to run it, it says "No such file or directory". When I use ls to see what files are in the directory, it lists hldsupdatetool.bin as being there. So why is it saying it's not there? This has been a headache for the past 2 days. It's returning:

      root@10004:~# wget http://www.steampowered.com/download/hldsupdatetool.bin
      --2012-07-08 06:04:49--  http://www.steampowered.com/download/hldsupdatetool.bin
      Resolving www.steampowered.com... 208.64.202.68
      Connecting to www.steampowered.com|208.64.202.68|:80... connected.
      HTTP request sent, awaiting response... 200 OK
      Length: 3513408 (3.4M) [application/octet-stream]
      Saving to: “hldsupdatetool.bin.3”

      100%[======================================>] 3,513,408   2.45M/s   in 1.4s

      2012-07-08 06:04:51 (2.45 MB/s) - “hldsupdatetool.bin.3” saved [3513408/3513408]

      root@10004:~# chmod +x hldsupdatetool.bin.3
      root@10004:~# ./hldsupdatetool.bin.3
      -bash: ./hldsupdatetool.bin.3: No such file or directory
      root@10004:~#

    More:

      root@10004:~# ls
      ffmpeg-packages     hldsupdatetool.bin.1  hldsupdatetool.bin.3
      hldsupdatetool.bin  hldsupdatetool.bin.2  setup.sh
      root@10004:~# ls -la
      total 13828
      drwx------  4 root root    4096 Jul  8 06:04 .
      drwxr-xr-x 21 root root    4096 Jul  8 05:57 ..
      -rw-------  1 root root    8799 Jul  8 06:26 .bash_history
      -rw-r--r--  1 root root     570 Jan 31  2010 .bashrc
      -rw-r--r--  1 root root       4 Jul  2 19:39 .custombuild
      drwxr-xr-x  2 root root    4096 Jul  4 18:49 ffmpeg-packages
      ---x--xrwx  1 root root 3513408 Sep  2  2005 hldsupdatetool.bin
      -rwxr-xr-x  1 root root 3513408 Sep  2  2005 hldsupdatetool.bin.1
      -rw-r--r--  1 root root 3513408 Sep  2  2005 hldsupdatetool.bin.2
      -rwxr-xr-x  1 root root 3513408 Sep  2  2005 hldsupdatetool.bin.3
      -rw-r--r--  1 root root     140 Nov 19  2007 .profile
      -rw-------  1 root root    1024 Jul  2 19:49 .rnd
      -rwxr-xr-x  1 root root   38866 May 23 22:02 setup.sh
      drwxr-xr-x  2 root root    4096 Jul  2 19:44 .ssh
      root@10004:~#
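
    The listing shows the file is clearly there, so a common (though unconfirmed here) cause of this exact message is a 32-bit ELF binary on a 64-bit system that lacks the 32-bit loader; a sketch of how that is usually checked and fixed on Debian-family systems (package names vary by release):

      # see which architecture the binary targets and which interpreter it wants
      file ./hldsupdatetool.bin.3

      # on a 64-bit Debian/Ubuntu host, add the 32-bit C library
      # (older releases used the ia32-libs package instead)
      sudo dpkg --add-architecture i386
      sudo apt-get update && sudo apt-get install libc6:i386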

    Read the article

  • curl makes a site work externally once run locally (apache)

    - by Kyle_at_NU
    Currently, when I visit mysite.mydomain.com from outside the local network, the browser shows:

      This is the default web page for this server. Nothing to see here.

    This is not even the "It works!" Apache page. Then if, locally (Apache2 on Ubuntu Server 12.04 with curl installed), I type:

      curl mysite.mydomain.com

    I get the site I expect. And the next time I visit the page externally, I get the correct site. Has anyone seen this before? Tips/suggestions?

    Read the article

  • What steps should I take to secure Tomcat 6.x?

    - by PAS
    I am in the process of setting up a new Tomcat deployment and want it to be as secure as possible. I have created a 'jakarta' user and have jsvc running Tomcat as a daemon.

    Any tips on directory permissions and such to limit access to Tomcat's files? I know I will need to remove the default webapps (docs, examples, etc.); are there any best practices I should be using here? What about all the config XML files, any tips there?

    Is it worth enabling the security manager so that webapps run in a sandbox? Has anyone had experience setting this up?

    I have seen examples of people running two instances of Tomcat behind Apache. It seems this can be done using mod_jk or with mod_proxy; any pros/cons of either? Is it worth the trouble?

    In case it matters, the OS is Debian lenny. I am not using apt-get because lenny only offers Tomcat 5.5 and we require 6.x. Thanks!
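
    On the directory-permission point specifically, a commonly suggested layout (a sketch only; /opt/tomcat and the jakarta group are assumptions) keeps the tree owned by root and writable by the service account only where Tomcat actually writes:

      # install tree owned by root, readable by the service group, closed to others
      sudo chown -R root:jakarta /opt/tomcat
      sudo chmod -R o-rwx /opt/tomcat

      # only the writable directories belong to the service user
      sudo chown -R jakarta:jakarta /opt/tomcat/logs /opt/tomcat/temp /opt/tomcat/work /opt/tomcat/webapps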

    Read the article

  • Which user is the website host?

    - by Kossel
    I'm learning about servers, and I'm configuring nginx, MySQL, PHP and WordPress. The server distro is Debian 6. I created a new user, and I'd like each user to own their own site folder, /var/www/site.one, so I ran:

      chown -R kossel:kossel site.one

    My problem is that WordPress only works if I chmod 644 wp-config.php, which everyone can read; the WordPress site suggests that file should be 640. My question is: when someone opens mydomain.com, WordPress has to read the wp-config.php file, but which user is it actually using to read that file? root? The user kossel? Someone else? And how can I properly set the permissions or ownership?
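
    The file is read by whichever user the PHP worker runs as, which on a Debian nginx + PHP setup is typically www-data rather than root or the login user (an assumption worth verifying); a sketch of checking and then tightening things:

      # see which user the PHP and nginx processes actually run as
      ps aux | grep -E 'php|nginx' | grep -v grep

      # give that user group read access so 640 is enough (www-data is an assumption)
      sudo chown kossel:www-data /var/www/site.one/wp-config.php
      sudo chmod 640 /var/www/site.one/wp-config.php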

    Read the article

  • Why am I getting "dhcpcd will not work correctly unless run as root"?

    - by user330317
    I have installed Arch Linux and GNOME on VirtualBox. I had no problem connecting to the internet, but now, after installing GNOME and rebooting, there is no internet connection. I have tried following the instructions from the Arch wiki, but I can't figure out the problem. Please help.

      host-63drhd% sudo netctl status enp0s3
      ● netctl@enp0s3.service - Networking for netctl profile enp0s3
         Loaded: loaded (/usr/lib/systemd/system/netctl@.service; static)
         Active: inactive (dead)
           Docs: man:netctl.profile(5)
      host-63drhd% sudo netctl enable enp0s3
      Profile 'enp0s3' does not exist or is not readable
      host-63drhd% sudo dhcpcd
      dhcpcd[1486]: sending commands to master dhcpcd process
      host-63drhd% dhcpcd
      dhcpcd[1543]: control_open: Permission denied
      dhcpcd[1543]: dhcpcd will not work correctly unless run as root
      dhcpcd[1543]: open `/run/dhcpcd.pid': Permission denied
      dhcpcd[1543]: control_start: Permission denied
      dhcpcd[1543]: version 6.3.2 starting
      dhcpcd[1543]: enp0s3: if_init: Permission denied
      dhcpcd[1543]: enp0s8: if_init: Permission denied
      dhcpcd[1543]: no valid interfaces found
      dhcpcd[1543]: no interfaces have a carrier
      dhcpcd[1543]: forked to background, child pid 1544
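
    The second dhcpcd run in the transcript was started without sudo, which is exactly what the "unless run as root" message is complaining about; a minimal sketch of the usual Arch approaches:

      # run the client as root for one interface
      sudo dhcpcd enp0s3

      # or hand it to systemd so it comes up at boot
      sudo systemctl enable dhcpcd@enp0s3.service
      sudo systemctl start dhcpcd@enp0s3.service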

    Read the article

  • How to copy a file to a remote server using the command line?

    - by cool_cs
    I am trying to copy a file from my desktop to my remote server using the sudo command. I am doing this from the remote machine, since I know the password for that machine and I do not have a password for my local machine.

      sudo scp donj@localhost:/Desktop/my.cnf user@remotemachine:/app/MySQL/my.cnf

    This does not work, however. I want to overwrite the my.cnf file in the MySQL directory. I tried the su command, but I do not have the password to become a superuser.
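
    One common pattern (assuming the copy can be started from the desktop, which is not stated in the question) is to stage the file somewhere the unprivileged remote user can write and then move it into place with sudo on the server:

      # run on the desktop: copy to a location the remote user can write to
      scp ~/Desktop/my.cnf user@remotemachine:/tmp/my.cnf

      # then, on the remote machine, move it into place with elevated rights
      sudo mv /tmp/my.cnf /app/MySQL/my.cnf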

    Read the article

  • Change ip route metric

    - by notphunny
    I'm constantly switching between the eth0 and wlan0 interfaces on my Arch Linux machine, because I often flash OpenWrt firmware images onto my second router (which isn't connected to anything else). So I have a problem with my routes when I'm connected to my WLAN and want to connect over Ethernet to that router. Both routers are on 192.168.1.1/24, and after switching to my Ethernet profile the eth0 route becomes the default one (which is OK for the time being), because of its smaller metric, I guess. So I'm interested in how I can change route metrics so that my applications stay connected to the internet (through wlan0). Maybe the solution is not to set a default gateway on the Ethernet profile at all; even so, I'd still like to know how to change the metric, or the default route if there is more than one.
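
    For reference, a sketch of adjusting this by hand with iproute2 (interface names and the gateway are taken from the question; the metric value is arbitrary):

      # show the competing default routes and their metrics
      ip route show default

      # re-add the Ethernet default with a worse metric so the wlan0 default wins
      sudo ip route del default dev eth0
      sudo ip route add default via 192.168.1.1 dev eth0 metric 200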

    Read the article

  • Hard Reset USB in Ubuntu 10.04

    - by Cory
    I have a USB device (a modem) that is really finicky. Sometimes it works fine, but other times it refuses to connect. The only fix I have found once it gets into a bad state is to physically unplug the device and plug it back in. However, I don't always have physical access to the machine it's plugged into, so I'm looking for a way to do this from the command line. This post suggests running:

      sudo modprobe -w -r usb_storage; sudo modprobe usb_storage

    However, I get an "unknown option -w" error. This slightly modified command:

      sudo modprobe -r usb_storage

    fails with the message:

      FATAL: Module usb_storage is in use.

    If I try to kill -9 the processes marked [usb-storage] beforehand, they refuse to die (I think because they are deeply tied to the kernel). Does anyone know of a way to do this?

    NOTE: I cross-posted this on Server Fault as I didn't know which site was more appropriate. I will delete and/or link whichever one is answered first.
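
    An alternative that doesn't require unloading usb_storage at all (a sketch; the device id is a placeholder discovered from dmesg or sysfs) is to unbind and rebind the device at the USB level, which behaves much like a physical replug:

      # find the device's bus-port id (entries look like "1-4" or "2-1.3")
      ls /sys/bus/usb/drivers/usb/
      dmesg | tail

      # unbind and rebind it ("1-4" is a placeholder)
      echo '1-4' | sudo tee /sys/bus/usb/drivers/usb/unbind
      echo '1-4' | sudo tee /sys/bus/usb/drivers/usb/bind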

    Read the article

  • sudoers entries

    - by Pochi
    Is there a way to have a sudoers entry that allows executing only a particular command, without any extra arguments? I can't seem to find a resource that describes how command matching works in sudoers. Say I want to grant sudo for /path/to/executable arg. Does an entry like the following:

      user ALL=(ALL) /path/to/executable arg

    strictly allow sudo access to a command exactly matching that? That is, it doesn't grant the user sudo privileges for /path/to/executable arg arg2?
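
    For what it's worth, sudoers treats listed arguments as an exact match of the entire argument list, and an empty string means no arguments are allowed; a sketch of both forms (paths are placeholders):

      # exactly "arg" and nothing else is permitted
      user ALL=(ALL) /path/to/executable arg

      # the bare command, with no arguments at all
      user ALL=(ALL) /path/to/executable ""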

    Read the article

  • Free web gallery installation that can use existing directory hierarchy in filesystem?

    - by user1338062
    There are several different free software gallery projects (Gallery, Coppermine, etc.), but as far as I know each of them creates a copy of imported images in its internal storage, be it a directory structure or a database. Is there any gallery software that would keep the existing directory hierarchy of media files (images, videos) as-is, and just store their metadata in a database? I guess at least various NAS solutions ship with software like this.

    Read the article

  • Improving sound quality with remote ESD server

    - by cuu508
    Hi, I'm investigating low-budget ways to get audio from my PC (Ubuntu) to my hi-fi without wires. I'm currently testing a setup where an Asus WL-500gP wireless router runs the ESD daemon and has a USB sound card attached, which is then plugged into the hi-fi. I'm testing playback on the PC with mpg123-esd and with Spotify under Wine. The sound is there, and the latency is unexpectedly low, but I also hear occasional clicks and some distortion from time to time. I suppose that's because of the low latency and the wireless streaming of uncompressed audio: any packet drop, the CPU temporarily being busy, etc. will cause clicks in the sound output. Is there a way around this problem, perhaps by increasing the latency / buffer size somehow? Streaming using the Shoutcast protocol seems to be a way out, but I have a feeling that would be a complex and brittle setup.

    Read the article

  • Started an application through SSH, command line now gone, what happens next?

    - by Chris Dutrow
    Context: this is a very basic question. I'm using PuTTY and SSH for the first time to do some serious server setup, and I've run into a situation where I have started a process that I do not want to stop. The process is the Gunicorn WSGI HTTP server (running on CentOS 6.3). The command I used to start the process is (as per their quick start):

      gunicorn -w 4 myapp:app

    At this point in the work session, I have lost the command prompt. This must be such a non-issue that it doesn't even enter an experienced user's consciousness, but unfortunately at my level of experience I am left with several fundamental questions:

      1. Does the fact that I have lost the command prompt mean that the process is still running?
      2. How do I get back to the command prompt without killing the process?
      3. How do I come back and monitor the process later?
      4. How do I eventually kill the process?

    Any help is appreciated, thanks so much!
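
    A hedged sketch of the usual answers: the foreground process has taken over the terminal, and it can either be backgrounded after the fact or started detached (the PID in the kill line is a placeholder):

      # in the stuck session: press Ctrl+Z, then resume in the background and detach
      bg
      disown

      # or start it detached in the first place (-D/--daemon)
      gunicorn -D -w 4 myapp:app

      # later: find the processes and stop the master
      pgrep -lf gunicorn
      kill <master-pid>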

    Read the article

  • Is data=journal on a separate device on Ext4 as good as using a RAID controller with battery backed cache for file system consistency?

    - by Jeff Strunk
    It seems to me that data=journal prevents file system inconsistency in the case of power failure. Using it with a dedicated journal device mitigates the performance penalty of writing the data twice. A power outage would still lose the data that is currently being written to the journal, but the file system on disk would always be consistent. If that amount of loss is acceptable, is a RAID controller with battery backed cache really worthwhile?
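
    For context, a sketch of how the dedicated journal device mentioned above is attached to an ext4 filesystem (device names are placeholders, and the tune2fs steps require the filesystem to be unmounted):

      # format the dedicated device as an external journal (block sizes must match)
      mke2fs -O journal_dev /dev/sdc1

      # drop the internal journal, attach the external one, mount with full data journalling
      tune2fs -O ^has_journal /dev/sdb1
      tune2fs -J device=/dev/sdc1 /dev/sdb1
      mount -o data=journal /dev/sdb1 /srv/data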

    Read the article
