Search Results

Search found 26263 results on 1051 pages for 'linux guest'.

Page 344/1051

  • CentOS 5.5 installation on disk image

    - by Dima
    Today I use a kickstart script to install CentOS 5.5. I would like to install CentOS in a different way:
    1. Create a disk image (using the dd command)
    2. Create a filesystem on this disk image using mkfs.ext3
    3. Install CentOS on this filesystem
    4. Make the disk image bootable (using grub-install)
    5. Copy the disk image to the physical hard disk (using dd)
    I know how to do all of these steps except step 3. Is it possible? If so, how can I install CentOS onto the disk image?
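
    A minimal sketch of how steps 1-3 might fit together using a loop device, so that an installer (or rpm with --root) can target the image as if it were a disk; the paths, image size, and the rpm line are illustrative assumptions, not taken from the question:

        dd if=/dev/zero of=/tmp/centos.img bs=1M count=4096   # step 1: 4 GiB image file
        losetup /dev/loop0 /tmp/centos.img                    # expose the image as a block device
        mkfs.ext3 /dev/loop0                                  # step 2: filesystem on the image
        mkdir -p /mnt/image
        mount /dev/loop0 /mnt/image                           # step 3 installs into this tree,
        # e.g. by populating it with rpm --root /mnt/image -ivh <packages>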

    Read the article

  • How to clean up an unprocessed orphan inode list?

    - by bmk
    I tried to remount a filesystem that was previously mounted read-only as read-writeable:

        mount -o remount,rw /mountpoint

    Unfortunately it did not work:

        mount: /mountpoint not mounted already, or bad option

    dmesg reports:

        [2570543.520449] EXT4-fs (dm-0): Couldn't remount RDWR because of unprocessed orphan inode list.  Please umount/remount instead

    A umount does not work either:

        umount /mountpoint
        umount: /mountpoint: device is busy.
                (In some cases useful info about processes that use
                 the device is found by lsof(8) or fuser(1))

    Unfortunately, neither lsof nor fuser shows any process accessing anything located under the mount point. So - how can I clean up this unprocessed orphan inode list to be able to mount the filesystem again without rebooting the computer?
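
    One commonly suggested sequence, sketched below on the assumption that the device can eventually be released: a lazy unmount detaches the mount point immediately and cleans up once the last user goes away, after which e2fsck can process the orphan inode list (only run it once the device is truly unused).

        fuser -vm /mountpoint       # double-check for holders, verbosely
        umount -l /mountpoint       # lazy unmount: detach now, finish later
        e2fsck -f /dev/dm-0         # processes the orphan inode list, among other checks
        mount /dev/dm-0 /mountpoint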

    Read the article

  • Squid parent cache for text/html only

    - by Salvador
    How do I configure Squid to send only text/html requests to the parent cache? Right now I am using:

        cache_peer 127.0.0.1 parent 8080 0 no-query no-digest

    On the other hand, I get a lot of direct requests that do not use the parent proxy: some queries go FIRST_UP_PARENT and some go DIRECT. How do I tell Squid to always use the parent for text/html? BTW, it is a transparent proxy. I have tried:

        cache_peer 127.0.0.1 parent 8080 0 no-query no-digest
        acl elhtml req_mime_type -i ^text/html$
        acl elhtml req_mime_type -i text/html
        cache_peer_access 127.0.0.1 allow elhtml
        cache_peer_access 127.0.0.1 deny all

    and it does not work. Thanks in advance for the help.
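
    A sketch of one possible refinement: cache_peer_access only says the peer may be used, while never_direct forbids Squid from going DIRECT for matching traffic, which is what the FIRST_UP_PARENT/DIRECT mix suggests is missing. Note also that req_mime_type matches the Content-Type header of the request itself (mostly relevant for uploads), so an ACL on the URL or the Accept header may be closer to what is intended here.

        acl elhtml req_mime_type -i ^text/html$
        cache_peer 127.0.0.1 parent 8080 0 no-query no-digest
        cache_peer_access 127.0.0.1 allow elhtml
        cache_peer_access 127.0.0.1 deny all
        never_direct allow elhtml    # never contact the origin directly for these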

    Read the article

  • What does 999 mean here?

    - by Brij Raj Singh
    I did ll on one of my directories. What does the 999 mean here?

        drwxr-xr-x 9 git       mysql 4096 Nov 12 14:41 gitlab/
        drwxr-xr-x 6 gitlab_ci 999   4096 Jun 28 13:36 gitlabci/

    Could these directories be fully owned, i.e. gitlab owned by git and gitlabci owned by gitlab_ci? I want something like this:

        drwxr-xr-x 9 git       git       4096 Nov 12 14:41 gitlab/
        drwxr-xr-x 6 gitlab_ci gitlab_ci 4096 Jun 28 13:36 gitlabci/
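
    For what it's worth: a bare 999 in that column is a numeric group ID with no matching name in /etc/group, so ls falls back to printing the raw GID. A sketch of one way to get the desired listing, assuming a gitlab_ci group should exist (the group name here mirrors the user name; that choice is illustrative):

        groupadd gitlab_ci               # create the missing group
        chgrp -R gitlab_ci gitlabci/     # hand the tree to that group
        chown -R git:git gitlab/         # and set user:group on the other tree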

    Read the article

  • Monitoring HP and Dell hardware in Gentoo

    - by ewwhite
    I'm working in an environment that features a large number of Gentoo servers running on HP ProLiant and Dell PowerEdge equipment. While I've moved some of these systems to RedHat or CentOS for consistency, I'm still left with a good number of systems that will remain Gentoo. One of the issues I see with the Gentoo arrangement is the lack of vendor-supported hardware monitoring. There doesn't seem to be an equivalent of the HP ProLiant Support Pack or Dell's agents for Gentoo. Is this simply something that you give up when using this distribution? How do you monitor hardware health and the like on Gentoo systems?
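
    In the absence of vendor agents, a generic baseline is still possible with IPMI and SMART, which most ProLiant and PowerEdge generations expose; a sketch (the portage package names are assumptions):

        emerge ipmitool smartmontools    # assumed package names
        ipmitool sdr elist               # sensor readings: temperatures, fans, PSUs
        ipmitool sel list                # hardware event log
        smartctl -H /dev/sda             # per-disk health summary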

    Read the article

  • Fedora installation with software repository on DVD does not work

    - by Raks
    I bought a new assembled PC with a Core i3-2120 processor and an Intel H61 motherboard and was trying to install Fedora 16 from a DVD. This DVD contains all the packages, so the installation does not need to download packages from the internet. I have used this DVD to install Fedora 16 offline many times, though on machines with different hardware configurations. But on this new machine, when the installation reaches the stage where it asks for software repository selection and I select CD/DVD, the system fails to read the media and throws up an error that it cannot detect the media. The LED on the DVD writer also indicates that the DVD is not being read. Now, there is a problem with neither the DVD nor the DVD drive, because the installation started from the DVD itself. So what could the problem be? Is there anything in the BIOS that could be causing it? Is there any way I could use the packages already on the DVD so that I avoid downloading them from the internet?
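
    One workaround worth trying, assuming the installer accepts the standard anaconda boot options: copy the DVD's ISO onto a USB stick or a local partition and point the installer at it, sidestepping the flaky optical read entirely. The device name and ISO filename below are illustrative:

        repo=hd:/dev/sdb1:/Fedora-16-x86_64-DVD.iso    # added at the installer boot prompt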

    Read the article

  • How can I use the shell to make my mp3s a Shoutcast source?

    - by ChasonDehsotel
    I'm looking to stream a directory of mp3s from my audio source (a Debian server) to my Shoutcast server. The idea is to have an archive playing in the event that no one is broadcasting live. I'm not sure how to continue: I started with extensive Googling and was unable to come up with a solution. Evan Carroll suggested I try here. I appreciate any insight y'all may have. On a side note: "users with less than 100 reputation can't create new tags. The tags 'shoutcast-source shoutcast broadcasting' are new. Try using existing tags instead." -- Who can create these?

    Read the article

  • Router failover not detecting loss of link on outside interface

    - by Matt
    Suppose I have two routers configured in a master/slave configuration. They look something like this (addresses are not real ones):

        123.123.123.10 <===> [eth0] Router 1 (10.1.1.2) [eth1] ===> +----------+
                                                                    | 10.1.1.1 | ===> LAN
        172.123.123.10 <===> [eth0] Router 2 (10.1.1.3) [eth1] ===> +----------+

    10.1.1.1 is the default route for the network (10.1.1.0). What's slightly different in this config from others I've seen is that I don't have an external virtual IP. Also, the 10.1.1.x addresses are, in real life, public IPs (not the private ones shown here). This is more of a router setup than a firewall setup, so I'm not using NAT here. Now, the issue I'm having is that I can't see any way to configure UCARP or VRRP to monitor both eth0 and eth1 and fail over to the backup router should either of them go down. What I'm seeing is that if Router 1 is the master and I unplug eth0 on Router 1, it doesn't fail over to Router 2. However, it will if I instead unplug eth1 of Router 1. In VRRP I see there is a cluster group, but it seems that for this to work you need to have virtual IPs or VRRP instances rather than actual interfaces assigned to it. I hope my explanation is clear. How do I get around this?
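
    A sketch of how this is often handled with keepalived, assuming it is an option here: a single VRRP instance owns the shared gateway 10.1.1.1 on eth1, and track_interface demotes the master if eth0 loses link, so failure of either interface moves the gateway to the backup. The virtual_router_id and priority values are arbitrary choices:

        vrrp_instance VI_INSIDE {
            interface eth1
            virtual_router_id 51
            priority 150                 # lower this on the backup router
            track_interface {
                eth0                     # fail over when the outside link drops
            }
            virtual_ipaddress {
                10.1.1.1
            }
        }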

    Read the article

  • What are some good, free tools to run automated security audits for PHP code?

    - by James Simpson
    I've been looking for some time now and have come up short. The most promising tool I found was Spike PHP, which seems to no longer work. I'm looking to scan my code for potential risks of SQL injection, XSS, etc. I've gone through most of my code manually, but with a few hundred thousand lines of code, I'm sure I missed things. If possible, are there any tools that can be downloaded and run against the code on my local machine rather than installed on the live server (this isn't a hard requirement)?
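
    Not a substitute for a real scanner, but a quick grep triage over the usual sinks can be run locally before anything else; a sketch (the patterns are illustrative and will produce false positives):

        grep -rn --include='*.php' -E 'mysql_query\s*\(.*\$_(GET|POST|REQUEST)' .  # SQL built from raw input
        grep -rn --include='*.php' -E 'echo\s+\$_(GET|POST|REQUEST)' .             # reflected-XSS candidates
        grep -rn --include='*.php' -E '\beval\s*\(' .                              # dynamic code execution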

    Read the article

  • dpkg error code 1

    - by Prithvi Raj
    I am unable to add/remove any packages in Ubuntu Karmic. I keep getting the following:

        Errors were encountered while processing:
         crossplatfromui
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    What do I do to completely remove this package?
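
    A sketch of the usual escalation path, assuming it is the package's maintainer scripts that are failing (crossplatfromui is the package name exactly as the error prints it):

        sudo dpkg --remove --force-remove-reinstreq crossplatfromui
        # if the postrm script itself is what fails, a blunter option is to neutralize it first:
        # sudo rm /var/lib/dpkg/info/crossplatfromui.postrm
        sudo dpkg --purge crossplatfromui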

    Read the article

  • NFSv3 + ACL: mask is gone on clients

    - by Jorge Suárez de Lis
    I'm sharing an NFS folder among a user group. The default umask on the clients is 0700, and this is a problem because newly created files won't be readable/writable by other users. So I'm using ACLs to force a mask of 0770 on the shared folder, and this works on the server, but not on the clients.

        server # getfacl /export/proyectos
        getfacl: Removing leading '/' from absolute path names
        # file: export/proyectos
        # owner: root
        # group: root
        user::rwx
        group::rwx
        other::r-x
        default:user::rwx
        default:group::rwx
        default:mask::rwx
        default:other::r-x

        server # getfacl /export/proyectos/innovacion
        getfacl: Removing leading '/' from absolute path names
        # file: export/proyectos/innovacion
        # owner: root
        # group: proyecto-innovacion
        # flags: ss-
        user::rwx
        group::rwx
        mask::rwx
        other::---
        default:user::rwx
        default:group::rwx
        default:mask::rwx
        default:other::---

    As you can see, the default mask ACLs (and, on the second directory, also a specific mask ACL) are being applied. I mount the whole share on the client:

        172.16.54.56:/export/proyectos on /proyectos type nfs (rw,noatime,rsize=131072,wsize=131072,acregmin=10,acl,nfsvers=3,addr=172.16.54.56)

    But the mask and default:mask ACLs are gone:

        client $ getfacl /proyectos/
        getfacl: Removing leading '/' from absolute path names
        # file: proyectos/
        # owner: root
        # group: root
        user::rwx
        group::rwx
        other::r-x
        default:user::rwx
        default:group::rwx
        default:other::r-x

        client $ getfacl /proyectos/innovacion
        getfacl: Removing leading '/' from absolute path names
        # file: proyectos/innovacion
        # owner: root
        # group: proyecto-innovacion
        # flags: ss-
        user::rwx
        group::rwx
        other::---
        default:user::rwx
        default:group::rwx
        default:other::---

    The mask and default:mask ACLs, the only ones I've set, are missing, so the proposed solution of enforcing the mask won't work for me. Why is this happening?

    Read the article

  • nginx + Varnish + Apache: different IPs in Apache VirtualHosts

    - by zeusgod
    Hi, my idea is to put nginx in front as a proxy that hands requests to Varnish (caching the static content), which in turn proxies to Apache, where there are a lot of VirtualHosts on different IPs. My problem is that I don't know how to configure Varnish to send each request to the correct IP. Let me explain:

        nginx:   listens on 10.10.10.10, 20.20.20.20 and 30.30.30.30, ports 80 and 443
                 proxies to Varnish at 10.10.10.10:8080, 20.20.20.20:8080 and 30.30.30.30:8080
        Varnish: port 8080 - THIS IS THE PROBLEM
                 proxies non-static content to Apache on port 8000 - THIS IS THE OTHER PROBLEM
        Apache2: listens on 10.10.10.10:8000, 20.20.20.20:8000 and 30.30.30.30:8000
                 serves the correct VirtualHost

    That is the idea. When I try it with only one IP, everything works correctly, because Varnish listens on one IP and port and sends to a backend on one IP and port, too. Could you help me configure Varnish, or is there a better way to set up a scenario like this, please?
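
    A sketch of one way to express this in VCL (Varnish 2.x/3.x syntax), assuming Varnish is started with one -a argument per address (e.g. -a 10.10.10.10:8080 -a 20.20.20.20:8080 -a 30.30.30.30:8080) and picks its backend from the address the client connected to:

        backend apache10 { .host = "10.10.10.10"; .port = "8000"; }
        backend apache20 { .host = "20.20.20.20"; .port = "8000"; }
        backend apache30 { .host = "30.30.30.30"; .port = "8000"; }

        acl ip10 { "10.10.10.10"; }
        acl ip20 { "20.20.20.20"; }

        sub vcl_recv {
            if (server.ip ~ ip10)      { set req.backend = apache10; }
            else if (server.ip ~ ip20) { set req.backend = apache20; }
            else                       { set req.backend = apache30; }
        }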

    Read the article

  • Moving symlinks into folders based on id3 tags

    - by Reti
    I'm trying to get my music folder into something sensible. Right now, I have all my music stored in /home/foo, so I have all of the albums soft-linked to ~/music. I want the structure to be ~/music/<artist>/<album>. I've got all of the symlinks into ~/music right now, so I just need to get the symlinks into the proper structure. I'm trying to do this by delving into each symlinked album and getting the artist name with id3info. I can do this, but I can't seem to get it to work correctly.

        for i in $( find -L $i -name "*.mp3" -printf "%h\n")
        do
            echo "$i" # testing purposes
            # find its artist
            # the stuff after 'read file' just cuts up id3info to get just the artist name
            #$artist = find -L $i -name "*.mp3" | read file; id3info $file | grep TPE | sed "s|.*: \(.*\)|\1|" | head -n1
            # move it to the correct artist folder
            #mv "$i" "$artist"
        done

    Now, it does find the correct folder, but every time there is a space in the dir name it makes it a newline. Here's a sample of what I'm trying to do:

        $ ls
        DJ Exortius/
        The Trance Mix 3 Wanderlust - DJ Exortius [TRANCE DEEP VOCAL TECH]@

    I'm trying to mv 'The Trance Mix 3 Wanderlust - DJ Exortius [TRANCE DEEP VOCAL TECH]' into the real directory 'DJ Exortius'. DJ Exortius already exists, so it's just a matter of moving it into the correct directory based on the id3 tag of the mp3 inside. Thanks! PS: I've tried EasyTAG, but when I restructure the album, it moves it out of /home/foo, which is not what I want.
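
    The newline behaviour comes from the unquoted $(find ...): word splitting breaks directory names on whitespace. A sketch of a whitespace-safe rewrite, assuming id3info prints the artist on a line ending in ': Artist' (the grep/sed extraction is carried over from the question as-is):

        #!/bin/bash
        find -L ~/music -name '*.mp3' -printf '%h\n' | sort -u |
        while IFS= read -r dir; do
            file=$(find -L "$dir" -name '*.mp3' | head -n1)   # first mp3 in the album
            artist=$(id3info "$file" | grep TPE | sed 's|.*: \(.*\)|\1|' | head -n1)
            [ -n "$artist" ] || continue                      # skip albums with no artist tag
            mkdir -p ~/music/"$artist"
            mv "$dir" ~/music/"$artist"/
        done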

    Read the article

  • Updating PHP in CentOS

    - by Reza
    I followed this tutorial to update PHP from version 5.3 to 5.4. My distro is CentOS 5.5. After running the following command:

        yum --enablerepo=remi,remi-test install httpd php php-common

    I get the following error:

        --> Missing Dependency: php-common = 5.3.13-1.w5 is needed by package php-zts-5.3.13-1.w5.i386 (installed)
        Error: Missing Dependency: php-common = 5.3.13-1.w5 is needed by package php-zts-5.3.13-1.w5.i386 (installed)
        Error: php53-common conflicts with php-common
         You could try using --skip-broken to work around the problem
         You could try running: package-cleanup --problems
                                package-cleanup --dupes
                                rpm -Va --nofiles --nodigest
        The program package-cleanup is found in the yum-utils package.

    How can I solve this error?
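
    A sketch of the usual way out, assuming the leftover php-zts package (still at 5.3.13 from another repository) is what pins the old php-common: either move the whole PHP stack in one transaction, or remove the stragglers first and retry the install.

        # option 1: update everything php-related together
        yum --enablerepo=remi,remi-test update 'php*'
        # option 2: drop the conflicting leftovers, then re-run the original install
        yum remove php-zts php53-common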

    Read the article

  • How come my Apache can't read my media folder, but it can load the site? (static files don't work)

    - by Alex
        Alias /media/ /home/matt/repos/hello/media
        <Directory /home/matt/repos/hello/media>
            Options -Indexes
            Order deny,allow
            Allow from all
        </Directory>
        WSGIScriptAlias / /home/matt/repos/hello/wsgi/django.wsgi

    /media is my directory. When I go to mydomain.com/media/, it says 403 Forbidden, and the rest of my site doesn't work because all static files are 404s. Why? The page loads, just not the media folder. Edit: hello is my project folder. I have tried 777 on all the permissions of that folder.
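
    Two details worth checking here, sketched below. The 403 on mydomain.com/media/ may simply be Options -Indexes denying a directory listing (a media folder has no index file), while the 404s have a classic cause: Alias concatenates literally, so with /media/ (trailing slash) mapped to a target without one, /media/foo.css becomes .../mediafoo.css. Keeping the slashes consistent, plus confirming the Apache user can traverse every parent directory, covers the usual suspects:

        Alias /media/ /home/matt/repos/hello/media/    # trailing slash on both sides

        namei -m /home/matt/repos/hello/media   # each path component needs +x for the www user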

    Read the article

  • Can you have more than one ~/.ssh/config file?

    - by DrewVS
    We have a bastion server that we use to connect to multiple hosts, and our .ssh/config has grown to over a thousand lines (we have hundreds of hosts that we connect to). This is beginning to get a little unwieldy, and I'd like to know if there is a way to break the .ssh/config file up into multiple files. Ideally, we'd specify somewhere that other files should be treated as .ssh/config files, for example:

        ~/.ssh/config
        ~/.ssh/config_1
        ~/.ssh/config_2
        ~/.ssh/config_3
        ...

    I have read the documentation on ssh_config and I don't see that this is possible. But maybe someone else has had a similar issue and has found a solution.
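
    For what it's worth, later OpenSSH releases (7.3 and up) added exactly this: an Include directive that accepts globs, with relative paths resolved against ~/.ssh. A sketch, assuming a config.d layout:

        # at the top of ~/.ssh/config
        Include config.d/*

        # hosts then split across ~/.ssh/config.d/web, ~/.ssh/config.d/db, ...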

    Read the article

  • Open a file with Eclipse via the terminal and focus the Eclipse window

    - by Rui Carneiro
    I am a web developer and my current working tools are:
    - Terminal (ssh, tailing logs, grep, git, etc.)
    - Eclipse (PDT, JavaScript, etc.)
    - Firefox (Developer Toolbar + Firebug)
    The problem is that I hate using the Eclipse navigation tree. For me it is a lot easier to go to the terminal and do something like this:

        $ eclipse /var/www/myproject/long/path/lib/Driver/Sql.php

    The annoying part is that the Eclipse window is not focused after this command. I have to manually click on the Eclipse window (using the mouse... :@ grrr). Any way to force Eclipse to take focus?
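
    A sketch of one approach using wmctrl, assuming an X session where the Eclipse window title contains the string "Eclipse" and Eclipse is already running; the wrapper name ec is an illustrative choice:

        # in ~/.bashrc
        ec() {
            eclipse "$@" &
            wmctrl -a "Eclipse"   # raise and focus the first window whose title matches
        }

    Usage: ec /var/www/myproject/long/path/lib/Driver/Sql.php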

    Read the article

  • Problems using Mesa demos

    - by Rodnower
    Hello, I successfully installed Mesa with yum install Mesa* and downloaded the MesaDemos-7.8.tar.gz archive. Now I am trying to follow the instructions from "Mesa3d.org - Download / Install - Compiling and Installing - 1.5 Running the demos", but progs/demos contains only *.c files, and when I try to compile them I get many similar errors, like:

        gears.c:(.text+0x54): undefined reference to `glShadeModel'

    I guess that this is a very noob question, and I understand that there is a very simple solution, but I don't have any idea... At the beginning of the file there are all the necessary #includes:

        #include <math.h>
        #include <stdlib.h>
        #include <stdio.h>
        #include <string.h>
        #include <GL/glut.h>

    So I have some questions: Is there a Mesa forum on the web? Are there precompiled demos? Is there a site with well-described examples of using Mesa? What do I need to compile these examples? I have CentOS 5. Thanks in advance.
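
    That error is from the linker, not the preprocessor: the headers are present, but the GL and GLUT libraries must be named at link time. A sketch, assuming the CentOS 5 devel packages (mesa-libGL-devel, mesa-libGLU-devel, freeglut-devel) are installed:

        gcc gears.c -o gears -lGL -lGLU -lglut -lm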

    Read the article

  • Installing sqlite gem fails on AWS Linux instance with sqlite-devel libraries installed

    - by Scott
    Hi, I'm running an instance built off ami-595a0a1c. I am trying to install the sqlite3 (or sqlite) gem and it's failing with the error below:

        $ sudo gem install sqlite3
        Building native extensions.  This could take a while...
        ERROR:  Error installing sqlite3:
                ERROR: Failed to build gem native extension.
        /usr/bin/ruby extconf.rb
        checking for sqlite3.h... no
        sqlite3.h is missing. Try 'port install sqlite3 +universal'
        or 'yum install sqlite3-devel' and check your shared library search path (the
        location where your sqlite3 shared library is located).
        extconf.rb failed
        Could not create Makefile due to some reason, probably lack of
        necessary libraries and/or headers.  Check the mkmf.log file for more
        details.  You may need configuration options.

        Provided configuration options:
            --with-opt-dir
            --without-opt-dir
            --with-opt-include
            --without-opt-include=${opt-dir}/include
            --with-opt-lib
            --without-opt-lib=${opt-dir}/lib
            --with-make-prog
            --without-make-prog
            --srcdir=.
            --curdir
            --ruby=/usr/bin/ruby
            --with-sqlite3-dir
            --without-sqlite3-dir
            --with-sqlite3-include
            --without-sqlite3-include=${sqlite3-dir}/include
            --with-sqlite3-lib
            --without-sqlite3-lib=${sqlite3-dir}/lib

        Gem files will remain installed in /usr/lib64/ruby/gems/1.8/gems/sqlite3-1.3.3 for inspection.
        Results logged to /usr/lib64/ruby/gems/1.8/gems/sqlite3-1.3.3/ext/sqlite3/gem_make.out

    Typically this just means you need to install the development libraries and everything is cool. However, I have installed the sqlite-devel packages and still no dice. Since this is the Amazon Linux instance, I'd rather not add more repositories than the ones Amazon provides, if possible. What can I do to get this thing to compile? Thanks for any insight! From a brand-new instance, here's what I've done:

        $ sudo yum install rubygems ruby-devel
        $ sudo gem update --system
        $ sudo gem install rails
        $ rails new app
        $ cd app
        $ rails server
        Could not find gem 'sqlite3 (= 0)' in any of the gem sources listed in your Gemfile.
        $ sudo yum install sqlite-devel
        $ sudo gem install sqlite (or sqlite3 -- same result)

    See breakage above.
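
    A sketch of two things to try, assuming the header exists but sits outside the default search path: confirm where sqlite-devel put sqlite3.h, then pass the include/lib locations through to the gem build (the flags after the -- separator are the same ones listed in the error output):

        rpm -ql sqlite-devel | grep sqlite3.h   # confirm where the header landed
        sudo gem install sqlite3 -- --with-sqlite3-include=/usr/include \
                                    --with-sqlite3-lib=/usr/lib64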

    Read the article

  • Process not listed by ps or in /proc/

    - by Hammer Bro.
    I'm trying to figure out how to operate a rather large Java program, 'prog'. I go to its /bin/ dir, configure its setenv.sh and prog.sh to use local directories and my current user account, and then try to run it via "./prog.sh start". Here are all the relevant bits of prog.sh:

        USER=(my current account)
        _CMD="/opt/jdk/bin/java -server -Xmx768m -classpath "${CLASSPATH}" -jar "${DIR}/prog.jar""

        case "${ACTION}" in
        start)
            nohup su ${USER} -c "exec ${_CMD} >>${_LOGFILE} 2>&1" >/dev/null &
            echo $! >${_PID}
            echo "Prog running. PID="`cat ${_PID}`
            ;;
        stop)
            PID=`cat ${_PID} 2>/dev/null`
            echo "Shutting down prog: ${PID}"
            kill -QUIT ${PID} 2>/dev/null
            kill ${PID} 2>/dev/null
            kill -KILL ${PID} 2>/dev/null
            rm -f ${_PID}
            echo "STOPPED `date`" >>${_LOGFILE}
            ;;

    When I actually do ./prog.sh start, it starts. But I can't find it at all in the process list. Nor can I kill it manually, using the same commands the shell script uses. But I can tell it's running, because if I do ./prog.sh stop, it stops (and some temporary files elsewhere clean themselves out).

        ./prog.sh start
        Prog running. PID=1234
        ps eaux | grep 1234
        ps eaux | grep -i prog.jar
        ps eaux >> pslist.txt

    (It's not there either, by PID or by any clear name I can find: prog, java or jar.)

        cd /proc/1234/
        -bash: cd: /proc/1234/: No such file or directory
        kill -QUIT 1234
        kill 1234
        kill -KILL 1234
        -bash: kill: (1234) - No such process
        ./prog.sh stop
        Shutting down prog: 1234

    As far as I can tell, the process is running yet not in any way listed by the system. I can't find it in ps or /proc/, nor can I kill it. But the shell script can still stop it properly. So my question is: how can something like this happen? Is the process supremely hidden, actually unlisted, or am I just missing it in some fashion? I'm trying to figure out what makes this program tick, and I can barely prove that it's ticking!

    Edit: ps eu | grep prog.sh (after having restarted, so a random PID):

        50038 19381 0.0 0.0 4412 632 pts/3 S+ 16:09 0:00 grep prog.sh HOSTNAME=machine.server.com TERM=vt100 SHELL=/bin/bash HISTSIZE=1000 SSH_CLIENT=::[STUFF] 1754 22 CVSROOT=:[DIR] SSH_TTY=/dev/pts/3 ANT_HOME=/opt/apache-ant-1.7.1 USER=[USER] LS_COLORS=[COLORS] SSH_AUTH_SOCK=[DIR] KDEDIR=/usr MAIL=[DIR] PATH=[DIRS] INPUTRC=/etc/inputrc PWD=[PWD] JAVA_HOME=/opt/jdk1.6.0_21 LANG=en_US.UTF-8 SSH_ASKPASS=/usr/libexec/openssh/gnome-ssh-askpass M2_HOME=/opt/apache-maven-2.2.1 SHLVL=1 HOME=[~] LOGNAME=[USER] SSH_CONNECTION=::[STUFF] LESSOPEN=|/usr/bin/lesspipe.sh %s G_BROKEN_FILENAMES=1 _=/bin/grep OLDPWD=[DIR]

    I just realized that the stop) part of prog.sh isn't actually a guarantee that the process it claims to be stopping is running -- it just tries to kill the PID, suppresses all output, deletes the temporary file and manually inserts STOPPED into the log file. So I'm no longer so certain that the process is always running when I ps for it, although the code sample above indicates that it at least runs erratically. I'll continue looking into this undocumented behemoth when I return to work tomorrow.
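
    One detail that may explain part of this, offered as a guess: the recorded PID comes from $! of the backgrounded nohup su ... command, i.e. the PID of the su wrapper rather than of the java process it spawns, so /proc/1234 and ps lookups by that number can miss the real process. Searching by command line instead of PID should settle it:

        pgrep -f prog.jar            # match against the full command line
        ps auxww | grep '[p]rog'     # the [p] keeps the grep itself out of the output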

    Read the article

  • Trouble setting up PATH for Java on Debian

    - by milkmansrevenge
    I am trying to get Oracle Java 7 update 3 working correctly on Debian 6. I have downloaded and set up the files in /usr/java/jre1.7.0_03. I have also added the following two lines at the end of /etc/bash.bashrc:

        export JAVA_HOME=/usr/java/jre1.7.0_03
        export PATH=$PATH:$JAVA_HOME/bin

    Logging in as other users and as root is fine; Java can be found:

        chris@mc:~$ java -version
        java version "1.7.0_03"
        Java(TM) SE Runtime Environment (build 1.7.0_03-b04)
        Java HotSpot(TM) 64-Bit Server VM (build 22.1-b02, mixed mode)

    However, there are two cases where Java cannot be found, as detailed below. Note that both of these worked fine when I previously installed OpenJDK Java 6 via aptitude, but I need Oracle Java 7 for various reasons. Most importantly, I cannot run commands as another user via su, despite the PATH showing that Java should be present. The user was created with adduser chris.

        root@mc:~# su chris -c "echo $PATH"
        /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/java/jre1.7.0_03/bin:/bin
        root@mc:~# su chris -c "java -version"
        bash: java: command not found
        root@mc:~# su chris -c "/usr/java/jre1.7.0_03/bin/java -version"
        java version "1.7.0_03"
        ...

    How can it be in the PATH but not be found? Update 05/04/2012: explained by Daniel; it is a non-interactive shell, so files such as /etc/profile and /etc/bash.bashrc are not executed. Doing a full switch to that user and running Java works:

        root@mc:~# su chris
        chris@mc:/root$ java -version
        java version "1.7.0_03"
        ...

    I run a script on startup which exhibits similar but slightly different problems. The script is located at /etc/init.d/start-mystuff.sh and calls a jar:

        #!/bin/bash
        # /etc/init.d/start-mystuff.sh
        java -jar /opt/Mars.jar

    I can confirm that the script runs on startup, and the exit code is 127, which indicates command not found. Inserting a line to print/save the PATH shows that it is:

        /sbin:/usr/sbin:/bin:/usr/bin

    This second problem isn't as important because I can just point directly to the Java executable in the script, but I am still curious! I have tried setting the full PATH and JAVA_HOME explicitly in /etc/environment, which didn't help. I have also tried setting them in /etc/profile, which doesn't seem to help either. I have tried logging in and out again after setting PATH in the various locations (duh!). Anyway, a long post for what will probably have a simple one-line solution :( Any help with this would be greatly appreciated; I have spent far too long trying to fix it by myself.

    Motivation: the first problem may seem obscure, but in my system I have users that are not allowed SSH access, yet I still want to run processes as them. I have a ton of scripts operating in this way and don't want to have to change them all.
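
    Since init scripts (and su user -c commands) run in non-interactive shells that never read /etc/bash.bashrc, a common pattern is to make the script self-sufficient rather than rely on an inherited environment; a sketch:

        #!/bin/bash
        # /etc/init.d/start-mystuff.sh
        JAVA_HOME=/usr/java/jre1.7.0_03
        PATH=$JAVA_HOME/bin:$PATH
        export JAVA_HOME PATH

        java -jar /opt/Mars.jar

    For the su case, su - chris -c "java -version" (note the dash) starts a login shell, which does read the profile files.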

    Read the article

  • Is the field BusID necessary in XF86Config?

    - by Greg
    Hello, I am using a cluster of machines running Ubuntu 10.04 LTS which are supposed to be homogeneous, but apparently they are not. In particular, I am configuring the X server on these machines, and I pushed an /etc/X11/XF86Config that includes the following section:

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
            BusID          "PCI:5:0:0"
        EndSection

    The problem is that the BusID of the graphics card is PCI:5:0:0 on some machines and PCI:3:0:0 on others. Is there a way to have the X server automatically detect the appropriate device (based on the name, for instance)? Thanks,
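
    For a single-card machine the BusID line can usually just be omitted: the server probes the bus and binds the driver to whatever it finds, so one config file serves both the PCI:5:0:0 and PCI:3:0:0 machines. BusID is normally only required when several GPUs must be told apart. A sketch of the trimmed section:

        Section "Device"
            Identifier "Device0"
            Driver     "nvidia"
        EndSection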

    Read the article

  • Apache logging issues

    - by Dan
    I'm trying to parse Apache log files, but I'm finding some strange results and I'm not sure what they mean. Hopefully someone can provide some insight. (All of the IP addresses were altered; none actually starts with 192. I didn't figure the search engines mattered, though.) In the first example, multiple IP addresses are showing up in the host field:

        192.249.71.25 - - [04/Aug/2009:04:21:44 -0500] "GET /publications/example.pdf HTTP/1.1" 200 2738
        192.0.100.93, 192.20.31.86 - - [04/Aug/2009:04:21:22 -0500] "GET /docs/another.pdf HTTP/1.0" 206 371469

    What causes this? Does it have to do with proxy servers? Is there a way to have Apache log only one? In the second example, a bunch of information is just completely missing! What would cause this?

        msnbot-65-55-207-50.search.msn.com - - [29/Dec/2009:15:45:16 -0600] "GET /publications/example.pdf HTTP/1.1" 200 3470073 "-" "msnbot/2.0b (+http://search.msn.com/msnbot.htm)" 266 3476792
        - - - - "-" - - "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; InfoPath.1)" 285 594
        - - - - "-" - - "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; InfoPath.1)" 285 4195
        - - - - "-" - - "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; InfoPath.1)" 299 109218
        crawl-17c.cuil.com - - [29/Dec/2009:15:45:46 -0600] "GET /publications/another.pdf HTTP/1.0" 200 101481 "-" "Mozilla/5.0 (Twiceler-0.9 http://www.cuil.com/twiceler/robot.html)" 253 101704

    My CustomLog configuration says:

        LogFormat "%h %l %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-agent}i\" %I %O" common
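
    On the first question: a comma-separated pair in the host field is the classic shape of an X-Forwarded-For value (client, proxy), which points at requests arriving through a proxy with some layer substituting that header where %h would normally go. If the goal is one canonical address plus the proxy chain kept for reference, a sketch of a format that logs them as separate fields (assuming this matches how the logs are being fed in):

        LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\" \"%{X-Forwarded-For}i\" %I %O" combined_xff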

    Read the article
