Search Results

Search found 24334 results on 974 pages for 'directory loop'.


  • Is there a way to map a local drive letter in a Virtual PC Guest O/S to a host drive?

    - by Clay Nichols
    I have a bunch of programming projects on my P:\ drive (on Windows 7). I'm now doing some programming within Virtual PC Windows XP Mode, and I'd like to "call" that drive P: within the Win XP guest as well. I've mapped drive letter P: to a "network" drive on the host, but that goes across the network, so it's very slow. I tried using the SUBST command, but it wouldn't take \tsclients\p as a parameter. Basically, the command-line interpreter (is that DOS on Win 7?) doesn't recognize that directory (\tsclients\p).
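
    The closest alternative I've come up with so far (untested, and assuming the host drives really are exposed to the guest under a \\tsclient share, which is how XP Mode appears to publish them) is mapping a letter with NET USE instead of SUBST:

        net use P: \\tsclient\P /persistent:yes

    If that share name is wrong on my setup, the rest of the idea still stands: NET USE accepts a UNC path where SUBST will not.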

    Read the article

  • monitor internet bandwidth

    - by enriquev
    Hello, I'm looking for a Windows tool that can tell me who is using bandwidth. As of now I've set things up so that the switch all the PCs are connected to mirrors the router's traffic to my PC, meaning that from my NIC I am able to see all outgoing and incoming internet connections. This works: I have used NIMAS (http://www.vmware.com/appliances/directory/200) and I am able to see internet traffic. Now what I am looking for is something even simpler, where I can see which computers are using what bandwidth, live.

    Read the article

  • adding dynamic subdomains to my webserver?

    - by Solomon Saleh
    I'm trying to add a wildcard subdomain system to my web server, but it's still not working. These are the steps I took:
    1. I made a new file, vhost.conf, at var/www/vhosts/www.example.com/conf/vhost.conf, and put in it:
           ServerAlias *.domain.com
    2. I made a new DNS wildcard in Plesk:
           CNAME *domain.com example.com
    3. I edited my .htaccess file:
           Options +FollowSymLinks
           RewriteEngine on
           RewriteCond %{HTTP_HOST} ^(^.*)\.example.com
           RewriteRule (.*) user.php?user=%1
    Normally my URL would be http://www.example.com/user.php?user=solomon, but now I want it to work like this: http://solomon.example.com. The steps I took still don't work, though (a cleaned-up rewrite sketch follows below). What's happening here?
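
    For reference, here is the .htaccess I think it should end up as. This is only a sketch on my part: the main changes are dropping the stray ^ inside the capture group, excluding www so the main site keeps working, and skipping user.php itself so the rule can't loop:

        Options +FollowSymLinks
        RewriteEngine on
        RewriteCond %{REQUEST_URI} !^/user\.php
        RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
        RewriteCond %{HTTP_HOST} ^(.+)\.example\.com$ [NC]
        RewriteRule ^(.*)$ user.php?user=%1 [L]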

    Read the article

  • mounting ext4 fs with block size of 65536

    - by seaquest
    I am doing some benchmarking of EXT4 performance on Compact Flash media. I have created an ext4 filesystem with a block size of 65536; however, I cannot mount it on ubuntu-10.10-netbook-i386 (the same system is already mounting ext4 filesystems with 4096-byte blocks). According to my reading on ext4, it should allow such a big block size. I want to hear your comments.

        root@ubuntu:~# mkfs.ext4 -b 65536 /dev/sda3
        Warning: blocksize 65536 not usable on most systems.
        mke2fs 1.41.12 (17-May-2010)
        mkfs.ext4: 65536-byte blocks too big for system (max 4096)
        Proceed anyway? (y,n) y
        Warning: 65536-byte blocks too big for system (max 4096), forced to continue
        Filesystem label=
        OS type: Linux
        Block size=65536 (log=6)
        Fragment size=65536 (log=6)
        Stride=0 blocks, Stripe width=0 blocks
        19968 inodes, 19830 blocks
        991 blocks (5.00%) reserved for the super user
        First data block=0
        1 block group
        65528 blocks per group, 65528 fragments per group
        19968 inodes per group
        Writing inode tables: done
        Creating journal (1024 blocks): done
        Writing superblocks and filesystem accounting information: done

        This filesystem will be automatically checked every 37 mounts or
        180 days, whichever comes first.  Use tune2fs -c or -i to override.

        root@ubuntu:~# tune2fs -l /dev/sda3
        tune2fs 1.41.12 (17-May-2010)
        Filesystem volume name:   <none>
        Last mounted on:          <not available>
        Filesystem UUID:          4cf3f507-e7b4-463c-be11-5b408097099b
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              19968
        Block count:              19830
        Reserved block count:     991
        Free blocks:              18720
        Free inodes:              19957
        First block:              0
        Block size:               65536
        Fragment size:            65536
        Blocks per group:         65528
        Fragments per group:      65528
        Inodes per group:         19968
        Inode blocks per group:   78
        Flex block group size:    16
        Filesystem created:       Sat Feb 5 14:39:55 2011
        Last mount time:          n/a
        Last write time:          Sat Feb 5 14:40:02 2011
        Mount count:              0
        Maximum mount count:      37
        Last checked:             Sat Feb 5 14:39:55 2011
        Check interval:           15552000 (6 months)
        Next check after:         Thu Aug 4 14:39:55 2011
        Lifetime writes:          70 MB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      afb5b570-9d47-4786-bad2-4aacb3b73516
        Journal backup:           inode blocks

        root@ubuntu:~# mount -t ext4 /dev/sda3 /mnt/
        mount: wrong fs type, bad option, bad superblock on /dev/sda3,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail  or so
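
    One extra data point, in case it is relevant: from what I've read, the block size you can actually mount is capped by the kernel page size (which would explain the "max 4096" message), and on i386/amd64 the page size is 4 KiB. Checking it is a one-liner:

        getconf PAGESIZE

    I'd still like confirmation that this is a hard limit rather than something tunable.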

    Read the article

  • Our VPS is being used as a Warez mule

    - by Mikuso
    The company I work for runs a series of ecommerce stores on a VPS. It's a WAMP stack with 50 GB of storage. We use an archaic piece of ecommerce software which operates almost entirely client-side. When an order is taken, it is written to disk, and a scheduled task downloads the orders once every 10 minutes.

    A few days ago we ran out of disk space, which caused orders to fail to be written. I quickly hopped on to delete some old logs from the mail server and freed up a couple of GB pretty quickly, but I wondered how we could fill up 50 GB with nothing much more than logs. Turns out we didn't. Hidden deep within the c:\System Volume Information directory, we have a stack of pirated videos, which seem to have appeared (looking at the timestamps) over the past three weeks. Porn, American sports, Australian cooking shows. A very odd collection. It doesn't look like an individual's personal tastes; it looks more like the VPS is being used as a mule.

    We have a five-attempts-and-you're-blocked policy on our FTP server (plus, there is no FTP account with access to that directory), and the Windows user account has had its password changed recently. The main avenues are sealed, and the logs can verify that. I thought I'd watch and see if it happened again, and yes, another cooking show appeared this morning.

    I am the only one at my company who knows of this problem, and only one of two people with access to the VPS (the other being my boss, and no, it's not him). So how is this happening? Is there a vulnerability in some of the software on the VPS? Are the VPS owners peddling warez across our rented space? (Can they even do this?) I don't want to delete the warez in case it is seen as a hostile action against this outside force and they choose to retaliate. What should I do? How do I troubleshoot this? Has this happened to anyone else before?

    Read the article

  • can't use periods in ServerName [Lion Apache installation]

    - by punchfacechamp
    I can access my host like this: http://keggyshop. But I can't use periods: http://keggyshop.dev does not work. Here's my virtual host directive:

        <VirtualHost *:80>
            ServerName keggyshop
            ServerAlias keggyshop.dev
            DocumentRoot "~/sites/2012/keggy/web/pages/keggy/120528/sandbox/public"
            <Directory "~/sites/2012/keggy/web/pages/keggy/120528/sandbox/public">
                Options Includes FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    hosts file:

        127.0.0.1 keggyshop
        127.0.0.1 keggyshop.dev

    traceroute for keggyshop:

        user$ traceroute keggyshop
        traceroute to keggyshop (192.168.1.184), 64 hops max, 52 byte packets
         1  keggyshop (192.168.1.184)  1.188 ms  0.683 ms  0.747 ms

    traceroute for keggyshop.dev:

        user$ traceroute keggyshop.dev
        traceroute: Warning: keggyshop.dev has multiple addresses; using 184.106.15.239
        traceroute to keggyshop.dev (184.106.15.239), 64 hops max, 52 byte packets
         1  * 192.168.1.1 (192.168.1.1)  0.856 ms  0.568 ms
         2  10.81.192.1 (10.81.192.1)  15.232 ms  7.002 ms  7.936 ms
         3  gig-0-3-0-6-nycmnya-rtr2.nyc.rr.com (24.29.97.122)  7.962 ms  7.813 ms  7.712 ms
         4  bun101.nycmnytg-rtr001.nyc.rr.com (184.152.112.107)  10.999 ms  14.001 ms  15.466 ms
         5  bun6-nycmnytg-rtr002.nyc.rr.com (24.29.148.250)  11.231 ms  17.321 ms  12.745 ms
         6  107.14.19.24 (107.14.19.24)  13.972 ms  11.704 ms  16.477 ms
         7  ae-1-0.pr0.nyc30.tbone.rr.com (66.109.6.161)  9.237 ms  11.896 ms
            107.14.19.153 (107.14.19.153)  7.481 ms
         8  xe-5-0-6.ar2.ewr1.us.nlayer.net (69.31.94.57)  16.682 ms  11.791 ms  11.981 ms
         9  ae3-90g.cr1.ewr1.us.nlayer.net (69.31.94.117)  12.977 ms  15.706 ms  9.709 ms
        10  xe-5-0-0.cr1.ord1.us.nlayer.net (69.22.142.74)  30.473 ms  30.497 ms  31.750 ms
        11  ae1-20g.ar1.ord6.us.nlayer.net (69.31.110.250)  36.699 ms  50.785 ms  35.957 ms
        12  as19994.xe-1-0-7.ar1.ord6.us.nlayer.net (69.31.110.242)  34.723 ms  31.118 ms  29.967 ms
        13  coreb.ord1.rackspace.net (184.106.126.138)  30.471 ms
            corea.ord1.rackspace.net (184.106.126.136)  33.392 ms  35.210 ms
        14  core1-coreb.ord1.rackspace.net (184.106.126.129)  32.453 ms
            core1-corea.ord1.rackspace.net (184.106.126.125)  32.020 ms
            core1-coreb.ord1.rackspace.net (184.106.126.129)  32.417 ms
        15  core1-aggr401a-3.ord1.rackspace.net (173.203.0.157)  31.274 ms  34.854 ms  30.194 ms
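
    What strikes me about the second trace is that keggyshop.dev is resolving to 184.106.15.239 out on the internet rather than to the 127.0.0.1 entry in my hosts file, so this may be a name-resolution problem rather than an Apache one. A check I still need to run (mentioning it in case someone can confirm it is the right diagnostic on Lion) is to ask the Directory Service cache directly and flush it if the answer looks stale:

        dscacheutil -q host -a name keggyshop.dev
        dscacheutil -flushcache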

    Read the article

  • Where does $PATH get set in OS X 10.6 Snow Leopard?

    - by misbehavens
    I type echo $PATH on the command line and get:

        /opt/local/bin:/opt/local/sbin:/Users/andrew/bin:/usr/local/bin:/usr/local/mysql/bin:/usr/local/pear/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/opt/local/bin:/usr/local/git/bin

    I'm wondering where this is getting set, since my .bash_login file is empty. I'm particularly concerned that, after installing MacPorts, it installed a bunch of junk in /opt. I don't think that directory even exists in a normal Mac OS X install. Update: Thanks to jtimberman for correcting my echo $PATH statement.
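
    One mechanism I've since been pointed at (I still need to verify it accounts for everything above): on Snow Leopard the base PATH is assembled by path_helper from /etc/paths and the files in /etc/paths.d, before any per-user dotfiles run. These commands show where that base list comes from:

        cat /etc/paths                       # system-wide base PATH entries
        ls /etc/paths.d                      # installers (X11, git, etc.) drop extra entries here
        grep -n path_helper /etc/profile     # /etc/profile invokes /usr/libexec/path_helper

    The /opt/local entries themselves would be MacPorts additions to my shell startup files rather than anything system-wide, if I understand correctly.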

    Read the article

  • Intermittently, IIS7 requests get stuck in WindowsAuthenticationModule

    - by Richard Beier
    We're running an IIS7 server hosting several dozen websites. Several of these websites are part of the same legacy app we've developed; these sites all run the same code and run in the same app pool.

    Roughly once a month over the past few months, we've found that all requests for this app pool start hanging indefinitely. When this happens, we receive an alert and we recycle the app pool; after that, the sites start working again. This only ever affects this one app pool, never any others on the same server. A couple of times, before recycling the pool, I've looked at the currently-executing requests in the worker process. They all show up as executing inside the WindowsAuthenticationModule. Which is strange, because the vast majority of the application does not require authentication. There is a small admin section which uses Windows auth, but all the other requests should be anonymous. Does anyone have any idea as to what might be causing this?

    There are several unusual things about the way these sites are set up. As I mentioned, they all run the same code: multiple sites point at the same physical directory, and the only difference is the host header bindings. I'm not sure why there isn't just one site with all the host headers, but that's how it works. In several of these sites, the same physical directory is mapped at two levels: as the root of the site, and again as an application within the site. So if a user goes to http://oursite.com/index.aspx, that maps to c:\files\oursite\index.aspx, and if a user goes to http://oursite.com/foo/index.aspx, that also maps to c:\files\oursite\index.aspx. I think there is code which looks at the request URL and handles the two requests differently. This is strange because the same web.config ends up being interpreted both as a site config file and as an application config file within the site. I don't know if this might be related to the authentication problem.

    If we can't find the cause, we're thinking of a few workarounds we could try:
    1. Move the admin section into a separate site, give the client a new admin URL, and run that separate site in its own app pool. Then, in the web.config shared by all the other sites, remove the WindowsAuthenticationModule (a sketch of that change is below), so there should be no possibility of a hang within the WindowsAuthenticationModule.
    2. Try running all these sites in the classic pipeline instead of the integrated pipeline. They were working fine on our old IIS6 server...
    3. (If we get desperate) Set up a watchdog script which monitors the sites and auto-recycles the app pool when it detects that requests are getting stuck.

    What do you think? Thanks for your help, Richard
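
    For workaround 1, the change to the shared web.config that I have in mind is roughly the following. This is a sketch only; I believe the module's registered name is WindowsAuthenticationModule, and I'd also need to confirm the entry isn't locked in applicationHost.config, since locked modules can't be removed at the site level:

        <configuration>
          <system.webServer>
            <modules>
              <!-- stop requests for these sites from ever entering Windows auth -->
              <remove name="WindowsAuthenticationModule" />
            </modules>
          </system.webServer>
        </configuration>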

    Read the article

  • How to record a "macro" that saves web pages as PDF in OSX

    - by dwatson
    I frequently save pages as PDF from Chrome on OS X. The process is Cmd-P (apple-P), then clicking the PDF button and choosing the "Save as PDF..." menu item. I always use the pre-filled filename and save in the default directory. Is it possible to save this as an Automator script? If so, I would sure like to add it as a button somewhere in Chrome so I can just "save this for reading later". Thanks for any help.

    Read the article

  • Pinning based on origin of a reprepro repository.

    - by Shtééf
    I'm on Ubuntu 10.04, and trying to set up a repository using reprepro. I'd also like everything in that repository to be pinned so it is preferred over anything else, even if its packages are older versions. (It will only contain a select set of packages.) However, I cannot seem to get the pinning to work, and I believe it has something to do with the repository side of things rather than the apt configuration on the client.

    I've taken the following steps to set up my repository:
    1. Installed a web server (my personal choice here is Cherokee),
    2. Created the directory /var/www/apt/,
    3. Created the file conf/distributions, like so:

           Origin: Shteef
           Label: Shteef
           Suite: lucid
           Version: 10.04
           Codename: lucid
           Architectures: i386 amd64 source
           Components: main
           Description: My personal repository

    4. Ran reprepro export from the /var/www/apt/ directory.

    Now on any other machine, I can add this (empty) repository over HTTP to my /etc/apt/sources.list and run apt-get update without any errors:

        Ign http://archive.lan lucid Release.gpg
        Ign http://archive.lan/apt/ lucid/main Translation-en_US
        Get:1 http://archive.lan lucid Release [2,244B]
        Ign http://archive.lan lucid/main Packages
        Ign http://archive.lan lucid/main Sources
        Ign http://archive.lan lucid/main Packages
        Ign http://archive.lan lucid/main Sources
        Hit http://archive.lan lucid/main Packages
        Hit http://archive.lan lucid/main Sources

    In my case, I now want to use an old version of Asterisk, namely Asterisk 1.4. I rebuilt the asterisk-1:1.4.21.2~dfsg-3ubuntu2.1 package from Ubuntu 9.04 (with some small changes to fix dependencies) and uploaded it to my repository. At this point I can see the new package in aptitude, but it naturally prefers the newer Asterisk 1.6 currently in the Ubuntu 10.04 repositories. To try and fix that, I have created /etc/apt/preferences.d/personal like so:

        Package: *
        Pin: release o=Shteef
        Pin-Priority: 1000

    But when I try to install the asterisk package, it will still prefer the 1.6 version over my own 1.4 version. This is what apt-cache policy asterisk shows:

        asterisk:
          Installed: (none)
          Candidate: 1:1.6.2.5-0ubuntu1
          Version table:
             1:1.6.2.5-0ubuntu1 0
                500 http://nl.archive.ubuntu.com/ubuntu/ lucid/universe Packages
             1:1.4.21.2~dfsg-3ubuntu2.1shteef1 0
                500 http://archive.lan/apt/ lucid/main Packages

    Clearly, it is not picking up my pin. In fact, when I run just apt-cache policy, I get the following:

        Package files:
         100 /var/lib/dpkg/status
             release a=now
         500 http://archive.lan/apt/ lucid/main Packages
             origin archive.lan
         500 http://security.ubuntu.com/ubuntu/ lucid-security/multiverse Packages
             release v=10.04,o=Ubuntu,a=lucid-security,n=lucid,l=Ubuntu,c=multiverse
             origin security.ubuntu.com
        [...]

    Unlike Ubuntu's repository, apt doesn't seem to pick up a release line at all for my own repository. I suspect this is why I can't pin on release o=Shteef in my preferences file, but I can't find any noticeable difference between my repository's Release file and Ubuntu's that would cause this. Is there a step I've missed or a mistake I've made in setting up my repository?
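
    One variation I haven't tried yet, in case someone can confirm whether it is a sensible workaround: since apt-cache policy does at least show "origin archive.lan" for my repository, pinning on the hostname instead of the Release origin field should not depend on the missing release line:

        Package: *
        Pin: origin "archive.lan"
        Pin-Priority: 1000

    That said, I'd still like to understand why the release fields aren't being picked up in the first place.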

    Read the article

  • Batch convert divx to iPhone format

    - by Kelsey
    I am looking for free software to do batch conversions of DivX video files to iPhone format. I have read the thread: http://superuser.com/questions/5784/looking-to-convert-video-to-iphone-format Handbrake works well for single files, but it has very little customization with regard to file names, and the batch functionality is not very good (or at least I can't get it to work very easily). Can anyone recommend a good batch converter? Even a script for Handbrake that converts everything in a specific directory would be useful.
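
    For the script route, something along these lines is what I had in mind. This is only a sketch: it assumes the command-line HandBrakeCLI binary is installed, a POSIX shell is available (Cygwin would do on Windows), and that the old "iPhone & iPod Touch" preset name is still valid (HandBrakeCLI --preset-list shows the current names):

        # convert every .avi in a directory to an iPhone-friendly .m4v
        for f in /path/to/divx/*.avi; do
            HandBrakeCLI -i "$f" -o "${f%.avi}.m4v" --preset "iPhone & iPod Touch"
        done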

    Read the article

  • Policyd quotas setup

    - by sasap
    I have installed and configured policyd 2.0.8 according to the INSTALL document in the installation directory. My web UI works fine, and no problems are reported in the logs. I then set up a quota on the Default policy:

        Quota:
        - Policy: Default
        - Name: Sending quota
        - Track: SenderIP:/24
        - Period: 3600
        - Verdict: REJECT
        - Data:
        - Disabled: no

        Limits:
        - Type: Message count
        - Counter limit: 2
        - Disabled: no

    But the problem is that I'm still able to send as many messages as I want. Perhaps I am missing something?

    Read the article

  • Windows: Making Windows Explorer distinctive (changing background color or file/folder icons) for specific folder

    - by MacGyver
    Is there a way to change the background color in Windows 7 and Windows Server 2008 when the folder currently being displayed meets a certain condition? Or is there a way I can change the icons of the files and folders within that folder so it's distinctive, similar to how TortoiseSVN does it for code checked out from a repository? Why? I'd like to do this for a deployment directory on a live server so users don't accidentally commit code to a certain environment. Like myself. :-)
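
    The closest thing I've found so far is the desktop.ini trick, which as far as I know only changes the folder's own icon (not the files inside it), so I'd still welcome better ideas. A sketch, with a made-up path and icon file:

        rem mark the folder as a system folder so Explorer honours its desktop.ini
        attrib +s "D:\Deploy\Production"

        rem then create D:\Deploy\Production\desktop.ini containing:
        rem   [.ShellClassInfo]
        rem   IconResource=C:\Icons\deploy-warning.ico,0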

    Read the article

  • the right options to traverse/download the pages/directories of a subdomain

    - by Lorraine Bernard
    Let's suppose a site exists with the following directories (subdomain):

        index.php
        |-sub1
        |  |-index.php
        |  |-sub1sub1
        |  |  |-index.php
        |  |  |-other.php
        |  |  |-sub1sub1sub1
        |-sub2
        |  |-index.php
        |  |- ...
        |-sub3
        |  |- ...

    My questions are: 1) how can I properly display the site of the sub1 subdomain (http://domain/sub1) locally, and 2) how can I get just the files and directories which are children of sub1 (sub1sub1 and sub1sub1sub1, for example)? I tried the following options (for wget), but it also retrieves the files and directories which are in sub2, sub3, etc.:

        wget -E -H -k -K -r http://domain/sub1/index.php
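
    My current best guess for question 2 (untested against a real site) is to drop -H, which allows wget to wander onto other hosts, and add --no-parent so the recursion never climbs above /sub1/:

        wget -r -np -k -E -p http://domain/sub1/

    Here -np (--no-parent) keeps everything below /sub1/, -k rewrites links for local viewing, -E adds .html extensions where needed, and -p also fetches page requisites such as CSS and images.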

    Read the article

  • Running Apache for All Local Webpages

    - by waiwai933
    So I've enabled PHP on my Mac OS X 10.5 (Leopard) machine, and it's working great, so long as: 1) I place the file in the ~/Sites directory, and 2) I use the http://localhost/~user/example.php URL instead of file:///Users/user/Sites/example.php. I presume this is because unless both of those conditions are true, Apache is not involved, and thus neither is PHP. So is there any way to remove either of those conditions? (Well, really the latter, because the first is a symptom of the second.)
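
    To clarify what I imagine the answer looks like: something like an extra Alias in httpd.conf pointing at whatever directory I keep pages in, so Apache (and therefore PHP) still handles them. This is just a sketch with a made-up path, using the Apache 2.2 access syntax that Leopard ships with:

        Alias /projects "/Users/user/projects"
        <Directory "/Users/user/projects">
            Options Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    If there is a cleaner way, or a way to make file:// URLs go through PHP (which I doubt exists), I'm all ears.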

    Read the article

  • Trouble using gitweb with nginx

    - by Rayne
    I have a git repository in a directory inside /home/raynes/pubgit/. I'm trying to use gitweb to provide a web interface to it. I use nginx as my web server for everything else, so I don't really want to have to run another one just for this. I'm mostly following this guide: http://michalbugno.pl/en/blog/gitweb-nginx, which is the only guide I can find via Google and is really recent. fcgiwrap apparently isn't in Lucid Lynx's repositories, so I installed it manually. I spawn instances via spawn-fcgi:

        spawn-fcgi -f /usr/local/sbin/fcgiwrap -a 127.0.0.1 -p 9001

    That's all good. My /etc/gitweb.conf is as follows:

        # path to git projects (<project>.git)
        #$projectroot = "/home/raynes/pubgit";
        $my_uri = "http://mc.raynes.me";
        $home_link = "http://mc.raynes.me/";
        # directory to use for temp files
        $git_temp = "/tmp";
        # target of the home link on top of all pages
        #$home_link = $my_uri || "/";
        # html text to include at home page
        $home_text = "indextext.html";
        # file with project list; by default, simply scan the projectroot dir.
        $projects_list = $projectroot;
        # stylesheet to use
        $stylesheet = "/gitweb/gitweb.css";
        # logo to use
        $logo = "/gitweb/git-logo.png";
        # the 'favicon'
        $favicon = "/gitweb/git-favicon.png";

    And my nginx server configuration is this:

        server {
            listen 80;
            server_name mc.raynes.me;

            location / {
                root /usr/share/gitweb;
                if (!-f $request_filename) {
                    fastcgi_pass 127.0.0.1:9001;
                }
                fastcgi_index index.cgi;
                fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    The only difference from the guide is that I've set fastcgi_pass to 127.0.0.1:9001. When I go to http://mc.raynes.me I'm greeted with a page that simply says "403" and nothing else. I don't have the slightest clue what I did wrong. Any ideas?

    Read the article

  • Rsync: windows 7, synology: login error and permission denied error

    - by loonboon
    Good day to all of you. I'm running into strange/stupid errors, and I hope somebody will be so kind as to help me out. I have to admit I am by no means a guru, so please bear with me. :-)

    Situation:
    - A Synology NAS (which runs Linux) and a Windows 7 desktop (one normal/restricted user, Lisa, and one admin user).
    - Data from the W7 desktop is to be rsynced to the Synology: /volume1/home/Lisa/Backup
    - Rsync command: c:\cygwin\bin\rsync -avz /cygdrive/e/Lisa/ [email protected]:/volume1/homes/Lisa/Backup
    - I've set up ssh per these two threads:
      a. http://www.cesareriva.com/archives/102
      b. http://www.cesareriva.com/archives/112

    Now the horrors begin:
    - root is allowed to run the rsync successfully; however, root doesn't log in automatically, so I can not use rsync in W7 batch scripts, which is of course required.
    - Lisa is allowed to log in automatically, but cannot successfully finish the rsync command because of permission errors: "rsync change dir /volume1/homes/Lisa/Backup failed: permissions denied". This happens for each and every file and subdirectory rsync tries to create. However, the main directory (Backup) is created.

    When I try to copy files from Windows Explorer to the directory 'Backup' using the very same user Lisa, everything goes smoothly. So, obviously, there is a permission problem somewhere: either my rsync command isn't correct, or the folder permissions for homes/Lisa aren't correct (but, then again, Windows 7 copies files to that folder without any problems, so that does make me believe the homes/Lisa permissions aren't the problem). I also tried adding --chmod=Dugo+x --chmod=ugo+r, which I found somewhere on the web, to the rsync command, but this didn't solve anything and gave the exact same errors.

    Would anybody please help me fix this? I am utterly frustrated, because I have been trying for a month to get everything to work and it simply doesn't. I bought the big Synology to end the horrors of 20 external USB disks once and for all (we have many pictures and home vids of our deceased dogs and want to watch these; the horrors being 'what material is on what disk'). I'll gladly return the favour of somebody helping me out by buying you a nice beer (PayPal), if you could end my misery. I am not extremely skilled with Linux (not at all :-( ), so if you could give an extra word where possible, I'd be very grateful. I really hope somebody can help me out.

    Thank you in advance, Lisa
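
    The only next step I can think of trying myself, in case it points someone at the real problem: resetting ownership and permissions on the target directory from an SSH session on the Synology itself. This is a guess on my part (the paths assume the share really lives under /volume1/homes/Lisa, and that the Synology's busybox tools accept these standard options):

        chown -R Lisa /volume1/homes/Lisa/Backup
        chmod -R u+rwX /volume1/homes/Lisa/Backup

    If that is the wrong approach, please say so before I make things worse.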

    Read the article

  • Delete a folder in the currently logged in user's profile

    - by Dan Cole
    I am trying to create a login script, or a .bat file, to delete a folder located in the directory below. I would like the whole "Juniper Networks" folder deleted, with all of its contents. This is on a terminal server:

        C:\Users\(username)\AppData\Roaming\Juniper Networks

    I can write a script for each username, but I want a single script to put in the startup folder that deletes the folder of the current user each time they log in.
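
    Posting what I have so far, in case it really is this simple (untested on the terminal server; it relies on %APPDATA% expanding to the roaming profile of whoever is logging in):

        @echo off
        rem remove the Juniper Networks folder for the currently logged-in user
        if exist "%APPDATA%\Juniper Networks" rd /s /q "%APPDATA%\Juniper Networks"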

    Read the article

  • ASP.NET application within wordpress installation on IIS 7

    - by fro
    Hi all, a client has a WordPress installation on their IIS server, located at www.mydomain.com. We would like to put our ASP.NET application in a subdirectory of the WordPress installation, at something like mydomain.com/asp. When I navigate to that directory I get a standard WordPress "Page not found" error. I am more familiar with .htaccess, so how would I get WordPress to ignore a subdirectory in IIS? Thanks!
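
    One idea I'd like a sanity check on before trying it on the client's server: marking the /asp folder as its own IIS application and dropping a minimal web.config into it that clears the URL Rewrite rules inherited from WordPress's web.config, so requests for that folder stop being rewritten to index.php. A sketch (assuming the site uses the URL Rewrite module, which is how WordPress permalinks are usually handled on IIS7):

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <!-- discard the rewrite rules inherited from the parent WordPress web.config -->
                <clear />
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>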

    Read the article
