Search Results

Search found 4985 results on 200 pages for 'moving'.


  • Why is it good to keep website content files on a separate drive from the system (OS) drive?

    - by Jeffrey
    I am wondering what benefits I would gain by moving all website content files from the default inetpub directory (C:) to something like D:\wwwroot. By default IIS creates a separate application pool for each website, and I am using the built-in user and group (IUSR) as the authentication method. I've made sure each site directory has the appropriate permission settings, so I am not sure what I would gain. The environment looks like this:

    - VMware
    - Windows Server 2008 R2 64-bit
    - IIS 7.5
    - C:\inetpub\site1
    - C:\inetpub\site2

    Also, as this article (moving the iis7 inetpub directory to a different drive) points out, I'm not sure it's worth the trouble of migrating files to a different drive: "Please be aware of the following: Windows servicing events (i.e. hotfixes and service packs) would still replace files in the original directories. The likelihood that files in the inetpub directories have to be replaced by servicing is low, but for this reason deleting the original directories is not possible."
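
    If you do decide to relocate the content, a minimal sketch (using the site1 path above; "site1" as the IIS site name is an assumption, so match it to your actual site) is to copy the content with its NTFS ACLs intact and then repoint the site's root virtual directory:

        rem Copy site content including NTFS permissions (assumes D:\wwwroot exists)
        robocopy C:\inetpub\site1 D:\wwwroot\site1 /E /COPYALL

        rem Repoint the site's root virtual directory to the new location
        %windir%\system32\inetsrv\appcmd.exe set vdir "site1/" -physicalPath:D:\wwwroot\site1

    Per the servicing caveat quoted above, leave the original C:\inetpub directories in place afterwards.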

    Read the article

  • Linux: Can I link multiple destinations via softlinks?

    - by kds1398
    I'm attempting to end up with something similar to this:

    $ ls -l
    lrwxrwxrwx 1 user group 4 Jun 28 2010 foo -> /home/bar
    lrwxrwxrwx 1 user group 4 Jun 29 2010 foo -> /etc/bar

    The intention is to be able to move a file to foo and have it land in both destination directories for now. The goal is to eventually remove the /home/bar link after confirming there are no issues with moving the files to /etc/bar. I am restricted in that I am unable to change or add to the process that moves the files.
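
    A single symlink cannot point at two targets, so one workaround (offered as a sketch, assuming foo can be made a real directory and inotify-tools is available) is a small watcher that fans each arriving file out to both destinations:

        # foo becomes a real directory; each file that lands there is
        # copied to /home/bar and then moved on to /etc/bar
        # (/path/to/foo is a placeholder for foo's actual location)
        inotifywait -m -e close_write -e moved_to --format '%f' /path/to/foo |
        while read -r name; do
            cp -p "/path/to/foo/$name" /home/bar/
            mv "/path/to/foo/$name" /etc/bar/
        done

    Once /etc/bar has proven itself, drop the cp line and eventually retire the watcher.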

    Read the article

  • Change Win7 taskbar position (overriding GPO, Registry Editor, Admin. Rights)

    - by diegocavazos53
    I run the computer center of my faculty, and the problem is that users manage to change the Win7 taskbar position. I don't really know how they do this, even though I have applied many group policies that are specific to the taskbar (like locking it). I have also disallowed users from adding new registry keys, executing the command prompt, or employing scripts. They have regular user rights, and most Win7 tweaking programs need administrator rights to make changes to the GUI. So, in other words: the taskbar is locked, there is a policy that sets its position to the bottom of the screen, users can't see the Control Panel, can't add registry keys, can't use the command prompt, and don't have admin rights. How do they keep moving the taskbar to the top of the screen? Any ideas would be greatly appreciated. Thank you.
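
    One detail that may explain it: on Windows 7 the taskbar edge is stored per user in the binary StuckRects2\Settings registry value, which Explorer itself rewrites when the bar is dragged, and per-user values are hard to lock down completely. As a heavily hedged sketch (the commonly documented layout puts the edge in byte 12: 0=left, 1=top, 2=right, 3=bottom; verify that offset on your build before deploying), a logon or scheduled script could force the bar back to the bottom:

        # Hypothetical enforcement script: reset the taskbar edge byte, then restart Explorer
        $key = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\StuckRects2'
        $v = (Get-ItemProperty -Path $key -Name Settings).Settings
        $v[12] = 3    # 3 = bottom edge (assumed offset)
        Set-ItemProperty -Path $key -Name Settings -Value $v
        Stop-Process -Name explorer -Force
        Start-Sleep -Seconds 2
        if (-not (Get-Process explorer -ErrorAction SilentlyContinue)) { Start-Process explorer }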

    Read the article

  • How do I schedule a task to run every hour indefinitely on Server 2003

    - by JMK
    I am moving a scheduled task from a Windows 7 machine to a Windows Server 2003 machine. On Windows 7 I can configure the task to run every hour indefinitely by setting up a custom trigger (screenshot omitted). On Windows Server 2003, I assume I need to use the advanced schedule options, and I have got as far as the repeat settings (screenshot omitted). Whether I choose duration or time, my task seems to have an expiry date. How do I get it to run indefinitely? The only thing I can think of at the moment is to set up 24 schedules for the task, one for each hour, but there has to be a more elegant way. Thanks
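
    For what it's worth, a sketch of doing the same thing from the command line (schtasks ships with Server 2003; the task name and script path are placeholders):

        rem Runs every hour, starting at midnight, with no end date
        schtasks /create /tn "HourlyJob" /tr "C:\scripts\job.cmd" /sc hourly /st 00:00:00

    Note that Server 2003's schtasks expects the /st time as HH:MM:SS.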

    Read the article

  • Startup Cassandra layout

    - by davidkomer
    We've got a relatively low-traffic site (~1K pageviews/day) hosted on a single server, and we expect it to grow significantly over the next few years. I'm thinking of moving over to Rackspace Cloud Servers or EC2 and firing up 3 nodes (all on CentOS): 2 x web (Apache) behind a load balancer, and 1 x MySQL (for the WordPress-powered part). The question is where to put Cassandra right now. Should it sit on each web node, or on the MySQL node? My thought right now is to put it on the web nodes. It's my understanding that Cassandra offers fault tolerance (i.e. if we take a node down, the site is still operational), so even with only two nodes we'd have that benefit, as opposed to putting it on the single MySQL node. Also, as we scale up and add another web node, a Cassandra instance can come along with it, and the PHP can always run its queries on localhost. Is this a good idea?
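
    One caveat worth making explicit: the fault tolerance only materializes if the keyspace actually replicates to both web nodes. A sketch, assuming a Cassandra version recent enough to speak CQL:

        -- Keep a full replica on each of the two web nodes, so either can serve alone
        CREATE KEYSPACE site_data
          WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 2};

    With RF=2, reads and writes at consistency level ONE survive the loss of either node; QUORUM (which is 2 of 2 here) would not.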

    Read the article

  • How to virtualize an OEM Windows install

    - by jumentous
    I've bought a new computer, and as always it came with Windows 7 pre-installed. I'm a Linux user by default, but I still keep a virtual Windows installation around. Is it possible to install my Linux distribution and use the OEM license that came with the computer for the virtual instance? I have no intention of moving the license off the physical machine, so I'm sure I could argue that I'm not violating the license, but I don't expect this to work and activate without great legal battles. In the event that it doesn't work, what other options do I have? Can I shrink the physical partition and have QEMU boot it? My thought is that Windows would detect the change in hardware and fail. What can I do with this Windows install as a Linux user?
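
    Mechanically, booting the physical partition from a VM does work. A sketch, assuming QEMU/KVM and that Windows lives on /dev/sda (raw disk access needs root, and OEM activation may still object to the changed "hardware"):

        # Boot the physical disk in a VM; never do this while the host
        # has any of that disk's partitions mounted
        sudo qemu-system-x86_64 -enable-kvm -m 4096 \
            -drive file=/dev/sda,format=raw,cache=none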

    Read the article

  • How do I use XQuartz with ssh on OS X?

    - by cwd
    I've downloaded the latest stable version of XQuartz on my Snow Leopard machine, and I'm trying to make an ssh connection with X forwarding, but X11 keeps opening. How can I get OS X to use XQuartz?

    - I had X11 installed
    - I downloaded and installed XQuartz
    - X11 is not open / running
    - XQuartz is open and running
    - I try to connect to a remote system using iTerm2: ssh user@remote -X

    X11 opens. XQuartz is still open, but I doubt it is doing anything. I also tried moving X11 to the trash, but then the ssh connection will not complete, even though XQuartz is open. I also get two warnings which I don't understand how to fix, even after reading the ssh man page:

    Warning: untrusted X11 forwarding setup failed: xauth key data not generated
    Warning: No xauth data; using fake authentication data for X11 forwarding.
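
    A sketch of the usual client-side fix, assuming XQuartz's tools live in their standard /opt/X11 location: point ssh at XQuartz's xauth in ~/.ssh/config, then log out and back in so DISPLAY points at the XQuartz launchd socket:

        # ~/.ssh/config
        Host *
            ForwardX11 yes
            ForwardX11Trusted yes
            XAuthLocation /opt/X11/bin/xauth

    The two warnings mean ssh could not run a working xauth, which is consistent with X11.app's copy being missing or broken; the XAuthLocation line addresses exactly that.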

    Read the article

  • How to choose an open source, Asterisk friendly firewall?

    - by Lucas
    I'm in pain. We are moving to a SIP-based VOIP system, and for whatever reason we could not get our hosted Asterisk solution to work with our SonicWall. Our VOIP provider gave up and is recommending an open source alternative, pfSense. A little background: we have about 30 users on our network, and we use a few IPsec VPN connections to remote networks. I would like, but don't need, application-layer filtering. We're active internet users, so proper traffic shaping is probably a concern. How can I tell whether an open source firewall will handle VOIP smoothly with a hosted Asterisk system?
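
    The usual SIP sticking points are ALG rewriting and the RTP port range. As a rough sketch of the underlying pf rules (pfSense generates these from its GUI; the PBX address and the 10000:20000 RTP range are assumptions, so match them to your provider):

        # Let SIP signalling and RTP media out to the hosted PBX, keeping
        # state so replies return without any SIP ALG mangling the packets
        pbx = "203.0.113.10"
        pass out on $wan_if proto udp from any to $pbx port 5060 keep state
        pass out on $wan_if proto udp from any to $pbx port 10000:20000 keep state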

    Read the article

  • Is there a reason the partition tool GParted doesn't show a percentage finish number initially?

    - by Jian Lin
    I am using GParted more and more, and it seems to do reliable work. I just wonder why, when the task is to resize a 250GB partition to 190GB and then create a new partition, for the first 10-15 minutes there is only a blue bar moving left and right, with no indicator of what percentage is done. After that 10-15 minutes, it does show something like 1:05:00 left to finish the job. Update: at first I thought there was no percentage or time remaining shown at all, but after waiting 10-15 minutes it does appear. I just wonder why it doesn't show at first.

    Read the article

  • Deleting pen drive contents and the Trash in Mac OS X?

    - by Warrior
    I am using a MacBook Pro. I copied some data from my pen drive to my Mac and deleted the originals by moving them to the Trash. After that, the pen drive's Get Info window reports more space in use than the remaining files account for. Only once I empty the Trash do I see the correct usage and regain the space for copying data. Is the Mac designed to behave this way, or is there some way to delete files other than the "Move to Trash" option? Thanks.
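
    This is by design: files trashed from an external volume sit in a hidden .Trashes folder on that volume until the Trash is emptied. From Terminal you can delete immediately instead (the volume name is a placeholder, and rm is unrecoverable):

        # Remove a file outright, bypassing the Trash
        rm /Volumes/MYUSB/somefile
        # Or reclaim space already sitting in the volume's hidden trash
        rm -rf /Volumes/MYUSB/.Trashes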

    Read the article

  • What is your company's stance on Developers using Laptops?

    - by codepunk
    I am a developer, and my company is moving towards a "no laptops" policy for fear of laptops being lost or stolen and source code being compromised. Now, I don't work for NASA, the military, or anything labeled Top Secret, but our code is very important to our business nonetheless (as any source code would be). I'll be honest: I disagree with this policy and wanted to see what others think. I'd like to know:

    - What is your team/company's stance on laptops?
    - Your company's size and/or field (small, medium, or large, Fortune 500, etc.)
    - Whether you've had to take any extra precautions (signing additional legal agreements, ensuring your hard drive is encrypted, etc.)

    Thanks!

    Read the article

  • Xen Disk Performance Issues

    - by user98651
    I'm currently running Xen PV on CentOS 5, with my domUs as flat files on a hardware RAID controller (write cache enabled) formatted with XFS. On the dom0 I can get about 500MB/s on a 2GB dd write from /dev/zero; on the domUs I'm lucky to get 10MB/s (usually around half that). I've tried changing the disk scheduler to noop in the domUs, changed some mount parameters, and tweaked the resource allocations of both the dom0 (prioritized CPU) and the domUs (increased RAM and VCPU allocations). None of these steps produced any noticeable change in performance. My instinct is that it is not a hardware problem, given the solid performance of the dom0. Any ideas what might be causing this? I'm considering moving to LVM-based domUs.
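
    One thing to rule out first: a plain dd from /dev/zero on the dom0 largely measures the RAID controller's write cache. A sketch of a fairer comparison, run identically in dom0 and domU (the path is a placeholder):

        # Bypass the page cache so both measurements exercise the real disk path
        dd if=/dev/zero of=/tmp/ddtest bs=1M count=2048 oflag=direct
        rm /tmp/ddtest

    If the flat-file domUs are attached via the loopback (file:) backend, that alone can account for an order-of-magnitude gap, which makes LVM-backed phy: domUs a reasonable next step.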

    Read the article

  • How to move an existing Win7 setup to a RAID-1 array?

    - by Matthew Scharley
    Currently I have the following setup:

    - Drives A/B: identical HDDs
    - Drive C: 2TB external drive with plenty of free space
    - Hardware RAID controller

    Drive A is my boot drive (Win7), Drive C is a data drive, and Drive B, which I only recently received, is blank. What is the best way to move off Drive A, set up the hardware RAID, and then migrate my data back onto the RAID array? I'm proficient with Linux, but I'm not sure I can get away with simply using dd here. There is currently enough free space on Drive C to hold 5-6 disk images of Drive A, so space isn't an issue.
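
    dd can handle the imaging half. A sketch from a Linux live CD, assuming Drive A appears as /dev/sda and Drive C is mounted at /mnt/external (device names are assumptions; confirm with fdisk -l before writing anything):

        # Image the whole boot disk onto the external drive
        dd if=/dev/sda of=/mnt/external/driveA.img bs=4M conv=noerror,sync

        # ...create the RAID-1 array in the controller BIOS...

        # Restore onto the new array (assumed here to appear as /dev/sdX)
        dd if=/mnt/external/driveA.img of=/dev/sdX bs=4M

    The restore is the risky half: the array must present a volume at least as large as the original disk, and Win7 may need its boot records repaired or the controller's driver injected before it will boot.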

    Read the article

  • How do I improve picture quality while streaming live football (soccer) from my Dell D600 to an HDTV?

    - by Bob
    I have fibre broadband with speeds up to 38Mb/s. My Dell D600 has its maximum 2GB RAM and an ATI Mobility Radeon 9000 4xAGP 32MB card in it. Its TV support, the spec says, is NTSC or PAL in S-video and composite modes with a 7-pin mini-DIN connector (optional S-video to composite video adapter cable), plus a VGA port, which I am using at the moment. The laptop runs Windows XP on an 80GB HD with only Windows, necessary updates, and antivirus software on it. There is HDMI on the TV, but not on the laptop. Fairly slow-moving and close-up pictures aren't too bad, but when the movement is fast (a shot on goal) or in the distance, I can't see the ball and the images go out of focus.

    Read the article

  • Twonky won't scan /srv/mm/Music (etc.) for media

    - by Hamid
    Is there something special about /srv/mm/ that makes Twonky server refuse to scan there? I previously had my system set up with all my Music, Video, and Photo folders in /srv/mm, shared by Samba, MiniDLNA, etc., with no problems. I came to install Twonky to replace MiniDLNA, and after two days of tearing my hair out changing permissions and owners of the directories, I ended up making a new directory at /multimedia and moving my Music, Video, and Photo folders in there. Twonky then scanned them all straight away with no problems. I'm running Arch Linux (PlugApps, specifically) on a NAS. The solution is already implemented (moving the directory); I'm just wondering, technically, why Twonky might have refused to look for my media in the /srv/mm directories.
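
    A first guess worth checking (assuming Twonky runs as its own unprivileged user): every component of the path needs the execute (traverse) bit for that user, and /srv is often locked down tighter than a top-level directory you create yourself. Quick diagnostics, with "twonky" as a placeholder user name:

        # Show the owner and mode of every component along the path
        namei -l /srv/mm/Music
        # Try to read the tree as the daemon's user
        sudo -u twonky ls /srv/mm/Music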

    Read the article

  • Best Server Ghost-Like Tool For Windows

    - by John Dibling
    I'm looking for advice on which tool we should use to clone servers. In the short term we will be cloning identical hardware, but in the long run we may want to create one image and replicate it onto a different class of machine. For example, as new servers are released by Dell, we will want to continue to use the same image we already made. Right now our servers are Windows (Server 2008 & Server 2008 R2), but moving forward we may need Linux support as well. Ghost Solution Suite 2.5 seems to be the canonical tool. Are there alternatives? Recommendations/reviews?

    Read the article

  • DNS Help: Move domain, not mailserver

    - by Preserved
    I'm in the middle of launching a new website for an already-in-use domain. The domain has a complicated email system, so we'd like to move that over to the new server a bit later on. Currently the domain's DNS is managed by the current webhost. I plan on moving DNS management back to Network Solutions, then pointing the A record at the new website's IP. However, the DNS currently has the MX record pointing at the same address as the A record.

    Right now:

    - A record: mydomain.com points to IP address 198.198.198.198
    - MX record: mydomain.com points to IP address 198.198.198.198

    What I want:

    - A record: mydomain.com points to the IP address of the new server
    - MX record: somehow points to the current, existing mailserver

    Does this even make sense?
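
    It does make sense. The catch is that an MX record must point at a hostname, not an IP address, so you give the old mail machine a name of its own. A sketch in zone-file notation (mail.mydomain.com is a name you would create; 203.0.113.50 stands in for the new server's IP):

        mydomain.com.        IN  A   203.0.113.50      ; new web server
        mail.mydomain.com.   IN  A   198.198.198.198   ; old box, still handling mail
        mydomain.com.        IN  MX  10 mail.mydomain.com.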

    Read the article

  • Recommendation for tuning 100s of SQL databases

    - by wayne
    I'm running several SQL Servers, each hosting a few hundred multi-gigabyte databases for customers. They are all set up homogeneously as far as the schemas are concerned; however, customer usage of the data differs quite a lot from database to database. What would be the best way to auto-index/profile/tune this large number of databases? With at least 600 catalogs, I can't have someone manually profile and index each one according to its usage patterns. I'm currently running SQL 2005 but will be moving to 2008, so solutions that work with either are fine.
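
    One mechanical starting point that works on both SQL 2005 and 2008 is the missing-index DMVs, which can be queried across the whole instance and scripted on a schedule (the TOP 50 cutoff is an arbitrary placeholder):

        -- Highest-impact missing indexes the optimizer has recorded, instance-wide
        SELECT TOP 50
               db_name(d.database_id) AS db,
               d.statement AS table_name,
               d.equality_columns, d.inequality_columns, d.included_columns,
               s.user_seeks * s.avg_total_user_cost * (s.avg_user_impact / 100.0) AS score
        FROM sys.dm_db_missing_index_details d
        JOIN sys.dm_db_missing_index_groups g ON g.index_handle = d.index_handle
        JOIN sys.dm_db_missing_index_group_stats s ON s.group_handle = g.index_group_handle
        ORDER BY score DESC;

    The companion view sys.dm_db_index_usage_stats flags indexes that are never read, so pruning can be automated the same way.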

    Read the article

  • Entourage to Outlook Migration questions

    - by George Bluff
    I am currently migrating a user's information from a POP email account to my Exchange server. I have already migrated them over to my hosted Exchange, and their email is flowing properly. Now the user is moving from Entourage on a Mac (10.7) to Outlook 2010 on a PC (Windows 7). I was wondering what the easiest way is to migrate him, since there are no .pst files. I have been able to get his email over by dragging the inbox from Entourage to the desktop, converting the files to .eml using IMAPSize, importing them into Outlook Express (which will only work on Windows XP), exporting to a .pst, and then importing that into the new account. It takes a while with large mailboxes, but it works. The issue I am now having is with calendar items. I exported the calendar and got a folder with all the .ics files, but Outlook 2010 doesn't seem to have an easy way to import all of them. Any thoughts?
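
    One untested possibility: drive Outlook's COM object model from PowerShell and feed it each .ics file. NameSpace.OpenSharedItem understands iCalendar files, though behavior with multi-event files may vary, so treat this strictly as a sketch:

        # Import every .ics in a folder into the default Outlook calendar
        $outlook  = New-Object -ComObject Outlook.Application
        $ns       = $outlook.GetNamespace("MAPI")
        $calendar = $ns.GetDefaultFolder(9)   # 9 = olFolderCalendar
        Get-ChildItem 'C:\export\*.ics' | ForEach-Object {
            $item = $ns.OpenSharedItem($_.FullName)
            $item.Move($calendar) | Out-Null
        }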

    Read the article

  • PHP pages are not parsed by Apache on CentOS

    - by infotoknowledge
    I have installed CentOS 5.x, Apache 2.2, PHP 5.3, and MySQL 5.5. I also installed phpMyAdmin, and I am able to access it through the browser without any issues. However, when I create a simple index.php with the phpinfo() function in the default directory, that page is served without PHP parsing. As we all know, phpMyAdmin is a PHP application, and it works fine from the same server, yet a simple PHP page in the doc root does not. I even tried moving my test page into the phpMyAdmin folder and accessing it there, with no success. Please note that I updated the httpd.conf file with the appropriate directives based on the PHP installation guide.

    - docroot: /var/www/html
    - phpMyAdmin folder: /var/www/html/phpMyAdmin

    Any help is appreciated.
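
    For reference, the minimal mod_php wiring in httpd.conf looks roughly like this (module path and handler name as the stock CentOS php package installs them; verify against your build, especially for a hand-compiled PHP 5.3):

        LoadModule php5_module modules/libphp5.so
        AddHandler php5-script .php
        DirectoryIndex index.html index.php

    Since phpMyAdmin parses while the doc root does not, also look for a <Directory> or .htaccess override that unsets the handler, and restart httpd after any change.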

    Read the article

  • Automatically copy files out of directory

    - by wizard
    I had a user's laptop stolen recently during shipping, and it was set up with Windows Live Sync. The thief's (or buyer's) kids took some photos of themselves, and they were synced to the user's My Documents. I had just finished moving the user's files out of the synced My Documents folder when I noticed this. Later they took some more photos and a video. I wrote a batch script to copy files out of the synced directory every 5 minutes into a dated directory, and in the end I had a lot of copies of the same few files. Ignoring what Windows Live Sync offers (at the time there was no way to undelete files; I've moved on to Dropbox, so this isn't really an issue for me anymore), what's the best way to preserve changes and files from a directory? I'm interested in Windows solutions, but if you know of a good way on *nix, please go ahead and share.
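
    On the *nix side, a sketch of a de-duplicating version of the same idea: rsync with --backup shunts each replaced or deleted version into a dated directory, while unchanged files are never copied twice (paths are placeholders):

        # Mirror the synced folder, archiving every overwritten or deleted version
        rsync -a --delete \
              --backup --backup-dir="/archive/$(date +%F_%H%M)" \
              /synced/dir/ /mirror/dir/

    Run from cron every five minutes, this keeps one copy per version instead of a full snapshot per pass.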

    Read the article

  • How can I make the storage of C++ lambda objects more efficient?

    - by Peter Ruderman
    I've been thinking about storing C++ lambdas lately. The standard advice you see on the Internet is to store the lambda in a std::function object. However, none of this advice ever considers the storage implications. It occurred to me that there must be some seriously black voodoo going on behind the scenes to make this work. Consider the following class that stores an integer value:

        class Simple
        {
        public:
            Simple( int value )
            {
                puts( "Constructing simple!" );
                this->value = value;
            }

            Simple( const Simple& rhs )
            {
                puts( "Copying simple!" );
                this->value = rhs.value;
            }

            Simple( Simple&& rhs )
            {
                puts( "Moving simple!" );
                this->value = rhs.value;
            }

            ~Simple()
            {
                puts( "Destroying simple!" );
            }

            int Get() const { return this->value; }

        private:
            int value;
        };

    Now, consider this simple program:

        int main()
        {
            Simple test( 5 );
            std::function<int ()> f = [test] () { return test.Get(); };
            printf( "%d\n", f() );
        }

    This is the output I would hope to see from this program:

        Constructing simple!
        Copying simple!
        Moving simple!
        Destroying simple!
        5
        Destroying simple!
        Destroying simple!

    First, we create the value test. We create a local copy on the stack for the temporary lambda object. We then move the temporary lambda object into memory allocated by std::function. We destroy the temporary lambda. We print our output. We destroy the std::function. And finally, we destroy the test object. Needless to say, this is not what I see. When I compile this on Visual C++ 2010 (release or debug mode), I get this output:

        Constructing simple!
        Copying simple!
        Copying simple!
        Copying simple!
        Copying simple!
        Destroying simple!
        Destroying simple!
        Destroying simple!
        5
        Destroying simple!
        Destroying simple!

    Holy crap, that's inefficient! Not only did the compiler fail to use my move constructor, but it generated and destroyed two apparently superfluous copies of the lambda during the assignment. So, here finally are the questions: (1) Is all this copying really necessary? (2) Is there some way to coerce the compiler into generating better code? Thanks for reading!
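
    For what it's worth, a sketch of one mitigation (on a compiler with more complete rvalue-reference support than VC2010's, this should replace at least some of the copies with moves):

        // Name the lambda, then move it into the std::function, so the
        // capture itself is the only unavoidable copy of 'test'
        auto lam = [test] () { return test.Get(); };
        std::function<int ()> f = std::move( lam );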

    Read the article

  • "Save the changes" message after removing protection from an Excel 2010 workbook

    - by abbasi
    Some time ago I protected an Excel 2010 file via File > Protect Workbook > Encrypt with Password and gave it a password. I have now removed that password using the method below:

    1. Open the workbook and use Save As.
    2. In the lower right of the file window, click "Tools".
    3. Choose "General Options".
    4. Clear the password.
    5. Save over your old file.

    The file now opens without asking for a password. But the problem is that when I open it and close it immediately, even without moving the active cell, the message "Do you want to save the changes you made to 'test.xlsx'?" appears. No changes have been made to the file, so why do I see this message every time I close it? Has the file been corrupted?

    Read the article

  • How do you place an Excel Sheet/Workbook onto a C# .NET Winform?

    - by incognick
    I am trying to create a stand-alone application in Visual Studio 2008 C# .NET that will house Excel workbooks (2007). I am using Office.Interop to create the Excel application and open the workbooks via Workbooks.Open(...). The Interop does not provide any functionality to "move" the workbooks onto a form, so I turned to the P/Invoke Win32 library. I am able to move the entire Excel application onto a WinForm with great success:

        // pseudo code to give you the idea
        excel = new Excel.ApplicationClass();
        SetParent(excel.Hwnd, form.handle);

    This allows me to customize the form and control user input. All right-click commands and formula editing work properly. Now, the issue I run into is when I want to open two workbooks in two separate forms. I do this by creating two Excel application classes and placing each of those in its own form. When I try to reference one workbook from another via =[Book2]Sheet1!A1, for example, it does not update. This is expected, as each application is running under its own thread/process. Here are the alternatives I have tried; if you have any suggestions, I would be greatly appreciative. (OLE is not an option; VSTO must be available.)

    1. Create a single application class and move the workbook window into my form. Results: the window moves into my form and displays correctly; however, no right-click or left-click works on the form, and it never gains focus. (I have tried to set focus manually, and it does not work either.) My guess is that by moving the window outside of the XLDESK window (viewable in Spy++ for the Excel application), the workbook window (EXCEL7) does not receive the correct window messages to gain focus and behave properly. This leads me to:

    2. Move the XLDESK window handle into my form. Results: this makes the workbook clickable again, but it also has the undesired result of moving all child windows into the same form.

    3. Create a main Excel application that creates the workbooks, create a new Excel application for each new window, and move each workbook under the new application's XLDESK window. Results: the same effect as the 1st option: unable to click in the workbook. This must mean that the thread that created the workbook is also responsible for its events.

    4. Create a Windows hook that watches the WndProc procedure. Results: no events watched. The targeted thread must export the hook proc from a DLL; Excel does not do this, and thus you cannot inject into its process this way (unless someone can prove me wrong). I am able to watch all threads within my own process, but not those of an outside process, and Excel is created as a separate process.

    5. Subclass NativeWindow. Results: same as #4. After I move the window into my form, all events are captured until the mouse is directly over the Excel sheet, making the sheet seem unclickable.

    One idea I haven't tried yet is to continually save the Excel sheet as the user edits it. This should update all references, but I feel it would hurt system performance. There will be numerous chart references as well, and I'm not sure that solution wouldn't cause problems further down the road. I think, in the end, all the workbooks need to be created by the same Excel application and then moved to get the desired results, but I can't seem to find the correct way to move the windows without disabling user input in the process. Any suggestions?
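
    For reference, a concrete version of the pseudocode above (a sketch; Application.Hwnd comes from the Interop as an int, hence the IntPtr wrapping):

        // requires: using System.Runtime.InteropServices;
        //           plus a reference to Microsoft.Office.Interop.Excel

        // user32.dll call used to reparent Excel's main window into the form
        [DllImport("user32.dll", SetLastError = true)]
        static extern IntPtr SetParent(IntPtr hWndChild, IntPtr hWndNewParent);

        var excel = new Microsoft.Office.Interop.Excel.Application();
        excel.Visible = true;
        SetParent(new IntPtr(excel.Hwnd), this.Handle);   // 'this' is the WinForm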

    Read the article

  • Interactive command to let user change directory in bash

    - by Rich
    I am looking for a curses-based way (bash, C, doesn't really matter) of letting a user choose a folder or even a file in roughly the same way they would in Midnight Commander. I envisage using up/down to move the cursor, Esc to cancel, and Enter to select the item under the cursor. If the item is a file, return the full path to that file; if the item is a folder, change into that folder. Does anyone know of one that exists? If not, how would I go about writing one? I'm mainly a Java programmer, so I could use JavaCurses, but that feels a bit like overkill.
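
    If a ready-made tool is acceptable, the curses-based dialog utility already implements essentially this picker. A sketch:

        # Arrow keys navigate, Enter confirms, Esc cancels; the chosen path
        # is printed to stdout thanks to --stdout
        choice=$(dialog --stdout --fselect "$HOME/" 20 70)
        [ -n "$choice" ] && echo "picked: $choice"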

    Read the article
