Search Results

Search found 20566 results on 823 pages for 'folder structure'.

Page 322/823 | < Previous Page | 318 319 320 321 322 323 324 325 326 327 328 329  | Next Page >

  • mshtml.dll latest version for Internet Explorer 8, Windows XP Service Pack 3

    - by AllSolutions
    Many applications on my system (Internet Explorer 8, Yahoo Messenger, Skype 10) are crashing, and the error details show the module name mshtml.dll. I checked the version of mshtml.dll in the system32 folder: it is 8.0.6001.19170. My Internet Explorer version is 8.0.6001.18702. I am not concerned about the IE crashes, because I generally use Firefox, but how do I solve the crashes in the other applications, which are due to mshtml.dll? I have moved to Windows XP Service Pack 3 (32 bit). I have tried to update Internet Explorer 8 (from Tools - Windows Update), but again it crashes. I cannot migrate to IE 9, as it requires Vista or Windows 7. I have applied the Cumulative Security Update for IE8, which has this file name: IE8-WindowsXP-KB2618444-x86-ENU.exe. I could not get much info from Microsoft sites or Google. I do not want to use the Automatic Updates feature of Windows. Can somebody give the download links for mshtml.dll and any associated files, which I can replace in the system32 folder? Thanks.
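
    A hedged note rather than a download link: mshtml.dll is a Windows File Protection file on XP, so replacing it by hand tends to be undone or to break later updates. Running System File Checker is the safer first step; the exact version it restores depends on the service pack and security updates installed:

        rem Run from Start - Run - cmd; may ask for the XP SP3 source files
        sfc /scannow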

    Read the article

  • WS2008 subst in Logon script does not "stick"

    - by Frans
    I have a terminal server environment exclusively with Windows Server 2008. My problem is that I need to "map" a drive letter to each user's Temp folder. This is due to a legacy app that requires a separate Temp folder for each user but which does not understand %temp%. So, just add "subst t: %temp%" to the logon script, right? The problem is that, even though the command runs, the subst doesn't "stick" and the user doesn't get a T: drive. Here is what I have tried. The simplest version:

        'Mapping a temp drive
        Set WinShell = WScript.CreateObject("WScript.Shell")
        WinShell.Run "subst T: %temp%", 2, True

    That didn't work, so I tried this for more debug information:

        'Mapping a temp drive
        Set WinShell = WScript.CreateObject("WScript.Shell")
        Set procEnv = WinShell.Environment("Process")
        wscript.echo(procEnv("TEMP"))
        tempDir = procEnv("TEMP")
        WinShell.Run "subst T: " & tempDir, 3, True

    This shows me the correct temp path when the user logs in - but still no T: drive. I decided to resort to brute force and put this in my login script:

        'Mapping a temp drive
        Set WinShell = WScript.CreateObject("WScript.Shell")
        WinShell.Run "\\domain\sysvol\esl.hosted\scripts\tempdir.cmd", 3, True

    where \\domain\sysvol\esl.hosted\scripts\tempdir.cmd has this content:

        echo on
        subst t: %temp%
        pause

    When I log in with the above, the command window opens up and I can see the subst command being executed correctly, with the correct path. But still no T: drive. I have tried running all of the above scripts outside of a login script and they always work perfectly - this problem only occurs when doing it from inside a login script. I found a passing reference on an MSFN forum about a similar problem when the user is already logged on to another machine - but I have this problem even without being logged on to another machine. Any suggestion on how to overcome this will be much appreciated.
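
    One avenue worth checking, offered only as a hedged sketch: with UAC enabled on Server 2008, drive mappings created under one half of a split token (elevated vs. filtered) are not visible to the other half. The EnableLinkedConnections registry value is the documented workaround for mapped network drives and may or may not help with subst mappings, so treat this purely as something to test (it needs a reboot; the value name and path are the documented ones):

        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableLinkedConnections /t REG_DWORD /d 1 /f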

    Read the article

  • Celery - minimize memory consumption

    - by Andrew
    We have ~300 celeryd processes running under Ubuntu 10.04 64-bit; when idle, every process takes ~19 MB RES, ~174 MB VIRT, so that's around 6 GB of RAM at idle for all processes. In the active state, a process takes up to 100 MB RES and ~300 MB VIRT. Every process uses minidom (the XML files are < 500 KB, with a simple structure) and urllib. The question is: how can we decrease RAM consumption - at least for idle workers? Perhaps some Celery or Python options may help? And how do we determine which part takes most of the memory?
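
    Not an authoritative answer, but two worker options commonly used to keep celeryd memory in check are worth sketching: lowering concurrency for mostly idle queues, and recycling each worker process after a fixed number of tasks so any memory it has grown gets returned. The flag spelling varies by Celery version (in the 2.x series it is roughly as below), and the numbers are placeholders:

        # hypothetical invocation - tune concurrency and task count to your workload
        celeryd --concurrency=10 --maxtasksperchild=100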

    Read the article

  • Pinta crashes on load in Windows 7 x64 Home Premium

    - by DanH
    Unfortunately I can't really give much more information, unless anybody can suggest where I should look for logs or dumps, etc. Simply put, when I load the program I immediately get the error "Pinta Has Stopped Working". I have tried:
    - Running as administrator
    - Installing to different partitions
    - Installing to folders with no spaces in the directory structure
    I have installed GTK# for .NET 2.12.9, as required in order to install Pinta. Thanks for any help!

    Read the article

  • Granting read-write rights to my web application on VPS

    - by davykiash
    I am currently testing a bulk CSV import feature in a web application and I came across the error "The given destination is not writeable". My application is Zend based and uses the MVC structure:

        application
        --uploads
        library
        --Zend
        public
        --index.php

    What Ubuntu command do I execute to safely grant the necessary rights to my uploads folder in my web application?
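
    A hedged sketch of the usual approach on Ubuntu, assuming Apache runs as the www-data user and that application/uploads is the destination the importer is complaining about (adjust the path, user and mode to your setup):

        sudo chown -R www-data:www-data application/uploads
        sudo chmod -R 775 application/uploads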

    Read the article

  • Running an rsync sweep before initializing lsyncd for synchronizing instances on EC2

    - by chrisallenlane
    My company uses several EC2 servers that will scale up and down according to the load we're receiving on our sites at any given moment. For the sake of our discussion here, we're running four instances:

        master.ourdomain.com - the file syncing "hub" of the webservers
        www1/www2/www3.ourdomain.com - three webservers which turn on or off as dictated by load

    I'm using lsyncd to keep all of the webservers in sync, and for the most part, it's working quite well. We're using a two-way syncing scheme, such that each webserver syncs against master, and master syncs against each webserver. Thus, the webservers are kept in sync, even though they aren't syncing against each other directly. I'm having one problem that I'm having a hard time solving, though. It occurs under these circumstances: when changes are made on master (perhaps after we've pushed new code) while some of the redundant webservers are sleeping, and then a sleeping webserver wakes up to absorb load. Under that circumstance, I would like the following to happen: first, the newly-awoken webserver should sync its file structure - one way - against master, to bring its web application code up to date. Then, and only then, should it begin pushing changes in its file structure back to master. Unfortunately, currently, when a sleeping server is started, lsyncd pushes changes back to master before updating its own codebase, thus overwriting new code with old. So, before lsyncd starts, I'd like to be able to synchronize the webserver's code against master's, perhaps by running a simple one-way rsync between the two machines. We're running lsyncd v2, and I've tried to make this happen by using the "bash" configuration options documented in the lsyncd manual. My configuration file looks like this:

        settings = {
            logfile = "/home/user/log/lsyncd/log.txt",
            statusFile = "/home/user/log/lsyncd/status.txt",
            maxProcesses = 2,
            nodaemon = false,
        }
        bash = {
            onStartup = "rsync [email protected]:/home/user/www /home/user/www"
        }
        sync{
            default.rsyncssh,
            source="/home/user/www/",
            host="[email protected]",
            targetdir="/home/user/www/",
            rsyncOpts="-ltus",
            excludeFrom="/home/user/conf/lsyncd/exclude"
        }

    (I've obviously redacted that file somewhat to protect the identities of the guilty.) Simply put, though, this just isn't working. How else might I approach this problem? I was looking at the --delete-after option in man rsync, but I don't think that does what I'm looking for. Are there any suggestions about how I should approach this problem? Thanks for lending your time and expertise. Chris
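
    One hedged way to get the ordering right, sketched under the assumption that lsyncd is launched from an init script or wrapper you control: do the one-way pull from master first, and only start lsyncd once that rsync has finished. The host, paths and config file location below are placeholders mirroring the redacted config:

        #!/bin/sh
        # Pull current code from master first; only then hand over to lsyncd.
        rsync -ltus user@master.ourdomain.com:/home/user/www/ /home/user/www/
        exec lsyncd /home/user/conf/lsyncd/lsyncd.conf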

    Read the article

  • Extract a section of a tgz file

    - by TRiG
    I have a 28.5 GB .tgz file which was created on the command line of a Linux computer, compressing one folder and all its many many subfolders. I now want to extract a single sub-sub folder from that .tgz file, using 7zip on Windows Vista. I can't see a way to do it. Opening the .tgz file in 7zip just shows the .tar file inside it. There doesn't seem to be any way to browse that .tar file and extract the section I want. I assume there is a way to do this, but I can't see it. Simply double-clicking on the .tar file brings up a progress bar which runs slowly till my computer complains it's running out of space; I imagine it's trying to extract the whole thing. Searching for "extract section of tgz" and "extract tgz subfolder" and similar found me a way to do it on the Linux command line, but no obvious way to do it on Windows. (Most results found were about extracting into a subfolder, not extracting a subfolder out of the archive.)
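
    For what it's worth, a hedged sketch of how this is often done with 7-Zip's command-line tool (7z.exe, installed alongside the GUI), which avoids unpacking the whole tar to disk first; the folder path inside the tar and the output directory are placeholders:

        rem Stream the decompressed tar into a second 7z instance and extract only one folder
        7z x archive.tgz -so | 7z x -si -ttar -o"C:\output" "path/inside/tar/subfolder"

    Whether the inner path uses forward or backward slashes depends on how the archive was created, so listing the contents first (7z x archive.tgz -so | 7z l -si -ttar) can help get the path right.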

    Read the article

  • Set up multiple websites on a local web server

    - by mickburkejnr
    I have spent the last few days setting up a CentOS 6 server on my local network so that I can host multiple projects that I'm currently working on. Everything has been set up so that I access the server by typing 192.168.1.10 and the Apache test page comes up. What I'm aiming to do is to access different projects by typing in 192.168.1.10/project, and then view each project as if it were on its own standalone server. I have thought about just sticking these sites inside folders on the server and then accessing them that way, but a lot of my projects use CakePHP so this isn't feasible. So what I need to do is create VirtualHosts in Apache to allow me to do this, but without using a domain name. I want to stick to using the IP address of the machine (which is static). Any ideas?

    EDIT: I've followed Peter's suggestion, but now I have a new problem. In the httpd.conf file I have entered the following information:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /www/html/project1
            ServerName local.project1.com
            ErrorLog logs/local.project1.com-error_log
            CustomLog logs/local.project1.com-access_log common
        </VirtualHost>

    And now Apache is saying:

        Starting httpd: Warning: DocumentRoot [/www/html/project1] does not exist

    When it clearly does exist. I've disabled SELinux and I can confirm it isn't turned on. I've also checked the ownership of the folder, and it's owned by root. I can also save files to these folders using a guest FTP account (which isn't associated with root), so the folders are being listed and can be written to. But when I try the folder in a web browser it doesn't seem to work either. I've also done a reboot of the server and the problem persists. What should I change in order to resolve this?
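
    Since the goal is 192.168.1.10/project rather than separate hostnames, a hedged alternative sketch is to keep a single default vhost and map each project with an Alias directive (name-based VirtualHosts need distinct ServerNames, which an IP-only setup doesn't provide). The paths below are placeholders and the directory syntax matches the Apache 2.2 shipped with CentOS 6:

        <VirtualHost *:80>
            ServerName 192.168.1.10
            DocumentRoot /var/www/html
            Alias /project1 /var/www/html/project1
            <Directory /var/www/html/project1>
                # CakePHP relies on .htaccess rewrites
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>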

    Read the article

  • Excel workbook, intermittently, takes 30 seconds to load

    - by Julio Nobre
    I am trying to figure out why a simple .XLS Excel workbook is taking, randomly, 30 seconds to open. Before answering, please bear in mind the following.

    Problem symptoms:
    - Hanging is intermittent and it takes exactly 30 seconds;
    - During hanging there is no CPU or disk activity;
    - It only happens during workbook load. Everything runs smoothly after that;
    - Windows Explorer.exe hangs on the folder, but all other folders, the system and applications are still responsive;
    - There are no consecutive hangings. I have to wait a while to reproduce this behaviour;
    - All workbooks were located on a local drive (C:\BPI);
    - The workbook has no macros and no add-ins;
    - Office 2003 has been in use for several years;
    - The computer is running Windows XP;
    - The computer has several network mapped drives, all addressed to the main file server;
    - Recently, the main file server was replaced by Windows SBS 2011 Standard Edition.

    What I have done so far:
    - I have traced the machine's Explorer.exe using Process Monitor, added the Duration column, and filtered by Duration > 1. That is how I found that the hanging was taking exactly 30 seconds. For further information, please refer to Oliver Salzburg's tutorial;
    - Using Process Monitor, I have also figured out that five operations were taking most of the sample's duration. Looking at the sample image below, in the Operation column you will notice that one single operation was taking 29 seconds;
    - I have tried different workbooks (all of them smaller than 30 KB);
    - I have, temporarily, removed all shortcuts in the user's Documents folder that were pointing to network drives or shares;
    - I have run CCleaner to fix registry issues;
    - I made sure that there were no external links in the tested workbooks;
    - I have reproduced this behaviour for hours;
    - I have extensively researched for hours on the web.

    Process Monitor's collected and filtered data

    Read the article

  • Can't open cocoa emacs from terminal using open -a

    - by Shane
    I installed Emacs on my MacBook Air running OS X 10.6.5 from this site: http://emacsformacosx.com/. I believe this is what used to be called Cocoa Emacs. I dragged it into my Applications folder and it works fine when I run it from there. I want to be able to run it from the terminal. After some googling, I tried open -a /Applications/Emacs.app foo.txt (foo.txt was an existing file). I got two Emacs windows - one with the welcome screen and one with foo.txt loaded. I tried a few applications in the /Applications directory and they did not seem to behave like this. I had installed it using my own account (an admin account), so after doing ls -l on /Applications I noticed that the owner and group were different from the other entries in this folder. I recursively changed the owner and group to root and wheel, like the others, but this did not help. The only thing that looks funny now is that ls -l shows a @ character, which has something to do with extended attributes, but I don't know how to check these. Any suggestions on what to check next? Is using the open command the only way to run the program? Can I simulate what it does using a shell script?
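
    A hedged sketch of two things that may help, assuming the standard bundle layout of the emacsformacosx.com build: the extended attributes behind the @ can be listed with xattr, and the application's binary can be started directly from the shell instead of going through open (which tends to hand the file to an already-running or freshly launched GUI instance):

        # list extended attributes (the @ shown by ls -l)
        xattr -l /Applications/Emacs.app
        # start GUI Emacs directly with a file, bypassing open -a
        /Applications/Emacs.app/Contents/MacOS/Emacs foo.txt &

    Wrapping that last line in a small shell script or alias gives an emacs command usable from the terminal.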

    Read the article

  • How to set up a PRIVATE vimwiki on Dropbox.com

    - by Zongheng Yang
    Hi everyone, I assume those who are reading this page know what vimwiki and dropbox.com are and what they are for, so I will go straight into my confusion. The common way of setting up a PRIVATE vimwiki on Dropbox is simply to put your vimwiki directories under the Dropbox folder (but not Dropbox/Public/, because that would be PUBLIC). Dropbox allows directly viewing HTML files with a dropbox.com/* URL: for example, an index.html can be accessed by the URL https://dl-web.dropbox.com/get/Wiki/html/index.html?w=bfead71a, a specified string, ?w=bfead71a, being appended after the file name. Hence, if inside index.html there is a reference to A.html, which is located in the same folder index.html is in, it has to be accessed using some URL like https://dl-web.dropbox.com/get/Wiki/html/A.html?w=SPECIFIED_STRING. But it is seemingly impossible to hack vimwiki to correct the hrefs in the converted HTML files in this way. Is there some approach that can resolve this problem? I hope I have made myself clear. If you have any questions, please ask me for further explanation. Thank you!

    Read the article

  • Access denied to external USB disk; update access rights fails in Windows 8

    - by gerard
    I used to work with 2 laptops (Windows Vista and Windows 7), my work files being on an external USB disk. My oldest laptop broke down, so I bought a new one. I had no option other than to take Windows 8. I suspect something changed with access rights, as my external disk suffered some "access denied" problem on Windows. I was prompted (by Windows 8) to fix the access rights, which I tried to do, going to Properties - Security. This process was very slow and ended up saying "disk is not ready". Additionally, my external USB disk somehow was not recognized anymore. Back on Windows 7, I was warned that my disk needed to be verified, which I did. In this process, some files were lost (most of them I could recover from the found00x folder, but I have a backup anyway). Also, I don't know why, but under Windows 7, all the folders now show with a lock. Then back again to Windows 8. Same problem: access denied to my disk, and no way to change the access rights, as it gets stuck on "disk is not ready". Now I am pretty sure there is some kind of bug or inconsistency between Windows 8 and Windows 7. I did 2. and 3. a few times. At some point, I also got an access denied in Windows 7. I could restore access rights on the disk for "System" (Properties - Security - Edit, full control for the group "System"). But then I still get the same access rights problem on Windows 8, and get stuck in the process of restoring full control to the "System" and "Admin" groups. I upgraded Windows 8 with the Windows 8 updates available. It does not help.
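
    Not a definitive fix, but when the Security tab hangs, the command-line route is often quicker for re-taking ownership and resetting ACLs. A hedged sketch, assuming the external disk shows up as E: (run from an elevated Command Prompt on the Windows 8 machine; it can take a long time on a large disk):

        takeown /F E:\ /R /D Y
        icacls E:\ /grant "Authenticated Users":(OI)(CI)F /T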

    Read the article

  • DVD Share on Vista Home Premium Failing

    - by hpunyon
    UPDATE:
    - I can't find any Local Policy Editor for Vista Home Premium, as suggested.
    - I did learn about the registry keys allocatecdroms, allocatefloppies and allocatedasd, and tried adding these keys (individually and collectively) and setting them to both 0 and 1. There was no positive effect on read access to the DVD root folder - always Access Denied.

    ORIGINAL POST: Read access to the root folder of a DVD drive on a Vista Home Premium laptop is failing for the Guest account - Access Denied. The client is an XP Home PC that can see, but not access, the data in the share. I'm only trying to read the data DVD - not trying to write/burn anything. On the Vista laptop, I have:
    - All firewalls and antivirus disabled.
    - UAC disabled.
    - Password checking disabled.
    - "Advanced Shared" the DVD drive, with "Everyone" having full-access permissions to the share.
    - Tried adding Guest and Anonymous users with full-access permissions to the share.
    - RestrictAnonymous=0 set in the registry.
    - Both PCs are in the same workgroup (MSHOME).

    The XP Home client sees the shared DVD in \\Vista_Hostname\ but when I double click the drive icon on the client, I get a popup that access is denied, check with the administrator, etc. I can share other folders on the Vista PC and see and READ these from the XP Home client. If I enable password checking on the Vista side, I get a user/password popup, and I can authenticate (using my known Vista account, which happens to have admin rights) and then I can see and read the DVD data. I need to open this up so that the (default) Guest user can see and access the DVD data files.
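
    For completeness, a hedged sketch of making the share grant explicit from an elevated command prompt on the Vista machine, assuming the DVD drive is D: and the share name is DVD; the /GRANT switch sets share-level permissions only, so file-system-level access for Guest still has to be allowed separately:

        net share DVD=D:\ /GRANT:Everyone,READ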

    Read the article

  • Self-connecting printers

    - by Martin Cerny
    Hello, I work as an administrator in a small company using XP Professional on all computers and two servers with Windows Server 2003. Recently a very unusual problem occurred: one of the computers keeps connecting to all the printers on the network. It doesn't matter if it's an administrator or a domain user; as soon as somebody logs in, the computer connects all the printers. The printers are either installed on local computers or on the server and shared. There is no log-on script connecting the printers; I install them manually, and none of the other computers shows such behaviour. We have a printer which is installed on two computers and both of them share it (I'm moving it to the server from a small PC which shared it up to now, but some computers still use the old connection), meaning this specific computer connects to one of the printers two times and it can't use either of the connections. How do I prevent this self-connecting to all printers (none of the other computers has this problem)? If I delete them from the "Printers" folder everything works fine until I reconnect, and the folder is once again full of all the printers we have. I solved the smaller problem - the computer is now capable of printing on all of the printers (it seems there have been some registry issues); after cleaning the registry and reinstalling the printer it seems to work just fine. But the second problem remains: the computer connects to all the printers in the network (when I remove one or more of them, they are reconnected right after the next log-in by any user).

    Read the article

  • Configure Apache + Passenger to serve static files from different directory

    - by Rory Fitzpatrick
    I'm trying to set up Apache and Passenger to serve a Rails app. However, I also need it to serve static files from a directory other than /public and give precedence to these static files over anything in the Rails app. The Rails app is in /home/user/apps/testapp and the static files are in /home/user/public_html. For various reasons the static files cannot simply be moved to the Rails public folder. Also note that the root http://domain.com/ should be served by the index.html file in the public_html folder. Here is the config I'm using:

        <VirtualHost *:80>
            ServerName domain.com
            DocumentRoot /home/user/apps/testapp/public
            RewriteEngine On
            RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -f
            RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -d
            RewriteRule ^/(.*)$ /home/user/public_html/$1 [L]
        </VirtualHost>

    This serves the Rails application fine but gives a 404 for any static content from public_html. I have also tried a configuration that uses DocumentRoot /home/user/public_html, but this doesn't serve the Rails app at all, presumably because Passenger doesn't know to process the request. Interestingly, if I change the conditions to !-f and !-d and the rewrite rule to redirect to another domain, it works as expected (e.g. http://domain.com/doesnt_exist gets redirected to http://otherdomain.com/doesnt_exist). How can I configure Apache to serve static files like this, but allow all other requests to continue to Passenger?
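
    A hedged observation turned into a sketch: as written, the two RewriteCond lines are ANDed, so a request would have to be both a regular file and a directory under public_html for the rule to fire, which never happens. Joining the conditions with [OR] - and leaving everything that doesn't match to fall through to Passenger - is one variant worth trying; treat it as a sketch rather than a verified config:

        RewriteEngine On
        # serve the request from public_html when a matching file OR directory exists there
        RewriteCond /home/user/public_html%{REQUEST_URI} -f [OR]
        RewriteCond /home/user/public_html%{REQUEST_URI} -d
        RewriteRule ^/(.*)$ /home/user/public_html/$1 [L]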

    Read the article

  • best practices for setting up a new windows 2008 R2 server with ec2 AWS

    - by Alex
    Can someone comment on what they would add to the following list of SOP in terms of best practices? This is being set up on AWS, and then, after further testing, back in our datacenter.

    Standard Operating Procedure (SOP): Installation Part 2 - Installation of Software Components in Windows 2008 R2 (Updated).

    Step 1: Log on to the host through Remote Desktop.
    Step 2: Open Server Manager - Server Roles - install Web Server IIS 7.5 with IIS 6 compatibility features and management compatibility mode.
    Step 3: Open IE/Mozilla to download the software listed below, and save all installation files to a folder called "AWS Server Install Files" for future reference:
    - .NET Framework 2.0 (download from the internet)
    - Crystal Reports for .NET Framework 2.0 (x64) (download from the internet)
    - SQL Server 2005 (AWS image)
    Step 4: Once all the software is saved on the local drive, install it one by one.
    Step 5: Navigate to the Desktop folder to install the software listed below:
    - Microsoft ASP.NET 2.0 AJAX Extensions 1.0 (placed in Desktop\Softwares)
    - WebEx Recorder (placed in Desktop\Softwares)
    - WinRAR (placed in Desktop\Softwares)
    Step 6: Make sure all the software is working fine.
    Step 7: Inspect the server once entirely.
    Step 8: Log off and stop the instance.

    Read the article

  • How to remove NTFS system files from a previous Vista installation

    - by Boldewyn
    I'm trying to shrink my system partition under Windows Vista. It's all fine, except that in front of the last 300 MB of the volume sits a single file that cannot be moved from its position by defrag or other means. It's called C:\$Extend\$UsnJrnl:$J, and my assumption is that it is left over from a previous installation of Vista, from when I set the system up again. Now, googling for this kind of file brings interesting results, but no solution to my problem:
    - Files left on the disk can become ownerless in a new setup of Windows and inaccessible (even for administrators). To be able to access them again, I found the tip to use takeown to re-assign them to the Admin group (or anyone else). Works like a charm for normal files, but not for the C:\$Extend stuff.
    - The C:\$Extend folder is a system folder of the NTFS file system, where the journal is stored (especially in a file called $UsnJrnl:$Data, whose name is surprisingly close to mine).
    - You can delete the journal with fsutil usn /delete C:, however, this doesn't work from within the booted system (as I found out trying). Also, I'm not quite sure of the side effects.
    - You can't move NTFS's own files with standard defrag tools. The same holds, by the way, for inaccessible files.
    Every bit of knowledge out there is targeted at either inaccessible files or the $Extend NTFS stuff, but no one addresses my problem involving both: an inaccessible system file. Question: How can I remove this file, or at least how can I move it on the disk?
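
    For reference, a hedged note: the fsutil subcommand documented for removing the change journal is deletejournal rather than /delete, so if the journal really is what is pinning that file, the following from an elevated prompt may be worth a try (Windows recreates the journal later, and services that depend on it, such as the indexer, will rescan the volume):

        fsutil usn deletejournal /d C: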

    Read the article

  • Synchronize folders on different computers without the cloud and without a local network, just the internet

    - by theimmortalbg
    I have two computers with Windows 7, one in my home town and one in another town. So they are not on a private network, but I have internet on both. They have exactly the same file structure. I am searching for a program that can keep the data equal. I know about Dropbox and Google Drive, but they are cloud services and I don't want to use them. Also, they use a folder that you have to copy your data into. There are other programs that act like a server - just upload something and after that you can download it - but I don't need those either. I want just to point to which folders should be synchronized and have the program do the synchronization. The sync can be in real time if the two computers are powered on, or some time later when they are powered on. Or it could lock the other computer's synced folder until an update has been applied. Basically, these are my documents that I want synced on all my computers and editable from wherever I want. In fact I can move the updates with a flash drive, but it would make my work much easier if some program saved the changes and applied them on the other computer with one click.

    Read the article

  • make pentaho report for ubuntu 11.04

    - by Hendri
    Currently I'm trying to install Pentaho Reports for OpenERP, which is referenced at https://github.com/WillowIT/Pentaho-...rver/build.xml. I have installed it before on some Windows-based laptops and it worked, but currently I'm trying it on an Ubuntu 11.04 OS, and it prompts me with an error like this: "error build.xml:18: failed to create task or type..". Below are the steps I did:
    1. Install java-6-openjdk. Command: "apt-get install java-6-openjdk". Then set the installed JDK in the JAVA_HOME environment. Command: "nano /etc/environment", adding this new line: JAVA_HOME="/usr/lib/vm/java-6-openjdk" (path as installed, /usr/lib/jvm/java-6-openjdk).
    2. Install Apache Ant. Command: "apt-get install ant", followed by setting the environment. Command: "nano /etc/environment", adding this new line: ANT_HOME="/usr/share/ant". When I try to check the installation with the command "ant", I get a message like this: "Buildfile: build.xml does not exist! Build failed".
    3. Then I downloaded the java_server from https://github.com/WillowIT/Pentaho-...rver/build.xml, copied it to an Ubuntu shared folder, went on the command line to the extracted path (the shared folder I mentioned) and executed the command "ant war", and I got this error message:

        BUILD FAILED
        /share/java_server/build.xml:18: problem: failed to create task or type antlib:org.apache.ivy.ant:retrieve
        cause: The name is undefined.
        Action: Check the spelling.
        Action: Check that any custom tasks/types have been declared.
        Action: Check that any <presetdef>/<macrodef> declarations have taken place.
        No types or tasks have been defined in this namespace yet
        This appears to be an antlib declaration.
        Action: Check that the implementing library exists in one of:
        - /usr/share/ant/lib
        - /root/.ant/lib
        - a directory added on the command line with the -lib argument

        Total time: 0 seconds

    Is there any compatibility issue, or did I miss out some steps? I'm in a project that is in a rush with reporting, so please help me to solve this issue. I look forward to your cooperation. Thanks a lot in advance.

    Best Regards,
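
    A hedged guess at the missing piece, since the failure is Ant being unable to resolve the org.apache.ivy.ant antlib: Apache Ivy is probably not on Ant's library path on this machine. On Ubuntu something like the following would put it there; the package name and jar location are the usual ones but may vary by release:

        sudo apt-get install ivy
        # make the Ivy jar visible to Ant if the package does not already do so
        mkdir -p ~/.ant/lib
        cp /usr/share/java/ivy.jar ~/.ant/lib/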

    Read the article

  • can I put files in hidden volume /home at the root level of macintosh HD

    - by mjr
    I am trying to reproduce the file structure of my VPS on my Mac locally, so that it's easier for me to test websites in a local development environment. To do this I would need to have a /home folder at the root level of the hard drive. Using Panic Transmit I can see that there is already a volume called home at the root level. Can I store other files and folders in here to set up my local web server? Sorry if this is a dumb question, folks.
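
    For context, a hedged sketch: on OS X, /home is normally an empty automount point controlled by the /home entry in /etc/auto_master, so files can't simply be dropped into it. Commenting that entry out and reloading the automounter is the usual way to reclaim the path; back up the file first, and note the exact behaviour can differ between OS X releases:

        sudo sed -i '' 's|^/home|#/home|' /etc/auto_master   # comment out the /home automount line
        sudo automount -vc                                   # flush and reload the automounter maps
        sudo mkdir -p /home/mysite                           # now regular directories can live there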

    Read the article

  • Exchange 2003 -- Mailbox Management not deleting ALL messages aged 30 days or older...

    - by tcv
    I've recently created a Mailbox Management task within Exchange 2003 that, every night, looks at the contents of the Deleted Items folder within a particular mailbox and deletes mail that's 30 days or older. The scheduled task ran on its own last night and I have confirmed that messages within the right mailbox and the right folder were, in fact, processed. Many mails were deleted ... but not every email older than 30 days. In fact, the choice seems kinda random. Last night, 3/10/2010 was the 30 day watermark. Mails were deleted from 3/10/2010, sure enough, but not all of them. Mails older than 3/10/2010 were deleted as well, but, again, not all of them. The only criteria I have on the management task - aside from the single mailbox and single folder scopes - is the age criterion. The size criterion is set to Any, meaning I don't care about the size. I care about the age. It's made me wonder whether there is some sort of limit on how many mails can be processed. The schedule is set for 12am and 1am every night. Any hints appreciated.

    Read the article

  • Improving performance by using an additional static file server

    - by Max
    Hello there, I'm planning for a large website that includes many static assets (JS, CSS, images and thumbnails) in the generated pages. The website will use TYPO3 as CMS (it is a customer requirement). I guess I could seriously improve performance / page load times by using a two-server setup: one server where the main application (PHP) runs, and another one where the static files sit, served by a trimmed-down version of Apache or something like lighttpd. Including e.g. JS or CSS files from the file server is of course no big deal - just use an absolute URL such as http://static.example.com/js/main.js and be done with it. But: that website will have pages with MANY thumbnails of e.g. product images on them. So I see two problems when the main application tries to create a thumbnail of some image: the original image, like products/some.jpg, is uploaded to the static file server and therefore not on the same server as the PHP application which tries to create the thumbnail; and TYPO3 writes created thumbnails to a temp directory which is expected to be on the same server. Therefore, hundreds of thumbnails will be written to and served from that temp directory, which is on the same server as the main application - the static file server is in that case basically useless, as all thumbnails will be requested from the server of the main application. So, my question is: how do I overcome these shortcomings? Is it possible to "symlink" some directories to another server? So, for example, if PHP tries to open the original product image for thumbnail creation with imagecreate("products/some.jpg"), the products folder actually "points" to the products folder on the static image server? I know something like this can be done with .htaccess, but is it possible at the file system level?
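
    Purely as a hedged sketch of the "symlink to another server" idea: a network mount does roughly that at the file system level. For example, the products directory could be mounted from the static server onto the PHP server (via NFS or, as below, sshfs) and then linked into the application's directory tree, so the PHP image functions and TYPO3 see the originals as local files. Host names and paths are placeholders:

        # on the PHP application server (Debian/Ubuntu package syntax assumed)
        sudo apt-get install sshfs
        sudo mkdir -p /mnt/static-products
        sudo sshfs user@static.example.com:/var/www/static/products /mnt/static-products -o allow_other
        # let the application reach the originals through its usual path
        sudo ln -s /mnt/static-products /var/www/typo3site/products

    Whether this is fast enough for thumbnail generation depends on the link between the two servers, so it is a trade-off rather than a free win.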

    Read the article

  • How do I share a complete XP disk so it can be seen from a Windows 7 system? (To move all files to a

    - by Ian Ringrose
    This should be easier! (Both computers can see the internet etc., so I know the network itself is working.) I have a normal home network with a Windows XP machine on it and the new Windows 7 (64 bit) machine. So that I can transfer the files to the new Windows 7 machine, I wish to share the complete disk (and all files) from the Windows XP machine and access them from the Windows 7 machine. Is there a step-by-step set of instructions for doing this anywhere? So far I have:
    - put both computers into the same workgroup
    - put the Windows 7 machine into work network mode so it can see the XP machine in the workgroup
    - shared the XP disk as read only
    But when I try to access a lot of the folders on the XP disks, I am told I am not allowed to access them. (I was not asked for any passwords by the Windows 7 machine when I accessed the XP machine. The XP machine just has its default account with no password set on it.) The XP machine runs XP Home and hence has "simple file sharing" turned on. So it seems that even if I create an admin account (with a password) and connect with that account, it still comes in as "guest" on the XP machine. Choosing to share the folder I want access to, rather than the top of the disk drive, seems to work, but is a pain as I need to share each user's folder with a different share name. If the new computer were not a laptop, I would just plug the hard disk from the old machine into it, but being a laptop I don't have that option.

    Read the article
