Search Results

Search found 14311 results on 573 pages for 'stan note'.


  • LightTPD and PHP not working when htdocs is outside of the LightTPD folder

    - by Marco83
    I need to set up a simple web server with PHP on Windows XP that a number of different people will use for local testing. I'm using LightTPD 1.4.30-4-IPv6-Win32-SSL and PHP 5.2. So far I've created this folder structure:

        tools/
          LightTPD/
            htdocs/
          PHP/

    I set up PHP as CGI and the document root as server_root + "/htdocs". It works fine (well, it's slow, but I don't want to bother with FastCGI for now :) ). My problem is when I try to put htdocs outside of the LightTPD folder, like this:

        htdocs/
        tools/
          LightTPD/
          PHP/

    I update the document root to server_root + "/../../htdocs", and while static HTML pages work fine, PHP pages stop working (they return "No input file specified"). I literally just change the document root; I didn't change anything in php.ini or anywhere else. Please also note that I left doc_root, user_dir and cgi.force_redirect at the default values in php.ini, and it works when htdocs is inside LightTPD, but not when I move it outside. Any idea why it's breaking? Here's my lightTPD.conf:

        server.modules = (
            "mod_access",
            "mod_accesslog",
            "mod_alias",
            "mod_cgi",
            "mod_status",
        )
        include "variables.conf"
        include "mimetype.conf"
        # THIS WORKS
        server.document-root = server_root + "/htdocs"
        # THIS DOESN'T
        #server.document-root = server_root + "/../../htdocs"
        server.upload-dirs = ( temp_dir )
        index-file.names = ( "index.php", "index.pl", "index.cgi", "index.cml", "index.html", "index.htm", "default.htm" )
        server.event-handler = "libev"
        url.access-deny = ( "~", ".inc" )
        $HTTP["url"] =~ "\.pdf$" { server.range-requests = "disable" }
        static-file.exclude-extensions = ( ".php", ".pl", ".cgi" )
        server.errorlog = server_root + "/logs/error.log"
        ######### Options that are good to be but not neccesary to be changed #######
        dir-listing.activate = "enable"
        #### CGI module
        cgi.assign = ( ".php" => server_root + "/../PHP/php-cgi.exe" )
        status.status-url = "/server-status"
        status.config-url = "/server-config"
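
    A possible direction (just a sketch, not a confirmed fix): lighttpd passes php-cgi a SCRIPT_FILENAME built from the document root, and a root containing "/../.." segments is exactly the kind of non-canonical path that commonly produces "No input file specified". Pointing the document root at an absolute path instead of a relative expression keeps the path canonical; the path below is purely illustrative:

        # hypothetical: use an absolute document root instead of server_root + "/../../htdocs"
        server.document-root = "C:/path/to/htdocs"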

    Read the article

  • How to recover deleted files on ext3 fs

    - by Mike
    I have a drive which was using the ext3 filesystem. I am told that about 10GB of data was deleted off the drive (probably via rm). The drive is currently mounted as read-only to preserve all data. Does anyone know of a method to restore some or all of the data? Also, if it helps, the OS was Fedora. I've also been told that the data is mostly ASCII Fortran source code and Matlab files.

    Conclusion: I have finally managed to get the data back, and with the simplest means ever! After weeks of trying and failing to bring back much of any data, I brought someone in today to take a look at it and offer suggestions; he simply cd'd to the directory and everything was there! It was never lost in the first place! Needless to say I feel really dumb right now, but I learned quite a lot from this whole fiasco. At any rate, while I was looking through data forensics solutions, I found that Autopsy, or more specifically The Sleuth Kit, was the most helpful. So I will accept that as the final answer. I would also like to note for anyone that comes across this later on that the most up-voted (currently) answer by sekenre was also helpful and I learned a lot, but ultimately it did not help with the type (very many, and some being very large) of files I was dealing with. So thanks to all of you that provided suggestions, and I wish you all the best!
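
    For anyone who lands here with data that really is deleted, a minimal Sleuth Kit session for finding deleted entries looks roughly like this (a sketch; the device name is illustrative, and note that ext3 usually zeroes the block pointers of deleted inodes, so listing names is far more reliable than recovering contents - tools like extundelete exist for the latter):

        # list deleted directory entries, recursively, from the read-only device
        fls -r -d /dev/sdb1 > deleted-entries.txt
        # attempt to dump a file by inode number onto a *different* disk
        icat /dev/sdb1 49152 > /mnt/other-disk/recovered.f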

    Read the article

  • PHP pages are not parsed by Apache on CentOS

    - by Ram
    I have installed CentOS 5.x, Apache 2.2, PHP 5.3 and MySQL 5.5. I also installed phpMyAdmin. I am able to access phpMyAdmin through the browser without any issues. However, when I create a simple index.php with the phpinfo() function in the default directory, that page is served without PHP parsing. As we all know, phpMyAdmin is a PHP application, and it works fine from the same server, yet a simple PHP page in the doc root directory does not. Of course, I tried moving this page into the phpMyAdmin folder and accessing it there, but no success. Please note that I updated the httpd.conf file with the appropriate directives based on the PHP installation guide. The following directives were added to httpd.conf:

        AddTyoe application/x-httpd-php
        LoadModule php5_module /usr/lib/httpd/modules/libphp5.so
        <FilesMatch "\.php$">
            SetHandler application/x-httpd-php
        </FilesMatch>

    File locations are:

        docroot - /var/www/html
        phpMyAdmin folder - /var/www/html/phpMyAdmin

    File privileges are:

        [root@linuxdev1 html]# ls -Z
        -rwxr-xr-x  root root index.php
        drwxr-xr-x  root root phpMyAdmin
        -rw-r--r--  root root phpMyAdmin-3.4.3.2-english.tar.gz
        drwxr-xr-x  root root test1

    Any help is appreciated.
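
    Worth noting: as quoted above, the AddType line is misspelled ("AddTyoe") and has no file extension argument. Whether that reflects the real httpd.conf or just the transcription, the usual mod_php stanza looks like this (a sketch assuming the paths already quoted above):

        LoadModule php5_module /usr/lib/httpd/modules/libphp5.so
        AddType application/x-httpd-php .php
        <FilesMatch "\.php$">
            SetHandler application/x-httpd-php
        </FilesMatch>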

    Read the article

  • Microsoft Deployment Toolkit 2012 Error

    - by Jacob Schaer
    I just started with MDT 2012 recently, in hopes of eventually getting away from using Ghost to deploy all of our department computers. When I test deploy in VirtualBox, it deploys the OS properly but stops because of a network driver failure (it gets the "could not allocate resources" issue). On physical hardware (Latitude E6500, Optiplex 980, and an older Latitude) it gets through the multicast and stops immediately after with: "Setup was unable to create a new system partition or locate an existing system partition. See the Setup log files for more information." I've looked at the logs and never see anything really of note. Originally I was using DriverPacks from DriverPacks.net, but thinking it was a driver issue, I switched over to using Dell's cab driver packs. Still the same issue. I checked and the HDD is all fine - it was properly partitioned, set to bootable, and was loaded with all the proper OS installer files. I'm using a flash drive to do the install - when I make changes to the deployment share I rebuild and copy the ISO to the drive, then use YUMI multiboot to start the ISO (probably irrelevant).
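
    One workaround that comes up for the "unable to create a new system partition" error when installing from a flash drive (a guess, not a verified fix for this setup): Windows Setup sometimes treats the USB stick as the first disk, so cleaning and pre-partitioning the real target disk from a command prompt (Shift+F10) before the partition step can unstick it. A sketch - the disk number is illustrative and this wipes the selected disk:

        diskpart
        select disk 0
        clean
        create partition primary
        active
        format fs=ntfs quick
        exit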

    Read the article

  • SQL - an error occurred during the pre-login handshake

    - by Rivka
    Until yesterday evening, I was able to connect to my server from my local machine. Now I get the following error:

        A connection was successfully established with the server, but then an error occurred during the
        pre-login handshake. (provider: SSL Provider, error: 0 - The wait operation timed out.)
        (.Net SqlClient Data Provider)

    Note, I can log on to the actual server with no problem. Yesterday I installed IIS on my machine and set up a site using my IP address - I don't know if this has anything to do with it. I did come across this article and followed the steps, but it didn't seem to help:
    http://www.escapekeys.com/blog/index.cfm/2011/1/26/Microsoft-SQL-Server-Error-64-A-connection-was-successfully-established-with-the-server
    I also went through the following article, changed TCP/IP settings and restarted, but nothing:
    http://blog.sqlauthority.com/2009/05/21/sql-server-fix-error-provider-named-pipes-provider-error-40-could-not-open-a-connection-to-sql-server-microsoft-sql-server-error/
    I started trying suggestions from the comments too, but stopped when I realized I might be messing things up more. So, why is this happening, and how can I fix it?

    Read the article

  • How to do a Windows 7 Image restore to an external drive?

    - by Vaccano
    I have a system that I have created a Windows 7 system image backup of. I would like to migrate that image to a different hard drive. Is there a way to restore the image to an externally connected hard drive? For example, I have 3 hard drives:

        1. The first, in the source machine (the one I want to copy).
        2. The second, in a machine that I want to do the work.
        3. The third is not in a machine. It is the target that I want to overwrite with the contents of the first.

    I boot up the 2nd machine and connect the 3rd hard drive externally (using some cool cables I have). I then use some cool feature of Windows 7 to replace what is on the 3rd hard drive with the Windows 7 image of my 1st machine (that is on my networked backup server). I need to know what the above-mentioned "cool feature of Windows 7" is, if there is one, and how to use it. Any ideas? Note that I very much do not want it to overwrite what is on the 2nd machine/hard drive.

    Read the article

  • How can I duplicate HBCD's XP boot loader with my MBR?

    - by Warpstone
    I'm stumped. I'm migrating a Win XP Lenovo T500 to an SSD:

        1. I copied the XP partition to the SSD using EaseUS.
        2. I aligned the boot sector using GParted.
        3. The MBR needs to be rebuilt (fair enough).

    However, all attempts to use the Windows Recovery Console hang (both via a boot CD and even when the console was installed as a boot option). I've tried using a bunch of tools to rebuild/replace the MBR, but no dice. They all say the MBR has been fixed, but I cannot load Windows from the SSD. HBCD's "boot from Windows" option works just fine, however. I'm confused as to what HBCD can do that my drive can't. How can I get that functionality on my SSD? Is it an MBR fix I can mirror? The SSD is extremely fast when I do use HBCD to boot up... but it would be nice to not need token-based access to the machine! :) Note: I know Windows 7 may be worth a fresh install, but I'm trying to avoid the cost and hassle if possible.
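
    For reference, the standard XP boot-repair sequence - assuming a Recovery Console that actually stays up, which is admittedly the sticking point here - is just this (a sketch; run against the SSD while it is the boot disk):

        fixmbr
        fixboot C:
        bootcfg /rebuild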

    Read the article

  • virtualized windows 2003 domain with CentOS 5.3 and poor connectivity

    - by Chris Gow
    I have a test lab set up running a virtualized Windows 2003 domain on a CentOS 5.3 (Xen) host, and am experiencing connectivity problems with guests running on other hosts that are part of the same domain. Here's the setup: On Computer A I have CentOS 5.3 running as the host and virtualized Windows 2003 servers for a primary domain controller, a backup domain controller and an Exchange server. The primary domain controller also acts as a WINS and DNS server. The Windows domain is on a separate subnet from my company's corporate network. Connectivity to any of the virtualized guests on Computer A is fine (remote desktop, ping, what have you). I have another host computer (Computer B) that also has a virtualized Windows 2003 server guest that is part of the same domain. However, connectivity to that guest is flaky at best. I continuously get at least 60% packet loss when I try to ping the guest, and due to that flakiness I cannot access any of the services that it runs (remote desktop, web). Now here's the interesting part: it only seems to affect machines in the test domain that run on a different computer than the domain controller. On Computer B there is another Windows 2003 guest that is not part of the test domain and is on my corporate network; there are no connectivity issues with that guest. The problem does not seem to be specific to Computer B either. I created a test VM on my local computer within the test domain and it exhibits the same behaviour as the guest on Computer B. A couple of items to note:

        - Host OS on both Computer A and B is the same CentOS 5.3, 64 bit
        - Guest OS is Windows 2003 64 bit and 32 bit (the guest on Computer B is 32 bit)
        - Guest OSes are all up to date (as of Monday)
        - Host OS on Computer A was upgraded from CentOS 5.2 to 5.3

    Update: Sorry I did not follow up with the comments from below. Computer A and B have been moved to their own dedicated switch and the problem has gone away. I'm not sure what the underlying problem(s) were, though.

    Read the article

  • Excel table auto update

    - by Mike
    I have a table in Excel with formulas. When I add a new row, the new row automatically fills in the formulas as well, which is great. My problem is that it also changes the formula in the row above the added row. Here's what happens specifically: my table's last row is row 24, and a formula I have in that row is the following:

        =COUNTIF(C$11:C24,"y")/(COUNTIF(C$11:C24,"Y")+COUNTIF(C$11:C24,"N"))

    When I add data in row 25, the formula is filled into row 25 as well, which is what I want:

        =COUNTIF(C$11:C25,"y")/(COUNTIF(C$11:C25,"Y")+COUNTIF(C$11:C25,"N"))

    My problem is that the row above also updates - my row 24 changes to the same as row 25 (the C24 references become C25). Why is my row 24 formula changing when I add a row 25? Note that my formulas above row 24 stay the same when I add row 25 - only row 24 changes. Is there a way to not update the row above the row being added? This problem continues when additional rows are added: if I add a row 26, the formulas in rows 24-26 then all reference C26. Why are they all updating?
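
    One pattern that sometimes sidesteps this (a sketch, not verified against this exact workbook): end the range at the current row with INDEX/ROW instead of a literal cell reference, so every row carries the identical formula text and the table's formula-consistency auto-fill has nothing to rewrite:

        =COUNTIF(C$11:INDEX(C:C,ROW()),"Y")/(COUNTIF(C$11:INDEX(C:C,ROW()),"Y")+COUNTIF(C$11:INDEX(C:C,ROW()),"N"))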

    Read the article

  • Every month, scheduled task fails and password must be reset - why?

    - by Ducain
    [NOTE: I posted this originally at Stack Overflow but it got no traction there - reposting here.] We have a bit of software installed at a few client locations that runs (via Windows Task Scheduler) a few times each day. In ONLY ONE of the client locations, we have a unique problem: each month the task stops working, after running every day for weeks. Twice now it has failed on the 2nd of the month. When I walk the client through troubleshooting it, we've found that it can't start - access denied. To fix it, we simply re-enter the same exact password, and then off it goes, happy as a clam. I've never heard of this issue, and their IT people say they don't have anything running once a month that might cause it. I'm at a complete loss here. Any ideas as to why this might be happening? Further details:

        - Windows XP Pro machine
        - Task is run with credentials from a local admin account
        - Computer is always on and connected to the net

    Read the article

  • Separate Certificate by Subdomain (With multiple IPs)

    - by Brian
    Note: Yes, I realize this problem is easier to solve by just using 1 multi-domain or wildcard certificate. I wish to have an ASP.NET site running on IIS with 2 SSL domains sharing 1 web application but using separate certificates. Assuming I have 2 certificates, this can be solved on IIS7 as follows:

        Web Application1:
            Binding 1: http, 80, IP Address *, Host Name *
            Binding 2: https, 443, IPADDRESS1, using CERTDOMAIN1 (DOMAIN1 resolves to IPADDRESS1)
            Binding 3: https, 443, IPADDRESS2, using CERTDOMAIN2 (DOMAIN2 resolves to IPADDRESS2)

    That is to say, 2 certificates and 2 IP addresses, but both mapped to the same web application. In IIS6, the closest I have been able to come to this configuration is:

        Web Application1:
            Binding 1: http, 80, IPADDRESS1
            Binding 2: https, 443, IPADDRESS1, using CERTDOMAIN1 (DOMAIN1 resolves to IPADDRESS1)
        Web Application2:
            Binding 1: http, 80, IPADDRESS2
            Binding 2: https, 443, IPADDRESS2, using CERTDOMAIN2 (DOMAIN2 resolves to IPADDRESS2)

    That is to say, 2 certificates and 2 IP addresses, 2 web applications, both mapped to the same file location. The IIS6 solution is not optimal. Even if sharing an application pool, there are still costs associated with running the same site as two applications. Is upgrading from IIS6 to IIS7 a legitimate way to resolve this problem? Is there an IIS6 way to map 2 IP addresses within the same web application to different certificates?

    Read the article

  • F2 in Paste mode

    - by dotancohen
    Vim has a terrific paste mode, in which abbreviations and mappings are ignored. Frustratingly, even keys that do not map to pastable ASCII characters, such as the function keys, are pasted literally. For instance, the key F2 is pasted as <F2>. Is there any way around this? Note that pastetoggle can in fact be mapped to a function key to easily leave paste mode; however, the function that I am writing changes other values when entering or leaving paste mode (such as enabling or disabling line numbers and other things), therefore I would really like to find a workaround. For reference, here is the current version of the function (that gets stuck in paste mode):

        iab if if<Space>(<Space>{{<Esc>kA<Left><Left><Left><Left><C-R>=Eatchar('\s')<CR>

        " Triple-toggle Insert Modes: coding, prose, and paste
        let g:insertModeGlobal=1
        function! Te()
            if g:insertModeGlobal==3
                " Was in paste insert mode, go to coding insert mode
                set nu
                set nopaste
                let g:insertModeGlobal=4
            endif
            if g:insertModeGlobal==2
                " Was in prose insert mode, go to paste insert mode
                set nolinebreak
                nnoremap j j
                nnoremap k k
                nnoremap gj gj
                nnoremap gk gk
                set relativenumber
                execute ":Signs"
                iab if if<Space>(<Space>{{<Esc>kA<Left><Left><Left><Left><C-R>=Eatchar('\s')<CR>
                set nonu
                set paste
                let g:insertModeGlobal=3
            endif
            if g:insertModeGlobal==1
                " Was in coding insert mode, go to prose insert mode
                set linebreak
                nnoremap j gj
                nnoremap k gk
                nnoremap gj j
                nnoremap gk k
                set number
                execute ":DisableSigns"
                iab if if
                let g:insertModeGlobal=2
            endif
            if g:insertModeGlobal==4
                let g:insertModeGlobal=1
            endif
        endfunction

    Read the article

  • vSphere education - What are the downsides of configuring virtual machines with *too* much RAM?

    - by ewwhite
    VMware memory management seems to be a tricky balancing act. With cluster RAM, Resource Pools, VMware's management techniques (TPS, ballooning, host swapping), in-guest RAM utilization, swapping, reservations, shares and limits, there are a lot of variables. I'm in a situation where clients are using dedicated vSphere cluster resources. However, they are configuring the virtual machines as though they were on physical hardware. In turn, this means a standard VM build may have 4 vCPUs and 16GB or more of RAM. I come from the school of starting small (1 vCPU, minimal RAM), checking real-world use and adjusting up as necessary. Some examples from a "problem" cluster (screenshots in the original post):

        - Resource pool summary: looks almost 4:1 overcommitted. Note the high amount of ballooned RAM.
        - Resource allocation: the Worst Case Allocation column shows that these VMs would have access to less than 50% of their configured RAM under constrained conditions.
        - Real-time memory utilization graph of the top VM in the listing above: 4 vCPU and 64GB RAM allocated, averaging under 9GB of use.
        - Summary of the same VM.

    What are the downsides of overcommitting and overconfiguring resources (specifically RAM) in vSphere environments? Assuming that the VMs can run in less RAM, is it fair to say that there's overhead to configuring virtual machines with more RAM than they need? What is the counter-argument to "if a VM has 16GB of RAM allocated, but only uses 4GB, what's the problem?"? Do customers need to be educated? What specific metric should be used to meter RAM usage - tracking the peaks of "Active" versus time?

    Read the article

  • Export files to remote server using TortoiseSVN

    - by Matt
    Hi, I'm using TortoiseSVN to keep revisions of my code. When I commit changes, I take note of which files have changed and upload them to my server using FTP. Here's my workflow:

        1. Edit files on the local computer (e.g. files in C:\Users\Me\web).
        2. Commit changes to the local repository using right-click > TortoiseSVN > SVN Commit.
        3. Take the files, open FileZilla (FTP client) and upload the files to the remote server.

    I was wondering if there was a way I could omit step 3 from my workflow. Basically, I would like the changed files to be automatically uploaded to the remote server when I commit a version to the repository. Information about my computer environment:

        - Windows 7 Ultimate x64 with TortoiseSVN x64
        - Notepad++ text editor
        - Files edited are PHP, CSS, JS, HTML, etc.
        - Server is running Linux with PHP 5.2 and MySQL
        - FileZilla is used to upload files; I can connect to the server via SSH if that is needed

    Thank you in advance.
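
    One route that might fit (a sketch - paths, key file and hostname are placeholders): TortoiseSVN can run a client-side post-commit hook (Settings > Hook Scripts), so a small batch file can push the working copy over SSH after every commit, for example with PuTTY's pscp:

        rem post-commit.bat - registered as a TortoiseSVN post-commit hook (illustrative paths)
        pscp -r -i C:\keys\mykey.ppk C:\Users\Me\web\* user@example.com:/var/www/html/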

    Read the article

  • Allowing Sharepoint to relay email through Exchange

    - by dunxd
    I have written a SharePoint 2007 web part that sends a field from a form to a specified email address. I have got the form working as I require, but at present it can only send to internal email addresses. SharePoint's email functions use SMTP to send to our Exchange 2003 server, but because our Exchange server is configured to prevent relaying, if the To: address is not at a local domain, it won't deliver the mail. I don't want to open up our Exchange server to be a completely open relay. What I want is to allow my SharePoint servers to send mail to addresses outside our domain. The following seem possible:

        - Allow all mail sent from one of the SharePoint servers to be relayed
        - Allow all mail from a web application pool account to be relayed (I am not sure that the application pool authenticates to the SMTP server, though)
        - A combination of the two

    Can anyone advise on the best way of doing this? Is setting up a dedicated SMTP server on the Exchange server (not a separate physical server) the right way of going about this? EDIT: Note this is for Exchange 2003. There is a post on setting this up in Exchange 2007 which appears to have recognised the frequent requirement to do what I need, but it doesn't give much detail on 2003. Can anyone expand?

    Read the article

  • How do I connect remotely to SQL Server from Windows client?

    - by humble_coder
    Hi all, I'm having a bit of an issue connecting to SQL Server remotely from Windows. I've verified that all of my settings are correct via SQL Server Management Studio Express and SQL Server Configuration Manager. I can connect remotely using ODBC drivers from other OSes (e.g. OS X, Linux, etc.). However, when I connect with the same credentials from a remote Windows machine using "SQL Server" as the driver, I am told that the system cannot connect. I've tried creating an ODBC Data Source and I get the same error:

        Connection failed: SQLState: '01000'
        SQL Server Error: 14
        [Microsoft][ODBC SQL Server Driver][TCP/IP Sockets]ConnectionOpen(InvalidInstance()).
        Connection failed: SQLState: '08001'
        SQL Server Error: 14
        [Microsoft][ODBC SQL Server Driver][TCP/IP Sockets]Invalid Connection

    From the non-Windows machines I can use the IP address of the SQL Server just fine. However, on the remote Windows machine, neither the IP address nor the named instance works. FYI, I can create an ODBC Data Source using the named instance on the machine actually running SQL Server (but this is, of course, nothing special - just proof that it isn't completely hosed). One interesting note: if I use SQL Studio 2005 from a Windows client machine, I can use the IP address to connect remotely. Still, the whole reason I bring this up is that I need to use a software package I've written to connect to SQL Server remotely from Windows machines as well. Previously the solution was only needed to transfer data from SQL Server into a PostgreSQL or MySQL database on non-Windows machines (due to DBA preference). However, now they also want to move the data from the legacy software to MySQL even on Windows. Any assistance would be most appreciated. Feel free to provide a full example connection string. Best
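
    Since a full example connection string was requested, here is one for the legacy driver that forces the TCP/IP library with an explicit address and port, sidestepping the named-instance resolution the old "SQL Server" ODBC driver often trips over (a sketch - address, port, database and credentials are placeholders, and it has not been verified against this particular server):

        Driver={SQL Server};Server=192.168.1.50;Network=DBMSSOCN;Address=192.168.1.50,1433;Database=mydb;Uid=myuser;Pwd=mypassword;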

    Read the article

  • ssh Password-less login to multiple machines when you already have one

    - by tandu
    I'm a little bit confused about setting up password-less login for multiple machines to begin with, but I think I could do it from scratch. The problem is I already have it set up for one machine, and I don't want that to be blown away when I try to set it up for the other machine. Let's clarify:

        Machine A: the machine I'm connecting from
        Machine B: the machine I'm connecting to; password required
        Machine C: the machine I'm connecting to; password-less ssh already works

    I have read some tutorials on setting up password-less ssh to a certain site, but they usually start with "move id_rsa out of the way so it doesn't get blown away," and then at the end of the tutorial it's never moved back. If I had no help at all, here is what I would do:

        1. Log into B
        2. ssh-keygen -t rsa -f ~/id_rsa.other
        3. scp id_rsa.other.pub A:~/.ssh
        4. echo "Host A \n Identity File ~/.ssh/id_rsa.other" > ~/.ssh/config

    (Note that I realize these commands may not be exactly correct, but this is just the idea.) What I'm not quite clear on is whether I need to update the config for A, B, or both. I'm fairly certain that to do a password-less login from A to B, it is A that needs the public key... but I also suppose I need B to use the correct id_rsa file for that public key. Finally, I don't want the password-less login for C to be affected at all... it's using id_rsa. Am I going wrong anywhere?
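
    For what it's worth, a second-key setup normally lives entirely on the connecting machine (A), and leaving ~/.ssh/id_rsa alone means the existing login to C is untouched. A sketch, with hostnames and the key name as placeholders:

        # on A: make a separate key pair and install its public half on B
        ssh-keygen -t rsa -f ~/.ssh/id_rsa.b
        ssh-copy-id -i ~/.ssh/id_rsa.b.pub B

        # ~/.ssh/config on A: only connections to B use the new key
        Host B
            IdentityFile ~/.ssh/id_rsa.b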

    Read the article

  • optimal folder structure for storing 100k files on a USB drive

    - by cherouvim
    I need to store 100k files (around 40GB) on a USB drive. Each file has a unique int id (e.g. 45000). Option one is to put all files in a single folder:

        root/
        root/1.pdf
        root/2.pdf
        root/3.pdf
        ...
        root/567.pdf
        root/568.pdf
        root/569.pdf
        ...
        root/10001.pdf
        root/10002.pdf
        root/10003.pdf
        ...
        root/99998.pdf
        root/99999.pdf
        root/100000.pdf

    Option two is to create a [1-9][0-9]* folder hierarchy based on that id:

        root/
        root/1/file.pdf
        root/2/file.pdf
        root/3/file.pdf
        ...
        root/5/6/7/file.pdf
        root/5/6/8/file.pdf
        root/5/6/9/file.pdf
        ...
        root/1/0/0/0/1/file.pdf
        root/1/0/0/0/2/file.pdf
        root/1/0/0/0/3/file.pdf
        ...
        root/9/9/9/9/8/file.pdf
        root/9/9/9/9/9/file.pdf
        root/1/0/0/0/0/0/file.pdf

    Which option will scale better? I can understand that the second option will require tons of folders, but each folder will at most contain 10 folders and 1 file. Maintenance will not be an issue, since everything will be controlled by an application. Note that this is a USB drive on Linux, and based on the above I'd also like to know whether I should go with FAT32 or NTFS.
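
    Purely as an illustration of how simple the application-side mapping for option two stays (a sketch; the function and file names are made up):

        # Python sketch: map an integer id to its nested path, one directory per digit
        def nested_path(file_id):
            return "/".join(str(file_id)) + "/file.pdf"

        print(nested_path(567))     # 5/6/7/file.pdf
        print(nested_path(100000))  # 1/0/0/0/0/0/file.pdf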

    Read the article

  • REMOTE_USER not getting set?

    - by landed
    I am trying to set up LDAP authentication in Joomla using a plugin called JMapMyLDAP (in fact four plugins, each doing a different job). I need to pull part of a string out of the server variable REMOTE_USER, and this should be visible (as shown at http://timplummer.com.au/4-how-to-integrate-joomla-3-with-active-directory-using-ldap.html) in phpinfo(). The issue is that REMOTE_USER is not set, or at least not appearing. A few things to note (if you don't mind): conceptually I am not really understanding authentication as a whole; it appears to be a vast subject despite my years working on websites. I have built ASP and PHP pages that check a user is who they say they are with a token/session given to just them, so that they are identified when a stateless request is made to the server - that's my level of understanding. This sounds different to basic authentication in Apache, where a username and password sit in a file and the user has to log in to a basic form to get access to the folder/docs, via an .htaccess file. OK, so for LDAP to work I need to get REMOTE_USER; this sounds very reasonable, as how else do we know who is making the request? Thank you.
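
    As far as I can tell (please correct me if this is off), Apache only populates REMOTE_USER once one of its own authentication modules has actually authenticated the request, so phpinfo() won't show it on an unprotected URL. A minimal sketch of the kind of block that would set it via LDAP - the directory path, DN and LDAP URL are placeholders for whatever the real directory looks like:

        <Directory "/var/www/html/joomla">
            AuthType Basic
            AuthName "Intranet login"
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://dc.example.local/DC=example,DC=local?sAMAccountName"
            Require valid-user
        </Directory>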

    Read the article

  • Can I configure Thunderbird 3 to refresh the folder list for an Exchange IMAP account?

    - by Howiecamp
    Background: when used as an IMAP client against Gmail, Thunderbird 3 (this may be the case in v2 also, not sure) will refresh its list of folders (the folders correspond to Gmail labels) when you do "Download/Sync Now..." or restart the Thunderbird client. Any new folders (labels) created in Gmail will sync to the client, and any folders moved/changed/deleted in Gmail will move/change/delete on the client as well. (Note: Thunderbird has the concept of "subscribing" to IMAP folders, presumably allowing you to determine which folders you want rather than bringing all of them down and dragging loads of data across the wire. When used against Gmail, Thunderbird appears to automatically subscribe to all folders, including folders newly created in Gmail, so this might be why the refresh happens properly.) This behavior is what I want with Exchange. When using Thunderbird with Exchange (2007), the folder list doesn't refresh when folders are added/changed/deleted on the server and/or from a different mail client. When I look at the subscription options, some are checked and some are not (not sure why Thunderbird picked some and not others). And when I add new folders on the server and/or from another client, they never even appear in Thunderbird's list of folders, preventing me from subscribing to them.

    Read the article

  • How to set up Windows 7 Professional as a NAS

    - by Enyalius
    I searched and didn't find any answers, so please forgive me if this is a repeat. Anyway, I have an older computer that I'm using as an HTPC, and I was hoping that I could use it as a NAS/multimedia server as well. My primary uses would include accessing content on my PS3 (same LAN), accessing content from other computers on my home network and (if I can) accessing content from my Android phone over the internet. I have used Subsonic to stream music to my Android phone and other computers before, but I would really like to find a way to do this natively if possible. I know that I can buy external hard disk cases that plug into the USB port of my router, and that I can get a Drobo or other network storage solution, but I would really rather not spend the money (especially considering that I already have a computer that I should be able to use). Hardware involved:

        - Apple AirPort Extreme base station router (most recent revision)
        - Home theater PC: Core 2 Duo @ 2.4GHz, 8GB DDR2 RAM, ~3.5TB hard drive space
        - Sony PlayStation 3 Slim 120GB
        - HTC Thunderbolt (I have 4G coverage), rooted and running Android 2.2.1
        - Various Apple laptops
        - Various Windows 7 desktops/laptops

    Thanks in advance! Note: I have looked at open source NAS software, but I would like to preserve the Windows Media Center functionality in Windows 7, so other NAS software is not an option for me currently.

    Read the article

  • S3sync not working

    - by user57833
    Hello, I managed to get s3sync to upload my test folder to Amazon S3 and can see it in the AWS Management Console. Downloading the data back to a test folder results in the following error message:

        root@mybucketname:/var/s3sync# ./week_download.sh
        s3Prefix backups/weekly
        localPrefix /var/s3sync/testdown/weekly
        s3TreeRecurse mybucketname backups/weekly
        Creating new connection
        Trying command list_bucket mybucketname prefix backups/weekly max-keys 200 delimiter / with 100 retries left
        Response code: 200
        prefix found: /
        s3TreeRecurse mybucketname backups/weekly /
        Trying command list_bucket mybucketname prefix backups/weekly/ max-keys 200 delimiter / with 100 retries left
        Response code: 200
        S3 item backups/weekly/
        s3 node object init. Name: Path:backups/weekly Size:0 Tag:d41d8cd98f00b204e9800998ecf8427e Date:Fri Oct 29 14:21:53 UTC 2010
        local node object init. Name: Path:/var/s3sync/testdown/weekly/ Size: Tag: Date:
        source:
        dest:
        Update node
        s3sync.rb:638:in `initialize': No such file or directory - /var/s3sync/testdown/weekly/.s3syncTemp (Errno::ENOENT)
                from s3sync.rb:638:in `open'
                from s3sync.rb:638:in `updateFrom'
                from s3sync.rb:393:in `main'
                from s3sync.rb:735

    I am using the following download script:

        #!/bin/bash
        # script to download from s3 to a local directory
        cd /var/s3sync/
        export AWS_ACCESS_KEY_ID=nothing to see here
        export AWS_SECRET_ACCESS_KEY=nothing to see here
        export SSL_CERT_DIR=/var/s3sync/certs
        ruby s3sync.rb -r -v -d --progress --make-dirs mybucket:backups/weekly /var/s3sync/testdown
        # copy and modify line above for each additional folder to be synced

    Any ideas? Does the download script need to download to the source of Amazon S3, i.e. the testup folder? I was hoping that in the event of a complete failure, when the original folders no longer exist, it would just download everything for me. Note: I changed my bucket names to "mybucketname" so that they are not public!
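
    For what it's worth, the failing call is an open() of /var/s3sync/testdown/weekly/.s3syncTemp, so one cheap thing to rule out (a guess, not a confirmed diagnosis) is that the local destination directory simply doesn't exist yet - --make-dirs creates directories for the keys being downloaded, but not necessarily the localPrefix itself:

        mkdir -p /var/s3sync/testdown/weekly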

    Read the article

  • Cloning a failing disk (Win 7)

    - by daveh551
    I have a Windows 7 machine with several partitions on a 1.5TB drive. Windows has been complaining about disk errors and imminent failure, so I have purchased a new 2TB drive. The failing disk has not completely failed; in fact, I was able to boot Windows from it (after a couple of tries) and examine the SMART logs - the only red item was one sector being reallocated. But when I try to clone it to the new drive using Acronis True Image Home (2010), True Image can see the drive, the partitions and the contents, but when it goes to actually do the clone, it says "Failed to move. Make sure the destination disk is not smaller than the source disk, and that there are not errors on the disk" (or something like that). What are some other options for simply cloning the failing drive? I'd like to clone the entire disk, but am willing to do it partition by partition if necessary. Was this a known failing of the 2010 edition of ATI, or is it really something hosed in my system? Would upgrading to the 2012 edition be likely to work any better? (I'd download the trial and try it out, but if I remember right the cloning operation is disabled in the trial version, and I don't have enough free disk space to make an entire image.) What are some other cloning software packages if ATI won't work? Note that I'm only looking to clone the disk, not make an image as a backup - I use Ghost for that, and can fall back to that if I have to. It looks to me like Clonezilla would do the job. Any recommendations? Thanks, and if this duplicates other questions, I apologize.
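
    Since Clonezilla is already on the table: for a disk that is actively failing, GNU ddrescue (bootable from most Linux live CDs, including Clonezilla Live's shell) is a common choice because it skips unreadable areas and retries them later instead of aborting. A sketch - the device names are placeholders (source first, destination second) and the map file should live on a third disk:

        ddrescue -f -n /dev/sda /dev/sdb /mnt/usb/rescue.map    # fast first pass, skip problem areas
        ddrescue -f -r3 /dev/sda /dev/sdb /mnt/usb/rescue.map   # go back and retry the bad areas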

    Read the article

  • When to raise domain functional level?

    - by Joel Coel
    We very recently completed a project to retire two old domain controllers running Server 2003 R2. They are now replaced with shiny new 2008 R2 boxes. However, the functional level of the domain has not yet been raised for the 2008 R2 servers, just in the long-shot case that we need a rollback to the old controllers. I expect to have the all-clear to update the domain by next weekend. I also want to note that our desktop clients are still 95% Windows XP. However, we're about to start a project to update our 200 or so clients to Windows 7 before the end of the calendar year. Is there any advantage to holding the domain at the 2003 functional level while we are still supporting more Windows XP than Windows 7, especially given that some of the management stations are still XP? Update: I forgot to mention earlier that we still have a pair of Windows 2000 servers (not domain controllers) that support some legacy software. I'm working to replace those, but in the meantime I need to be sure that Windows 2000 can still participate in a 2008 R2 domain.

    Read the article
