Search Results

Search found 9502 results on 381 pages for 'account'.

Page 320/381

  • I need to preserve a tape using Symantec Backup Exec, and I'm having trouble doing so

    - by MrVimes
    Please forgive me if this is the wrong Stack Exchange site; please suggest which one I should post this to if it is. There's an automatic tape machine running in a remote location, with software (Symantec Backup Exec 11d). Recently one of the servers being backed up had problems with its RAID controller, so one of the drives has become invisible. I need to preserve the last good backup of that drive, so I am trying to replace the tape holding the most recent backup of that drive with one of the scratch (blank) tapes present in the machine. I've tried the following: associate the blank media with the media set in question (Wednesday); for the existing media (the tape with the data I want to keep), click 'Move to Vault' and move it to the offline vault; associate it with something other than 'Wednesday' (a media set called 'keep data infinitely...'); then run an inventory on that slot. I'm led to believe the above steps are supposed to put the fresh tape in the slot that held the tape I want to keep, but after the inventory (and refreshing the device tree) that slot just keeps showing up as containing the tape I want to keep. I am a complete newbie with this software. Can you tell me what I'm doing wrong, and/or how to achieve my desired goal? Edit: I did try to get help directly from Symantec with this, but having jumped through countless hoops to create an account and open a support ticket, my progress was halted at the final step by a required 'technical contact ID', with no explanation of what it is or how to get one.

  • Cannot write to directory after taking ownership

    - by jeff charles
    I had a directory on an internal hard drive that was created in an old Windows 7 install. After re-installing my operating system, when I try to create a new directory inside that directory, I get an Access Denied message. This isn't a protected directory, just a random directory I created at the drive root (that drive was not the C: drive in either install). I tried to take ownership by opening the folder properties, going to the Security tab, clicking Advanced, going to the Owner tab, clicking Edit, selecting my user account, checking "Replace owner on subcontainers and objects", and clicking Apply. There were no error messages and I closed the dialogs. I rebooted, checked the owner on that folder and a couple of subfolders, and it appears to be set correctly. However, I am still getting an Access Denied message when trying to create a directory in it. I've also tried using attrib -R . in an admin command prompt to remove any possible read-only attribute inside the directory, but am still unable to create a directory using a non-admin prompt (it does work in an admin prompt). Is there anything I can do to get write access to that folder and its subcontents in a non-elevated context without disabling UAC?
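
    A command-line pass over the same repair is sometimes more reliable than the Explorer dialogs. This is only a sketch: D:\Data stands in for the real folder, MyPC\MyUser for the real everyday account, and both tools need an elevated prompt.

      :: take ownership of the folder and everything below it
      takeown /F D:\Data /R /D Y
      :: grant the everyday (non-admin) account full control, inherited by subfolders and files
      icacls D:\Data /grant "MyPC\MyUser:(OI)(CI)F" /T
      :: review the resulting ACL; an explicit Deny entry here would explain the Access Denied
      icacls D:\Data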

  • What is the reason behind a Windows service stopping? Is it due to LAN problems or other issues?

    - by Steve
    I have a Windows service named Trunk which stopped one day, and I just want to know the reason behind it. This is an entry in the logs: Nov 15 17:54:04.318 :Trunk-1516:Trunk:handle_control_event:Received CTRL_LOGOFF_EVENT, ignore it Nov 25 15:54:52.157 :Trunk-1516:Trunk:ERROR - Process Restart Count (5) Exceeded for:C:\Program Files\secon\11.1.4\bin\vmd Nov 25 15:54:52.157 :Trunk-1516:Trunk:Stopping Trunk ... Nov 25 15:54:52.314 :Trunk-1516:Trunk:Shutting down, signaled C:\Program F Nov 20 15:54:20.345 :SCBridge.RegisterBridge:Exception in method: ScUtility.ScCommandException (0xa08990002): Exception from HRESULT: 0xa08990002 Supplemental Information: None available. at ScServer.ScServiceProcessorRegistryManager.Attach(String serviceProcessor, ScClientInformation clientInfo, FORCE_ATTACH_SPEC forceAttachToMaster) at ScServer.ScServiceProcessorRegistry.Attach(String serviceProcessor, Object clientInfo) at ScServer.ScServiceProcessorRegistry.Attach(String serviceProcessor) at ServerControlInterface.SCBridge.RegisterBridge(String SPName) for system APOLLOSP0 attempting to attach and register with the Bridge. The service is registered with a specific account, so I thought the reason might be that the user logged off from the machine, or that there was a LAN disconnection problem. But having taken another look at the above entry, we seem to have a constant failure being generated in vmd, which causes Trunk to detect that vmd requires a restart. Most of the time it works OK and the restart count is anything up to 4. In this case the Trunk log confirms that the restart count is 5 and so is considered to be exceeded. Presumably, this triggers the termination of the other services, and Trunk is actually doing its job. So, could this just be a timing issue, meaning we need to increase the tolerance level (i.e. the restart count), or do we need to address the 0xa08990002 error in vmd?

  • How could one archive all emails sent from employees?

    - by Schnapple
    My client runs a small business with a small number of employees. They are currently hosted through GoDaddy for web and email. For legal reasons the client would like to archive emails sent by their employees. Currently the email is all done through POP3, so all the mail is basically housed in files on individual machines (remember, small business). It's been proposed that an inexpensive solution would be to have all emails BCC'd to a main account, so that conversations with the outside world could be archived and tracked. I have not investigated it personally, but apparently GoDaddy can do something along these lines for all incoming email, but not for outgoing email. Is there a way to set up email accounts for a particular domain so that a specified admin user is copied on all outgoing email? UPDATE: I've modified the title to reflect employees, not users. The goal of this is to archive sent emails for legal reasons. This is something the employees will be cognizant of and on board with. The bottom line here is to basically emulate a feature of a larger-class platform through a smaller, cheaper platform. If the answer is "can't do it, buy an Exchange license", that's fine. My apologies for phrasing this so poorly; I understand why there was so much confusion.

  • DirectAdmin CentOS 4 server has a virus

    - by Rogier21
    Hello all, I have a problem with a webserver that runs CentOS 4 with DirectAdmin. For a few weeks some websites hosted on it have not been behaving properly in search engines; visitors coming from search results are redirected to some malware site, resulting in a ban from Google. I have used 3 virus scanners: ClamAV didn't find anything; Bitdefender found 2-3 files with a JS infection and deleted them; AVG finds lots of files, but doesn't have the option to clean! The viruses it finds are JS/Redir and JS/Dropper. The strange thing is: website A (www.aa.com) does not have any infected files (I have gone through all the files manually; it is a custom PHP app, nothing special) but still shows the same virus behaviour. Website B (www.bb.com) was the only one that did have infected files. I deleted all these files and suspended the account, but no luck, still the same problem. I do get the log entries on the website from the search engines, so the DNS entries have not been changed. I have also gone through the httpd files but cannot find anything. Where can I start looking for this?
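
    Independent of the virus scanners, it can help to hunt for the injected code directly from the shell. A small sketch, assuming the default DirectAdmin layout of /home/*/domains/*/public_html (adjust the glob to the real document roots):

      # files containing patterns typical of injected redirect scripts
      grep -rlE "eval\(base64_decode|document\.location|unescape\(" /home/*/domains/*/public_html 2>/dev/null
      # PHP/JS files changed in the last 14 days are often a good lead
      find /home/*/domains/*/public_html -type f \( -name "*.php" -o -name "*.js" \) -mtime -14 -ls
      # search-engine-only redirects are frequently done with rewrite rules keyed on the referer
      grep -rl "HTTP_REFERER" /home/*/domains/*/public_html 2>/dev/null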

  • Which is the best smart automatic file replication solution for cloud-storage-based systems?

    - by TORr0t
    I am looking for a solution for a project I am working on. We are developing a web system where people can upload their files and other people can download them (similar to the rapidshare.com model). The problem is that some files can be in much higher demand than others. The scenario is like this: I have uploaded my birthday video and shared it with all of my friends; I uploaded it to myproject.com and it was stored on one of the cluster nodes, which has a 100 Mbit connection. Once all of my friends want to download the file, they can't, since the bottleneck is 100 Mbit, which is 15 MB per second, but I have 1000 friends and they can only download at 15 KB per second each. (I am not even taking into account that the HDD is serving the same files.) My network infrastructure is as follows: one 1 Gbit (client-facing) server connected to 4 storage nodes that have 100 Mbit connections. The 1 Gbit server can handle the traffic of 1000 users if one of the storage nodes can stream more than 15 MB per second to it, with visitors streaming directly from the client server instead of the storage nodes. I can do that by replicating the file onto 2 nodes, but I don't want to replicate all files uploaded to my network, since that would cost much more. So I need a cloud-based system that will push files onto replicated nodes automatically when demand for those files is high, and when demand is low, delete them from the other nodes so they remain on only 1 node. I have looked at Gluster and asked in their IRC channel; Gluster can't do such a thing. It is only able to replicate all of the files or none of them, but I need the cluster software to do it automatically. Any solutions? (Other than recommending Amazon S3.)

  • How to force the start and end date of a task in Microsoft Project to be on the same day?

    - by Hauke P.
    I have a task called "Interview person A about topic X". The task's duration is set to 2 hours. The start date of the task should automatically be calculated taking dependencies and resource availabilities into account. My question boils down to: how can I force this task to start and end on the same date? Background: in my case, Microsoft Project sets the start date to a Friday at 5pm. As my working hours are set to 8am to 12pm and 1pm to 6pm (Mon-Fri), Microsoft Project "splits up" the task at 6pm on Friday and plans to continue it at 8am on the following Monday. However, it does not make any sense to stop the interview on a Friday and restart it on Monday, so the automatic suggestion is not helpful in this case. That's why I'm looking for a way to force the task to start and end on the very same day. (In my example, I'd like Microsoft Project to delay the start date of the task until Monday 8am, as this is the first time slot in which the task "fits in completely".) By the way: I have lots of such cases... for that reason it would be really great if there was a solution that doesn't just deal with this single special case.

  • Allowing access to company files across the internet

    - by Renaud Bompuis
    The premise: I've been tasked with finding a solution to the following scenario: our main file server is a Linux machine; on the LAN, users simply access the files using SMB; each user has an account on the file server and his/her own access rights; user accounts are simple passwd/group security accounts, not NIS/LDAP. The problem: we want to give users (or at least some of them, say if they belong to a particular group) the ability to access the files from the Internet while travelling. Ideally I'd like a seamless solution. Maybe something that allows the user to access a mapped drive would be ideal. A web-oriented solution is also good, but it should present files in a way that is familiar to users, in an explorer-like fashion for instance. Security is a must, of course: users would be expected to log in, and the connection to the server should be encrypted. Does anyone have pointers to neat solutions? Any experiences? Edit: the client machines are Windows only.
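
    One direction worth evaluating (not the only one, and sketched here under the assumption that Apache with mod_dav and mod_ssl is acceptable on the Linux box) is WebDAV over HTTPS, which Windows clients can map as a network drive. All names and paths below are examples, and the htpasswd accounts would be separate from the existing passwd/group accounts.

      # example vhost, e.g. /etc/apache2/sites-available/files-ssl
      <VirtualHost *:443>
          ServerName files.example.com
          SSLEngine on
          SSLCertificateFile    /etc/ssl/certs/files.example.com.crt
          SSLCertificateKeyFile /etc/ssl/private/files.example.com.key

          DavLockDB /var/lock/apache2/DavLock
          Alias /files /srv/shared
          <Location /files>
              Dav On
              AuthType Basic
              AuthName "Company files"
              AuthUserFile /etc/apache2/dav.passwd
              Require valid-user
          </Location>
      </VirtualHost>

    Windows users would then use "Map network drive" against https://files.example.com/files, or any WebDAV-capable client.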

  • What can I do with a home server?

    - by Joel Coehoorn
    I have an old 700 Mhz Pentium III at home running Windows 2000 Server, with a home router set up to pass incoming requests to it and a DynDNS account set up so it's easy to find. Right now I'm using it for a number of things: Shared folders + backup inside the home network Shared Printer inside the home network Domain Controller, just because I feel like it and because it's useful to me as practice to keep those "enterprise" administration skills. Web Server FTP remote access for my files. I abandoned this for security reasons, but it's still worth leaving visible. Remote Desktop in to the home network (thinking about adding VPN service) SVN repository MySQL - Will be moving to SQL Server 2008 Standard soon. After I upgrade my wife's laptop from home to pro later this year it will also become a domain controller It's the only place I still have access to Internet Explorer 6 any more without setting up a new virtual machine, so I use it for testing code with that browser. The question is: What else could I be doing with this machine? Update Additional ideas based on the suggestions: Media Server/DVR Build server PBX SSH Proxy Server Continuous Integration Server Personal OpenID Provider Update2 Just a note that this server was recently upgraded to an Atom330 with 2 GB ram and bigger hard drive. For all that's slow for a "modern" cpu, it should still be much faster than the old Pentium III and the expected power savings should make the upgrade essentially free over the course of the next year or two. Also, it's now running Windows Server 2008.

  • pcfg_openfile: unable to check htaccess file, ensure it is readable

    - by rxt
    After moving a website folder on my local development machine to another drive, then moving it back, I got a 403 error. Most of this problem probably had to do with rights that got messed up. After deleting the code and restoring it from SVN, the rights seemed all right, but the error stayed. The setup is a bit complex, as follows: I have Ubuntu 10.04 as my development machine, trying to mimic the server as much as possible; we use Eclipse + SVN and I create all projects in a local folder under my user account; in /var/www-vhosts I create folders for each vhost, like this one: test.localhost; test.localhost/index.php includes the index file of the project; test.localhost/.htaccess is a symbolic link to the htaccess file in a project subfolder. I get the following error in the Apache error log: [Thu Jul 08 15:55:56 2010] [crit] [client 127.0.0.1] (13)Permission denied: /var/www-vhosts/test.localhost/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable. The problem seems to be the .htaccess file, or the link to it. When I empty the htaccess, nothing changes. When I remove the link, the index include produces some output (in the Apache error log). When I remove the link and replace it with the actual file, I get another error: [Thu Jul 08 16:47:54 2010] [error] [client 127.0.0.1] Symbolic link not allowed or link target not accessible: /var/www-vhosts/test.localhost/test. I'm lost here and don't know what to do next. Do you have any ideas what I can try? This setup has worked before, but I don't know what is different now.
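
    A quick way to see where Apache is being refused is to walk the permissions along both paths. A sketch with hypothetical paths (replace /home/dev/workspace/test with the real project checkout):

      # show permissions of every path component; Apache (www-data) needs x on each directory
      namei -mo /var/www-vhosts/test.localhost/.htaccess
      namei -mo /home/dev/workspace/test/.htaccess
      # give the web server group read/traverse rights on the checkout
      sudo chgrp -R www-data /home/dev/workspace/test
      sudo chmod -R g+rX /home/dev/workspace/test
      sudo chmod g+x /home/dev /home/dev/workspace
      # and allow the vhost to follow the symlinked .htaccess, inside its <Directory> block:
      #   Options +FollowSymLinks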

  • Setting up a subdomain to access a home machine with Windows Remote Desktop

    - by ianhales
    I'm trying to remotely connect to my home machine through Windows Remote Desktop (amongst other things, but this is currently my primary focus). I can do this fine using my home WAN's static IP (thank god for cable!) with port forwarding, but I would like to access it via a subdomain of my website (e.g. home.mydomain.co.uk). In the cPanel for my hosting account, I've gone into DNS Zones and altered the A record to point to my WAN's IP, which I thought should do the job, but I still cannot connect. When I ping the subdomain, I get my web host's IP, which I guess is to be expected, as I believe the DNS of the host domain is used first and the host's server then handles the redirection of traffic to the IP in the A record. Is this the correct idea? Do A-record changes suffer from the same propagation delays as other DNS changes, as I suppose that could explain it? (By the way, this thread confirms my thoughts that setting the A record should be enough: Hostmonster Subdomain redirected to home server IP: How to ssh into home server using subdomain)
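
    For reference, the cPanel entry boils down to a single zone record; the IP below is a placeholder from the documentation range.

      ; what the A record amounts to in the zone
      home.mydomain.co.uk.    14400    IN    A    203.0.113.25

    The change can then be checked against the host's authoritative nameserver directly, bypassing cached answers (nameserver name is an example), before pointing Remote Desktop at it:

      :: from a Windows client
      nslookup home.mydomain.co.uk ns1.hostingcompany.example
      mstsc /v:home.mydomain.co.uk:3389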

  • Need advice on setting up a server: FastCGI, suEXEC, speed, security, etc.

    - by lewisqic
    I am running my own dedicated server with CentOS 5 and WHM/cPanel, and I would like to configure it to meet my needs, but I need a little help. Only my own websites will be run on this server. I'm still a little green when it comes to server administration, so please forgive my ignorance. What I would like to have: I need some public directories to be writable (for user image uploads and things like that), but I don't want those directories to have 777 permissions. I need individual accounts to be able to set custom PHP settings for their own account without affecting other accounts, whether through a php.ini file, through .htaccess, or by any other method. I would like things to run as fast as possible, whether that means using a PHP optimizer or cacher such as eAccelerator or XCache or anything else. I need things to be as secure as possible. Here are my questions: What should I use for my PHP handler? DSO? CGI? FastCGI? suPHP? Other? Should I be using suEXEC? What are the benefits or downfalls of this? Which PHP optimizer/cacher is best to use? Are there any other security tips I need to know about all of this? I'd appreciate any advice or direction that can be offered. Thanks!
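
    As a concrete illustration of the per-account PHP settings part (a sketch only; directive availability depends on which handler is compiled in, and the account name and values are examples): with suPHP or FastCGI+suEXEC each account can point at its own php.ini, while with mod_php (DSO) the overrides live in .htaccess instead.

      # vhost or .htaccess entry (where permitted) for one account when running suPHP
      suPHP_ConfigPath /home/account1/public_html
      # ...and the per-account php.ini it points at would contain lines such as:
      #   memory_limit = 128M
      #   upload_max_filesize = 16M
      #   display_errors = Off
      # under mod_php (DSO) the equivalents are .htaccess directives instead:
      #   php_value memory_limit 128M
      #   php_flag  display_errors off

    Because suPHP/suEXEC runs scripts as the account owner, upload directories can stay at 755 and owned by the account rather than needing 777.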

  • As an admin, what tools do you use to log what you do to your boxes?

    - by Jerry
    I am more of a Linux applications developer than an admin. Over time, I've built servers and maintained them, sometimes to offer services, mostly just to develop the applications I work on. Way back when, I would create a file in my account to keep notes on what I did on each machine, so that I could replicate that when I migrated to other machines. Nowadays, I set up a private Trac installation, install its blog plugin, and then use that to make notes of everything I install and most commands that I run, as well as their output. This gives me a combination wiki and blog that I find very useful as a "captain's log". I do this mostly so that when I migrate to a new clean machine, I have a much easier time bringing it up. And yet, I am always amazed when I see others just install this, delete that, run this, set up this config... without seeming to use any way to actually note what they are doing. What do YOU do, and what tools are available? I am especially interested in the transition between maintaining a few machines for a few people and maintaining several to dozens of machines providing a real service. What are the best practices, and where can I find good resources? Thanks!
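
    One low-tech complement to the wiki/Trac approach is to make the shell log itself. A sketch for ~/.bashrc (the log directory is an example and is created below):

      # timestamp shell history and flush it after every command
      export HISTTIMEFORMAT="%F %T  "
      export PROMPT_COMMAND='history -a'
      # also mirror each command into an append-only per-host log
      mkdir -p ~/admin-log
      trap 'echo "$(date "+%F %T") $(hostname) $BASH_COMMAND" >> ~/admin-log/$(hostname).log' DEBUG

    For full sessions with output included, the session can be wrapped in script(1), e.g. script -a ~/admin-log/$(hostname)-$(date +%F).typescript.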

  • Changing the Start Menu Power Button - Setting does not work: only (Shut down) is available

    - by Martin
    This is the second time I have tried to change this setting on a Vista-based OS and I can't get it to work again. OS: Windows Server 2008 SP2 (not R2) = Microsoft Windows [Version 6.0.6002], i.e. the Vista codebase. When I go to Power Options - Change Plan Settings - Change advanced power settings - Power buttons and lid - Start menu power button - Setting, the combo box will only show the option Shut down. No other options are available. This server is part of a domain and has not been set up by me. I have not yet talked with the domain admin, but as far as I could tell from googling, only Win7 has group policy options for the start menu. (And yes, of course I will talk to the domain admin to see if he has any clue - which I doubt.) (Edit: I have now talked to our domain admin, and he's got no clue either.) I'm responsible for this server and a local administrator, but not a domain administrator. I switched off User Account Control (UAC) yesterday without problems. Since I always log into this machine via RDP, and this being a server, the natural choice would be the option (Log out) and not (Shut down). What can I do to fix it, or to find out why it cannot be changed? Thanks!

  • Powershell Script Scheduled Task Stopped Running (Could not Start)

    - by Hatsune Yuki
    I'm running a scheduled task (for a PowerShell script) on Windows Server 2003. I believe the script works fine. The task is scheduled to run every 10 minutes from 7:00am to 11:50pm every day. However, it never keeps running for more than a day; it always stops some time in the afternoon (between 2pm and 6pm). I'm not sure exactly what happens, but I always get the error: The attempt to log on to the account associated with the task failed, therefore, the task did not run. The specific error is: 0x80070569: Logon failure: the user has not been granted the requested logon type at this computer. Verify that the task's Run-as name and password are valid and try again. It seems most people with this error say they need to give the user the "log on as a batch job" right, but this option is greyed out for me. I searched other places where users have similar problems, but the solutions are not written in detail (some of them have something to do with a GPO). I've only used the basic features of Windows Server and I have no clue how to get to the place they are referring to. Can someone please confirm whether "log on as a batch job" is indeed the solution, and provide a detailed walkthrough of how to solve my problem? Thanks. p.s. Someone suggested the page http://technet.microsoft.com/en-us/library/cc755659(v=ws.10) - I tried to follow the method for a web server with a domain, but got stuck on the 6th step where it mentions a Group Policy Object. I don't know where that is.
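
    "Log on as a batch job" is a user rights assignment, and when the local checkbox is greyed out it is often because a domain GPO owns that setting. A sketch of the command-line route on Windows Server 2003 (ntrights.exe comes from the Windows Server 2003 Resource Kit Tools; the account name is an example):

      :: see which policy currently wins for User Rights Assignment
      gpresult /z > c:\gpresult.txt
      :: grant the right locally (a domain GPO that defines it will overwrite this on refresh)
      ntrights +r SeBatchLogonRight -u MYDOMAIN\TaskUser
      :: re-apply policy, then re-enter the task's password in Scheduled Tasks
      gpupdate /force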

  • How to detect/list rogue computers connected to a Wi-Fi network without access to the router interface? [migrated]

    - by JJarava
    This is what I believe to be an interesting challenge :) A relative (who lives a bit too far away to go there in person) is complaining that her Wi-Fi/Internet network performance has gone down abysmally lately. She'd like to know if some of the neighbors are using her Wi-Fi network to access the internet, but she's not too technically savvy. I know that the best way to prevent issues would be to change the router password, but it's a bit of a PITA having to re-configure all the Wi-Fi devices... and if the uninvited guest broke the password once, they can do it again... Her Wi-Fi router/internet connection is provided by the telco and remotely managed, so she can log on to her telco account's page and remotely change the router's Wi-Fi password, but she doesn't have access to the router status page/config/etc. unless she opts out of the telco's remote support and maintenance service... So, how could she check whether there are guests on the Wi-Fi, given these restrictions and in the most point-and-click way possible? In this case I'd probably use nmap to look for other devices in the network, but I'm not sure that's the easiest way to do it. I'm not a Wi-Fi expert, so I don't know if there are any Wi-Fi-scanning utilities that can tell us who's talking to the router... Lastly, she's a Windows user, and I guess that'll influence the choice of tools available. Any suggestions are more than welcome. Regards!
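
    Since nmap was already on the list: its Windows installer includes the Zenmap GUI, which keeps this reasonably point-and-click. A sketch of the scan itself (adjust 192.168.1.0/24 to the router's actual subnet; devices behind strict firewalls may not answer):

      # ping-sweep the home subnet and list everything that responds (-sP on older nmap releases)
      nmap -sn 192.168.1.0/24
      # afterwards, the ARP cache shows the MAC addresses that answered (this also works on Windows)
      arp -a

    Comparing the listed MAC addresses against the household's known devices is then the manual part.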

  • Site name questions [closed]

    - by avenas8808
    I'm creating a website on German cars. There are no issues with it as such, since it's basically a PHP site, and let's say for the sake of argument that there are no server issues. However, I plan to call it "autohaus" (well, for the sake of this scenario anyway; it's not its actual name). The site is not for commercial gain; it's a fansite. Legally, can I use a name for a website if it's a commonly-used one? "Autohaus" appears to be such a name. There are other sites called "Autohaus", but could someone use a similar name for non-commercial, information/educational use? Basically, what's the copyright law for such a situation? Currently my site is only on localhost, but what if I did want to release it to the web? Obviously the legal issues have to be taken into account before the site goes live, but there are no Data Protection Act concerns etc., since it's not selling anything and no one is buying anything, and the pictures in it are being used educationally (but that's for another question entirely). [NOTE: This question is on trademark/names etc., not domains]

  • Sending same email through two different accounts on different domains using Outlook 2010

    - by bot
    I am a programmer and don't have experience with Outlook configuration. Our company has two email domains, namely xyz.com and xyz.biz. Each employee has an email ID on one of these domains, but not both, depending on the project they are working on. The problem we are facing is that when a communication email is sent from the Accounts, HR, Admin, etc. departments, they need to send the email twice: once through the xyz.com account to all employees with an email address on xyz.com, and once through xyz.biz to all employees with an email address on xyz.biz. I am not sure why they have to send two separate emails, but the IT team has directed all departments to do so, as there is no other solution according to them. Even though two different groups have been created, sending an email from xyz.com to employees in the xyz.biz group does not seem to work. I want to know if Outlook provides a feature such that we can configure some kind of rule to send an email through an ID on xyz.com to all users on xyz.com, and have the same email sent automatically to users on xyz.biz through an ID on xyz.biz. The only technical detail I know is that we are using Exchange 2003, and the IT team claims that this is a limitation causing the problem. Edit: Our company is split into two main divisions depending on the type of projects. I am pretty sure I use domain XYZ, whereas the employees in the other division use domain ABC to log in to their Windows machines or Outlook itself. Also, employees in domain XYZ can access the machines on the network in domain ABC, but not the other way around.

  • Exchange 2003 Internet Mail Size Limits

    - by scampbell
    I have unsuccessfully tried to increase per user incoming mail size settings by editing their user account settings on our Exchange server, but large incoming mail from external domains is still blocked using the default global settings. After reading here: http://support.microsoft.com/default.aspx?scid=kb;en-us;322679 I see that All Internet e-mail messages use the global setting for limits on sending and on receiving. The message categorizer evaluates the sender's sending limit and the recipient's receiving limit. In example 2 earlier, a user with a user mailbox limit of 3 MB could receive messages from another user with a 3-MB sending limit. Because Internet users use the global setting, they can send only a 2-MB message. Which to me is madness! Surely if I want to allow a user to receive mail up to a certain size then I should be able to set it as such? Is there a specific way of getting round this? Would setting the global defaults high and setting a lower, say 10MB, limit on the SMTP connector do the trick? Thanks.

  • How do I change the default FTP folder in Mac OS X 10.6?

    - by Wild_Eep
    I'm running WordPress 2.9.1 from a Mac running 10.6.3. WordPress is installed to the /Library/WebServer/Documents folder. WordPress has a feature called AutoUpdate. Clicking an autoupdate button will download and install updated versions of the WordPress software, or third-party plugin tools. It's a convenient way to keep things up to date. WordPress uses FTP to download the files. I've enabled FTP and set up a user account and opened the requisite ports in my firewall for FTP traffic. This doesn't seem to be enough for my self-hosted installation, though. I'm sure this feature was originally designed for someone who has access to a remote shared webserver, and that it's merely a configuration challenge related to the FTP setup. I feel that if I can adjust the initial directory that the FTP service presents to the AutoUpdate feature, everything else will work properly. So, my question is, how do I adjust what folder is presented when a given user connects to a Mac running 10.6.3 via FTP?
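
    On 10.6 the bundled FTP server generally drops each user into that account's home directory, so one hedged approach is a dedicated FTP account whose home is the WordPress docroot. The account name below is an example, and dscl changes the account's home everywhere, so this is best kept to an FTP-only service account:

      # inspect, then repoint, the account's home directory
      sudo dscl . -read   /Users/wpftp NFSHomeDirectory
      sudo dscl . -create /Users/wpftp NFSHomeDirectory /Library/WebServer/Documents
      # the account also needs write access there for AutoUpdate to replace files
      sudo chown -R wpftp /Library/WebServer/Documents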

  • Change to a different user, or let a different user execute a command

    - by WG-
    I have a problem. There is a server which I can access over ssh with an account, let's say WG. Now there is a folder with the following permissions: drwxr-s---+ 855 vvz www-data 20K Aug 21 17:56 pictures. I want to copy this folder using rsync; however, since I am not the user www-data but WG, I cannot do so. So I want www-data to execute an rsync command for me. However, I do not possess sudo powers. My friend tells me that I am actually able to execute the rsync command as www-data, but he will not tell me how. I asked him for some clues and he told me that it had something to do with a reverse shell (which I figured out to mean that you connect by ssh to your server and it then connects back to your own machine, or something). I also asked if it was by design or actually a flaw in the system; he tells me it is both. Furthermore, I think it has something to do with the group permissions: if I can just make sure that I am covered by the group permissions, then I can also read the files. Does anybody have a clue?
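
    Before chasing the reverse-shell hint, it may be worth checking what the trailing "+" in drwxr-s---+ already grants: that plus sign means an ACL is present, and it might name WG or one of WG's groups. A diagnostic sketch with placeholder paths:

      # show the full ACL on the folder
      getfacl /path/to/pictures
      # show which groups the WG account belongs to
      id
      # if the ACL grants read+execute to WG (or one of its groups), a plain rsync pull works:
      rsync -av WG@server:/path/to/pictures/ /local/backup/pictures/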

  • Windows: force a user to use a specific network adapter

    - by Chad
    I'm looking for a configuration/hack to force a particular application, or all traffic from a particular user, to use a specific NIC. I have a legacy client/server app that has a "security feature" which limits connections based on IP address. I'm trying to find a way to migrate this app to a terminal server environment. The simple solution is for the development team to update the code in the application; however, in this case that's not an option. I was thinking I might be able to install a VMware NIC for each user on the terminal server and do some type of scripting to force that user account to use a specific NIC. Does anybody have any ideas on this? EDIT 1: I think I have a hack to work around my specific problem, but I'd love to hear of a more elegant solution. I got lucky in that the software reads the server IP address out of a config file. So I'm going to make a config file and a custom program-files folder for each user. Then I'll add a VMware NIC for each user and make each server IP address reside on a different subnet. That will force the traffic for a particular user to a particular IP address; however, it's really messy, and all the VM NICs will slow down the terminal server. I'll set up a proof of concept on Monday and let the group know how it affects performance.

  • DNS - wildcard vs. CNAME subdomains

    - by Matthew
    Alright I have to admit I'm confused with how DNS works. I've always just added things until they worked, and now it's time to learn how they work. So one confusing thing to me is that there's sort of two places I can have records. I have an account with rackspace cloud servers. And then there's the place I registered the domain. But both allow me to edit DNS records. Should I do everything at both places or is one better than the other or am I missing the point? Subdomains confuse me too. I'd like to be able to just have a wildcard subdomain (I've done this in the past.) I just don't like the idea of adding a cname record or A record every time I need a new subdomain. Then I read this and it says: The exact rules for when a wild card will match are specified in RFC 1034, but the rules are neither intuitive nor clearly specified. This has resulted in incompatible implementations and unexpected results when they are used.
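
    For reference, this is roughly what a wildcard looks like at the zone level; the web panels mentioned above just edit records like these. Names and IPs are examples, and the records belong wherever the domain's authoritative nameservers live, which is usually only one of the two control panels:

      ; any otherwise-undefined name under example.com resolves via the wildcard
      *.example.com.      3600    IN    A        203.0.113.10
      ; explicitly defined names still win over the wildcard
      www.example.com.    3600    IN    A        203.0.113.10
      blog.example.com.   3600    IN    CNAME    www.example.com.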

  • How to make Thunderbird play nice with Google mail

    - by Christi
    Thunderbird and gmail aren't exactly the best of friends. Gmail's tags mean that Thunderbird often downloads multiple copies of a single mail. Anything tagged in gmail will appear in a folder related to that tag, the "all mail" folder, and possibly the "inbox" and "sent mail" folders too. Thus a mail with multiple tags could potentially be stored more than four times in a local Thunderbird cache. This can make searching difficult, and is obviously wasteful of disk space. The best solution I have come up with is as follows. Operate a zero inbox policy (i.e. use the inbox for processing live mail only and archive everything else) which eliminates an extra copy in the inbox. Secondly, configure Thunderbird not to sync the "Sent Mail" folder - this is a bit of a pain, since I actually find it quite useful to be able to look through just the mails I've sent, but a search can duplicate this functionality. In this way, most of the duplicates are removed, and only mail with tags is stored locally more than once. Ideally, however, I'd only like one copy of each mail to be stored locally. I am surprised Thunderbird doesn't store mail by some sort of hashing algorithm to prevent precisely this problem - but it wouldn't be compatible with the way the folders are mirrored in a local directory structure, I suppose. Can anyone think of a better way to get Thunderbird to cache a Google mail account locally efficiently.

  • How can I tell why I have access to a file share on Windows Server

    - by Joel
    I have a file share on a Windows 2008 R2 server in an AD domain (call it \\SECURESERVER\STUFF) and I am not sure whether I have the share and folder permissions set up right. I noticed the problem when I set up a new server (WORKGROUP\FOREIGNSERVER) that was not joined to the domain and tried to copy some files off of \\SECURESERVER\STUFF. I was surprised to find that when I tried to access the files, it did not prompt me for a username and password, and proceeded to give me full access. That worried me, so I tried the same thing on some workstations that were not in the domain, and they did NOT behave the same way (they did prompt for a username/password, as desired/expected). So I think there is something peculiar about FOREIGNSERVER. I am logging into it with a local admin account, but my domain and SECURESERVER should know nothing of this server. I've carefully gone through the share and folder permissions on the share, but I can't find the reason that FOREIGNSERVER has access. How can I find out why FOREIGNSERVER has access to SECURESERVER?
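
    A few commands narrow down which credentials FOREIGNSERVER is actually presenting and what the server is willing to accept. A sketch only; the share path and names are placeholders:

      :: on SECURESERVER: what the share- and NTFS-level ACLs actually contain
      net share STUFF
      icacls D:\Shares\Stuff
      :: on FOREIGNSERVER: existing sessions and any stored credentials for SECURESERVER
      net use
      cmdkey /list
      :: on SECURESERVER: an enabled Guest account is a classic cause of "no prompt, full access"
      net user guest

    The Security event log on SECURESERVER (logon events) will also name the account that the unprompted connection mapped to.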
