Search Results

Search found 7211 results on 289 pages for 'enable'.

  • Overscan in Catalyst Control Center

    - by alex
    I've connected my PC to my TV via HDMI (I've also tried using DVI). Whatever the resolution (anything from 800 x 600 to 1080p), the image displayed on the TV does not fill the entire screen surface; there's a black border on the edge of the image. This happens in both Vista and Windows 7. After a bit of searching I came to the conclusion it's got something to do with overscan done by the graphics card (ATI 3200HD). It's definitely not the TV because on my old PC there were no such problems. I've searched for the option in Catalyst Control Center 9.9 but it's not available for the TV; if I connect a normal LCD display, the option is there, but only for the LCD screen. If I choose Configure for my TV, where the overscan option should be, it takes me to the Welcome tab in CCC. How can I make the image fill the entire screen? How can I enable/disable overscan for my TV?

  • Why are certain folders in my XP network share really, really slow?

    - by bikefixxer
    I have a workgroup set up with Windows XP. My file "server" is running XP Pro and the clients are running XP home. I've turned simple file sharing off on the server because certain clients need access to certain folders and not to others, and I want to keep it that way. Therefore, I've used the granular sharing/security settings to enable certain clients access to certain folders. I'm using the net use command in a batch file on the clients to add the share when they logon so it's always available via a mapped drive or a shortcut. On some clients "My Documents" points to the mapped drive, but all of the local and application settings stay local. Everything works well except for accessing a certain folder on the network. It contains a lot of random batch files and self-executable programs I use for diagnostics and what not, and nearly every time I open the folder the computer hangs for 15-60 seconds. This happens on every machine, including the server (but not nearly as often as the clients). I've searched high and low and cannot figure it out and it's driving me crazy. Here are all the things I've tried to no avail:

        * Disabled firewall (XP) and anti-virus (ESET NOD32)
        * Deleted any desktop.ini file I can find in the share
        * Disabled "automatically search for network folders and printers"
        * Disabled "remember each folder's view settings"
        * Set HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer NoRecentDocsNetHood = 1
        * Tried with mapped drives and with UNC shortcuts
        * Ran CHKDSK
        * Removed Read-Only attribute from all folders (well, tried to remove, it always came back on with a half check)
        * Added the server's static IP to the hosts file on the clients

    I've tried monitoring the server's performance to see if anything makes sense. Occasionally the issue coincides with a spike in pages/sec (memory) but not always. Other than that, everything else seems normal. The anti-virus would seem to be the most likely cause to me considering the batch files and what not, but it still hangs when it is completely disabled. I'm at a loss and if anyone can help me with this I'd greatly appreciate it!
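
    For reference, the kind of logon batch file described above is usually just a one-liner; a minimal sketch, with the server name, share name and drive letter as placeholders, looks like this:

        @echo off
        rem map the file server share at logon (server, share and drive letter are examples)
        net use S: \\FILESERVER\tools /persistent:no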

  • Word 2010 does not save as Word 2003 XML

    - by Peter
    I have a document which was created in Word 2010, but for use in a particular application, it needs to be saved in Word 2003 XML format. When I try the normal "Save as" via the File menu (choosing Word 2003 XML format to save as), Word 2010 thinks for a while, and then presents the "Save as" dialog to me again, suggesting that I save the document as .docx. Trying to get around this, I saved the document as .doc (i.e. Word 97-2003 document). This worked fine. But when I try to save this .doc file as Word 2003 XML, again Word 2010 thinks for a while, and then presents the "Save as" dialog, suggesting this time that I save the document as .doc. Oh, and I need to say that this only happens on a specific document - all others work fine. I know I should try a process of elimination and see what is causing the symptoms, but it would be nice to have an answer "in principle". Is there perhaps a setting somewhere that I have to enable? Does anyone know what's going on here?

  • TLS_REQCERT and PHP with LDAPS

    - by John
    Problem: Secure LDAP queries via command-line and PHP to an AD domain controller with a self-signed certificate. Background: I am working on a project where I need to enable LDAP look-ups from a PHP web application to a MS AD domain controller that is using a self-signed certificate. This self-signed certificate is also using a domain name that is not a FQDN - think of something like people.campus as the domain name. The web application would take the user's credentials and pass them on to the AD domain controller to verify if the credentials are a match or not. This seems simple, but I am having problems trying to get PHP and the self-signed certificate to work. Some people have suggested that I change the TLS_REQCERT variable from "request" to "never" within the OpenLDAP configuration. I am concerned that this might have larger implications such as a man-in-the-middle attack and I am not comfortable changing this setting to never. I have also read some places online where one can take a certificate and place it as a trusted source within the openldap configuration file. I am curious if that is something that I could do for the situation that I have? Can I, from the command line, obtain the self-signed certificate that the AD domain controller is using, save it to a file, and then have openldap use that file for the trust that it needs so that I do not need to adjust the variable from request to never? I do not have access to the AD domain controller and as a result cannot export the certificate. If there is a way to obtain the certificate from the command line, what commands do I need to use? Is there an alternate method of handling this issue that would be better in the long run? I have some CentOS servers and some Ubuntu servers that I am working with to try and get this going. Thanks in advance for your help and ideas.
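
    A rough sketch of one way to do exactly that from the shell, assuming the domain controller answers LDAPS on port 636 (the host name and file paths below are only examples; on Ubuntu the client config is usually /etc/ldap/ldap.conf, on CentOS /etc/openldap/ldap.conf):

        # fetch the certificate the domain controller presents and save it as PEM
        openssl s_client -connect dc.people.campus:636 -showcerts </dev/null \
            | openssl x509 -outform PEM > /etc/openldap/certs/ad-dc.pem

        # point the OpenLDAP client libraries (and therefore PHP's ldap extension)
        # at that file instead of weakening TLS_REQCERT
        echo "TLS_CACERT /etc/openldap/certs/ad-dc.pem" >> /etc/openldap/ldap.conf

    One caveat: because the certificate is issued for a non-FQDN name, the LDAP URI used by PHP still has to match the name in the certificate, otherwise host-name checking can fail even when the certificate itself is trusted.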

  • Terminal emulation has stopped working. Garbage escape chars

    - by oligofren
    To enable me to do some remote administration of our servers I started using a terminal emulation program called TouchTerm Pro on my iPhone. While not the smoothest experience, it has allowed me to leave my computer behind when going out of town, which makes the slightly painful experience worthwhile. As of late, the app unfortunately no longer works. Pressing the up and down keys after logging on via ssh gives me garbage like ^[[A and ^[[B. Combinations with Ctrl - like you can see in the video - no longer work either. Writing full command lines and executing them with the Enter key works, though. Being able to search my bash history was the difference between a usable app and endless frustration, so getting it to work is essential. The app has (of course) met its end of life, not getting updated anymore. I am not quite sure which side (client or server) has to be "fixed"/hacked to make the control sequences work again. But is there something I can do to make it work as intended? You can see a video of TouchTerm in operation here.

  • How do I prevent my ASP .NET site from continually prompting for user credentials?

    - by gilles27
    I'm trying to get an ASP .NET website up and running on IIS6. The site will run in its own application pool, and uses Windows authentication, with anonymous access turned off. When I run the app pool under NETWORK SERVICE, everything works fine. However we need the app pool to run under a different account, because this account needs some extra privileges (we are printing Word documents). This new account is a member of the local users group, and the IIS_WPG group. It has also been granted the "Log on as a service" right. When I browse to the site I am prompted for credentials, not once, but several times. When the page finally loads it looks wrong because the style sheets have not been applied. My suspicion is that I am being prompted once for each file (e.g. all the images, styles and script files) the browser requests, and that for some reason the website is unable to validate those credentials in order to serve the files back. If I allow anonymous access the page loads fine - we don't want to allow it but I mention it in case it offers any further clues. My theory is that perhaps the account the app pool runs under needs permissions to validate domain credentials? If that is so, how do I enable this?
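
    One common cause of this symptom when a custom domain account runs the app pool is a missing or mis-registered Kerberos service principal name, which makes integrated authentication fail for every request and triggers the repeated prompts. As a hedged sketch (the host and account names are placeholders, and the commands need domain admin rights):

        rem list the SPNs currently registered for the app pool account
        setspn -L MYDOMAIN\svc-apppool

        rem register HTTP SPNs for the site's host name against that account
        setspn -A HTTP/intranet MYDOMAIN\svc-apppool
        setspn -A HTTP/intranet.mydomain.local MYDOMAIN\svc-apppool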

  • Using Samba to share a folder from a Linux guest with a Windows host in VirtualBox

    - by AmV
    I would like to share a folder from a Linux Guest with a Windows host (with read and write access if possible) in VirtualBox. I read in these two links: here and here that it's possible to do this using Samba, but I am a little bit lost and I need more information on how to proceed. So far, I managed to set up two network adapters (one NAT and one host-only) and install Samba on the Linux guest, but now I have the following questions: What do I need to type in samba.conf to share a folder from the Linux guest? (the tutorial provided in one of the links above only explains how to share home directories) Are there any Samba commands that need to be executed on the guest to enable sharing? How do I make sure that these folders are only available to the host OS and not on the Internet? Once the Linux guest is setup, how do I access each of the individual shared folders from the Windows host? I read that I need to mount a drive on Windows to do this, but do I use Samba logins, or Linux logins, also do I use localhost? or do I need to set up an IP for this? Thanks!
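
    As a starting point, a share definition in the guest's /etc/samba/smb.conf might look roughly like the sketch below; the path, share name, user and the host-only subnet are assumptions, and the hosts allow line is what keeps the share reachable only from the host-only network rather than from the internet-facing NAT side:

        [shared]
            path = /home/user/shared
            browseable = yes
            read only = no
            valid users = user
            # accept connections only from loopback and the VirtualBox host-only network
            hosts allow = 127.0.0.1 192.168.56.

    After that, restart Samba on the guest, add a Samba password for the Linux account with smbpasswd -a user, and on the Windows host map a drive against the guest's host-only IP (for example \\192.168.56.101\shared), logging in with that Samba username and password rather than with localhost.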

  • Why can't I mount an image hosted on a read-only HFS+ partition via Boot Camp?

    - by deceze
    I have come across the following phenomenon and would like to know how leaky Windows' file system abstraction is or if there's something else involved. I partitioned the hard disk of my MacBook Pro and installed Windows 7 (64 bit). The Boot Camp driver package includes file system drivers that enable Windows to access the Mac OS HFS+ partition. It's read-only access, but it works. Now, I have some disk images of stuff I usually install, so I grabbed a copy of Daemon Tools to mount them. When I mount an image saved on the HFS+ partition, about two out of three installers on these disks (usually InstallShield) crash with all sorts of weird errors. Most are just gibberish that lead to all sorts of non-solutions on Google; one was "This application is not the right type for your computer, check if you need 32 or 64 bit versions." When moving the image files to another Windows 7 computer on the network and mounting them from the network share, they work fine. My question now is, why do applications behave differently depending on whether the read-only image file, which should be abstracted away through the read-only virtual Daemon Tools drive, is located on a read-only HFS+ partition or on a Windows network share? And I'll just roll this into the question as well since I was wondering: Does the file system of a network share matter? Does the client system need to understand the file system of the share host, or is that abstracted away in SMB?

  • Postfix + Exchange + ActiveDirectory; How to mix them

    - by itwb
    My client has got many sub-offices, and one head office. The head office has a domain name: business.com. All users in the many sub-offices need to have a head office email address: [email protected]. Anyone not in the head office will need the email forwarded to an external email address. All users in the head office will have their email delivered to Microsoft Exchange. Users are listed in Active Directory under two different OUs: HeadOffice or SubOffice. Is this something that can be configured? I've done some googling, but I can't find any examples or businesses set up this way. Edit: Postfix will accept all email and will need to determine whether to forward the email to an external account or have it delivered to MS Exchange. I've done some reading about MS Exchange and found that you can 'mail-enable' contacts for forwarding - but I don't know if each AD account requires an Exchange CAL? The end goal is to forward email for sub-office users to external accounts and to accept email for head office users. Maybe I don't need to worry about Postfix to perform this task..... http://www.windowsitpro.com/article/exchange-server-2010/exchange-server-licensing-some-of-your-questions-answered "What about client access licenses (CALs)? You need one CAL per user who will connect to Exchange. Although it might not be 100 percent precise, I prefer to think of it as one CAL per mailbox; there are exceptions for users outside your organization, automated tools that use mailboxes, and so on. Exchange doesn't enforce this limit, so it's on you to ensure that you have the correct number of CALs for the set of clients you support."
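
    On the Postfix side this kind of split delivery is usually handled with a transport map for the Exchange-bound domain plus virtual aliases for the users who should be forwarded externally. A very rough sketch, with every host name and address invented purely for illustration:

        # /etc/postfix/main.cf (fragment)
        relay_domains      = business.com
        transport_maps     = hash:/etc/postfix/transport
        virtual_alias_maps = hash:/etc/postfix/virtual

        # /etc/postfix/transport
        # anything left addressed to the head office domain is relayed to Exchange
        business.com    smtp:[exchange.business.com]

        # /etc/postfix/virtual
        # sub-office users are rewritten to their external addresses before the
        # transport lookup, so their mail never reaches Exchange
        jane.doe@business.com    jane.doe@externalmail.example

    The per-user forwarding table could also be looked up directly in Active Directory with an ldap: map keyed on the OU, instead of being maintained by hand.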

  • Accessing a webpage folder with .htaccess in it via apache webdav?

    - by pingo
    I have set up webdav access in order to enable an external user to upload the content of his web page to his folder on my server, which is served by apache to the web. This way he can update his web page via webdav. Now the problem is that the user requires a .htaccess file, and of course .htaccess breaks webdav, probably because it overrides settings (new files cannot be uploaded anymore via webdav if the .htaccess specified below exists). I am running Apache 2.2.17 and this is my webdav config:

        Alias /folderDAV "d:/wamp/www/somewebsite/"
        <Location /folderDAV>
            Order Allow,Deny
            Allow from all
            Dav On
            AuthType Digest
            AuthName DAV-upload
            AuthUserFile "D:/wamp/passtore/user.passwd"
            AuthDigestProvider file
            require valid-user
        </Location>

    This config is part of my naive solution to fixing this problem. The idea was to specify an alias to the web page folder where webdav would be enabled and then set AllowOverride to None so that the .htaccess would have no effect. Of course I then found out that the AllowOverride directive is not valid inside <Location />. The .htaccess file looks like this:

        #opencart settings
        Options +FollowSymlinks
        Options -Indexes
        <FilesMatch "\.(tpl|ini)">
            Order deny,allow
            Deny from all
        </FilesMatch>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)\?*$ index.php?_route_=$1 [L,QSA]
        ErrorDocument 403 /403.html
        deny from 1.1.1.1/19
        allow from 2.2.2.2

    What would be the solution here? I would like to have the web page accessible from the web but at the same time be able to access and modify it via apache's webdav (with digest auth). How would I do that? Also, if possible, I would like a solution that permits the existence of the .htaccess so that the user still has the power to set up access rules for his web page.
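
    One idea, offered only as an untested sketch, is to leave the normal site alone and serve the DAV view of the same directory from its own VirtualHost on a separate port; a <Directory> block inside that vhost can carry AllowOverride None, so the customer's .htaccess is ignored for DAV requests while the port-80 site keeps honouring it:

        # listen on a dedicated port for WebDAV uploads (8080 is just an example)
        Listen 8080
        <VirtualHost *:8080>
            DocumentRoot "d:/wamp/www/somewebsite"
            <Directory "d:/wamp/www/somewebsite">
                Dav On
                AllowOverride None
                Order Allow,Deny
                Allow from all
                AuthType Digest
                AuthName DAV-upload
                AuthUserFile "D:/wamp/passtore/user.passwd"
                AuthDigestProvider file
                require valid-user
            </Directory>
        </VirtualHost>

    With this in place the Alias /folderDAV on port 80 could be dropped, and the .htaccess stays in effect only for ordinary visitors.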

  • Configuring MySQL for Power Failure

    - by Farrukh Arshad
    I have absolutely no experience with databases and MySQL. Now the problem is I have an embedded device running a MySQL database with a web based application. The problem is that when I shut down my embedded device it just cuts off the power, and I cannot have a controlled shutdown. Given this situation, how can I configure MySQL to protect it from failures and, in case of a failure, have maximum support to recover my database? While searching this, I came across the InnoDB engine as well as some configuration options to set, like sync_binlog=1 & innodb_flush_log_at_trx_commit=1. I have noticed my default engine is InnoDB and binary logs are also enabled. What other configuration should I make for the best possible failure & recovery support? Updated: I will be using the InnoDB engine, which supports transactions. My question is how best I can configure it (InnoDB + MySQL) so that it provides the best possible fail-safe as well as crash recovery mechanism. One configuration option I came across is to enable binary logging, which InnoDB uses at the time of recovery. Regards, Farrukh Arshad
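
    For reference, the durability-oriented settings usually mentioned in this context look something like the fragment below; this is a sketch of commonly recommended values, not a tuned configuration for this particular device:

        # /etc/mysql/my.cnf (fragment)
        [mysqld]
        default-storage-engine         = InnoDB
        # flush and sync the InnoDB redo log at every transaction commit
        innodb_flush_log_at_trx_commit = 1
        # sync the binary log to disk at every commit
        sync_binlog                    = 1
        # keep the doublewrite buffer enabled so torn page writes are repaired
        innodb_doublewrite             = 1
        # one file per table makes later recovery and repair easier
        innodb_file_per_table          = 1

    With InnoDB plus these flush settings, a pulled plug should at worst lose the last in-flight transaction; InnoDB replays its redo log automatically on the next start.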

  • Apache Server Status page in port 8443

    - by batman
    I'm very new to Apache. I tried to enable the server status page of Apache. I added status.conf and status.load to the mods-enabled directory, and changed the config of apache2.conf to include the whole mods-enabled directory. This is the config of status.conf:

        <IfModule mod_status.c>
            # Allow server status reports generated by mod_status,
            # with the URL of http://servername/server-status
            # Uncomment and change the "192.0.2.0/24" to allow access from other hosts.
            <Location /server-status>
                SetHandler server-status
                Order deny,allow
                Deny from all
                Allow from 127.0.0.1 ::1
                # Allow from 192.0.2.0/24
            </Location>
            # Keep track of extended status information for each request
            ExtendedStatus On
            # Determine if mod_status displays the first 63 characters of a request or
            # the last 63, assuming the request itself is greater than 63 chars.
            # Default: Off
            #SeeRequestTail On
            <IfModule mod_proxy.c>
                # Show Proxy LoadBalancer status in mod_status
                ProxyStatus On
            </IfModule>
        </IfModule>

    These are the default settings. I restarted my server. I'm redirecting all ports to 8443, which in turn sends my requests to localhost:8443/server-status, which throws a 404 error. Is there any way to get around this? Thanks in advance.

  • Unable to connect to server after a certain amount of time

    - by Troy
    I am a business FIOS subscriber with 5 static IPs. I have the following network setup:

        * Verizon provided ONT
        * Dlink switch
        * Dell server running Ubuntu 12.04 with iptables enabled and a static IP address

    The makes/models of hardware are:

        FIOS ONT: Alcatel-Lucent I-211M-H ONT
        Switch:   D-Link Web Smart Switch DES-1228P
        Server:   Dell Optiplex 755 (Ubuntu 12.04 Server)

    I have iptables running on the server with http, https and ssh ports open. I can connect to a website on the server from an external computer, but after a certain amount of time (mins to hours), I can no longer connect. All I have to do to re-enable connectivity is connect to the server via SSH from a computer INSIDE the network. I don't have to actually login, I just have to establish a connection. I can then access the website externally again. I did some googling and it seems some of Verizon's equipment had an ARP bug where the ARP entries would expire after a certain time period, but those issues all seem to be from back in 2009 - 2010. I know the switch has an 'auto learning Mac address' feature, but I'm not sure if that could be the problem or not. Does anyone have any ideas or advice on how I can troubleshoot this? Thanks!
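
    If it does turn out to be an ARP entry on the ONT expiring, a crude workaround is to have the server generate a little outbound traffic on a schedule so the upstream gear keeps relearning its MAC address before the entry ages out. A minimal sketch (the gateway address is a placeholder):

        # /etc/cron.d/arp-keepalive - ping the default gateway every five minutes
        # (replace 198.51.100.1 with the actual FIOS gateway address)
        */5 * * * * root /bin/ping -c 1 -q 198.51.100.1 > /dev/null 2>&1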

  • How do you apply proxy settings per computer instead of per user?

    - by Oliver Salzburg
    So far, I've used a user group policy object utilizing Internet Explorer maintenance to set a proxy for the user in IE. We have now deployed the Enterprise Client (EC) starter group policy to our domain and this policy affects this behavior. The EC group policy uses the policy Make proxy settings per-machine (rather than per-user). This policy describes itself as: This policy is intended to ensure that proxy settings apply uniformly to the same computer and do not vary from user to user. Great! This seems to be an improvement over my previous setup. If you enable this policy, users cannot set user-specific proxy settings. They must use the zones created for all users of the computer. What zones and where do I configure the proxy settings for them? I assumed the policy would simply take the user settings and apply them, but that's not what's happening. Now no proxy server is set at all. So my previous settings obviously no longer have any effect. So far, I've only come up with solutions that involved direct manipulation of the Windows registry. Which is fine, I guess, but the way the proxy is configured for users makes it appear as if there could be a higher level approach.
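
    For what it's worth, the per-machine switch and the machine-wide proxy values live in the registry, so one low-level way to see (or script) what the policy is doing is to look at the keys directly. This is only a sketch and the values are placeholders; the policy path can differ between Windows versions:

        rem force proxy settings to be per machine rather than per user
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings" ^
            /v ProxySettingsPerUser /t REG_DWORD /d 0 /f

        rem machine-wide proxy server and enable flag (example values)
        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings" ^
            /v ProxyEnable /t REG_DWORD /d 1 /f
        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings" ^
            /v ProxyServer /t REG_SZ /d "proxy.example.local:8080" /f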

  • Can Dovecot IMAP automatically create Maildir folders for new (virtual) users?

    - by user233441
    I am learning to set up a dovecot home IMAP server using a virtual Ubuntu 12.04 machine. My intention is eventually to have a home server that uses POP3 to take email from several addresses and remove them from my ISP's servers, while making them accessible through a home IMAP server (this is similar to the setup described at https://help.ubuntu.com/community/POP3Aggregator, which explains how to set up the system with dovecot version 1, and is thus outdated). I intend to use the ISP's server directly when sending messages, and to BCC all sent messages to myself. I've completed the basic set up of the test server: getmail uses POP3 to fetch messages from two test email accounts, and successfully delivers them to the respective Maildir-style new folders on the virtual machine. Dovecot then successfully sees these messages. I have two questions: 1) I had to set up new, cur, and tmp folders for both of the test accounts manually to get this setup to work. Is there a way to get dovecot to create these Maildir folders automatically when I create a new virtual user account (e.g., when I add a user and password combination to my dovecot password file), or is it expected that I write a bash script to automate that task? 2) I would welcome any comments you have on how this approach could be improved as I learn to set it up. My motivations with this approach are 1) to enable archiving/storing emails from several hosting providers that impose a cap on server storage, and 2) to give me somewhat greater control over email storage without requiring that I set up and administrate a mail server from scratch (which I'm not yet prepared to do) (this follows the recommendations at https://ssd.eff.org/tech/email). Thank you!
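
    On question 1: Dovecot 2.x will normally create the Maildir (including new, cur and tmp) on its own the first time a user logs in or mail is delivered through Dovecot, so the manual step can usually be avoided by pointing mail_location at the Maildir path and letting getmail hand messages to Dovecot's LDA instead of writing to the Maildir directly. A sketch, with the paths and the virtual-user layout as assumptions:

        # /etc/dovecot/conf.d/10-mail.conf
        mail_location = maildir:/var/vmail/%d/%n/Maildir

        # getmailrc fragment: deliver via dovecot-lda so the directories get created
        [destination]
        type = MDA_external
        path = /usr/lib/dovecot/dovecot-lda
        arguments = ("-d", "testuser@example.test")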

  • What characters are illegal in Cisco IOS username secret passwords?

    - by Alain O'Dea
    I am using username secret to add users with encrypted passwords to our switches and firewall. I have been battling with the same switches and firewall for a couple of hours trying to get securely generated hard passwords for all admins. Sometimes, the passwords would go into config, but wouldn't work for login. According to the documentation for enable secret a password must not begin with a number and ? has to be entered as Ctrl-V then ? to escape it. I followed that and still got passwords I could not use sometimes. There was no error when I ran username, but the password would be rejected on login by some, but not all of the switches. They are all WS-C2960-48PST-L. The passwords it didn't like contained back ticks "`" (that character under tilde ~ under Esc). The "misbehaving" switches are running: Cisco IOS Software, C2960 Software (C2960-LANBASEK9-M), Version 12.2(50)SE5, RELEASE SOFTWARE (fc1) The "working" switches are running: Cisco IOS Software, C2960 Software (C2960-LANBASEK9-M), Version 12.2(46)SE, RELEASE SOFTWARE (fc2). The "misbehaving" switches are running a newer IOS, so this suggests a regression introduced somewhere between 12.2(46)SE and 12.2(50)SE5. I was unable to find any evidence of this being intentional in the release notes for 12.2(50)SE. I would like to avoid this next time the passwords are changed :) What characters are illegal in Cisco IOS username secret passwords? Thank you for your help :)

  • What is the reason for this DNSSEC validation failure of dnsviz.net?

    - by grifferz
    On trying to resolve dnsviz.net from a host using an Unbound resolver that is configured to use DNSSEC validation, the result is "no servers could be reached":

        $ dig -t soa dnsviz.net

        ; <<>> DiG 9.6-ESV-R4 <<>> -t soa dnsviz.net
        ;; global options: +cmd
        ;; connection timed out; no servers could be reached

    Nothing is logged by Unbound to suggest why this is the case. Here is the /etc/unbound/unbound.conf:

        server:
            verbosity: 1
            interface: 192.168.0.8
            interface: 127.0.0.1
            interface: ::0
            access-control: 0.0.0.0/0 refuse
            access-control: ::0/0 refuse
            access-control: 127.0.0.0/8 allow_snoop
            access-control: 192.168.0.0/16 allow_snoop
            chroot: ""
            auto-trust-anchor-file: "/etc/unbound/root.key"
            val-log-level: 2

        python:

        remote-control:
            control-enable: yes

    If I add:

        module-config: "iterator"

    (thus disabling DNSSEC validation) then I am able to resolve this host normally. The domain and its DNSSEC check out fine according to http://dnscheck.iis.se/ so there must be something wrong with my resolver configuration. What is it and how do I go about debugging that?
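
    A useful first step, sketched below, is to ask the same Unbound instance for the record with checking disabled (+cd); if that succeeds while the normal query times out, the data is reaching the resolver but failing validation, and oversized signed responses (fragmented UDP being dropped somewhere on the path) become the prime suspect:

        # query through the local Unbound instance with DNSSEC checking disabled
        dig @127.0.0.1 dnsviz.net soa +dnssec +cd

        # if large responses look like the culprit, a smaller EDNS buffer in the
        # server: block of unbound.conf is a common mitigation:
        #   edns-buffer-size: 1232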

  • Does ICS modify windows firewall policies in the registry?

    - by insipid
    I had a host machine I wanted to enable ICS on. First I realized that doing so was not possible until I enabled the Windows Firewall. Once I enabled the firewall and set up ICS, I noticed due to group policy I could no longer disable the firewall. Also, any ports I tried to open seemed to be ignored. Although nothing seemed to be configured when I used the mmc snap-in to view local computer policy, when I checked the registry I noticed several policies set there in HKLM (such as disabling AllowLocalPolicyMerge). I was able to remove the policies from the registry and my open ports worked, but they were eventually re-added without my input. The network I am sharing the internet from is an "unsecured" wireless network with an authentication page, is it possible that this is causing those policies to be set? Did ICS set those policies? When you go to the properties of the ICS enabled adapter and go to the ICS settings it takes you to a tab called services where you can add and remove "services running on your network that internet users can access". Is this related to the windows firewall?

  • Two servers, two domains, one ip. mod_proxy beginner

    - by Gutsav
    I run two virtual web servers (both running apache2 on Debian). I have just one external IP, but two domains, and I want a domain going to each of the servers. I've understood that I need a reverse proxy, and I enabled both the mod_proxy and the mod_proxy_http modules on the "primary server". Do I need to enable anything on the "secondary server"? I also understood that I need to write some things in a virtual host file, but what? On the primary server, I have a virtual host file for one of the domains, and some for subdomains. I want domain1.tld to go to the primary server (port 80 is forwarded to it, so that works) and domain2.tld to go to the other server (internal IP 192.168.0.x). No ports need to be forwarded to it, right? So, what do I add, and in which virtual host file? Or a new one? Other questions suggest adding ProxyPass and ProxyPassReverse, but I'm lost anyway, and I just don't understand the Apache documentation. Thanks in advance
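
    Nothing needs to be enabled on the secondary server; it just keeps serving its site normally. On the primary server the reverse proxy goes into a new name-based virtual host for domain2.tld, roughly like the sketch below (the backend address is the internal IP already mentioned):

        # e.g. /etc/apache2/sites-available/domain2.tld, then a2ensite and reload
        <VirtualHost *:80>
            ServerName domain2.tld

            ProxyRequests Off
            ProxyPreserveHost On

            # forward everything for domain2.tld to the secondary server
            ProxyPass        / http://192.168.0.x/
            ProxyPassReverse / http://192.168.0.x/
        </VirtualHost>

    The existing virtual host for domain1.tld stays as it is; Apache picks the right vhost from the Host header, so both domains can share the single forwarded port 80 (on Apache 2.2, name-based hosting also needs a NameVirtualHost *:80 line, which Debian ships in ports.conf by default).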

  • Linux will not activate wireless after device has been re-enabled

    - by XHR
    Using an Eee 900A netbook by Asus. By pressing Fn + F2, I can disable or enable the wireless chip on the netbook; a blue LED indicates the status. I've been able to connect to wireless networks just fine with this netbook. However, if the wireless chip ever becomes disabled, I have to reboot to get my network connection back. This generally happens when suspending. For some reason the LED will be off and I have to hit Fn + F2 for it to light up again. However, after doing so, Linux will not reconnect to the network. It simply changes the wireless status from "wireless is disabled" to "device not ready". Even worse, I've recently had issues with getting the chip enabled at boot, which makes it nearly impossible to get connected. I've searched around on-line but haven't found much of anything useful on this. This happens on all kinds of different distros, including Ubuntu 9.10 Netbook, EeeBuntu 4 beta, Jolicloud and Ubuntu 10.04 Netbook. Edit: I noticed this question is getting a lot of views. To give a quick update, I never did resolve this issue with the given distros. However, I'm currently running Ubuntu 10.10 Netbook Edition and this issue has gone away.
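
    When this happens, the state reported by rfkill is usually the first thing to check; the Fn+F2 hotkey toggles a soft/hard block that the driver sometimes fails to clear. A quick sketch of the commands involved:

        # show every wireless kill switch the kernel knows about
        rfkill list

        # clear a leftover soft block without rebooting
        sudo rfkill unblock all

        # if the device still shows "not ready", find the wifi driver in use
        # and reload it (the module name comes from the lspci output)
        lspci -k | grep -A3 -i network
        sudo modprobe -r <module> && sudo modprobe <module>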

  • Isolating a computer in the network

    - by Karma Soone
    I've got a small network and want to isolate one of the computers from the whole network. My network:

        ADSL Router --> Netgear dg834g <----> Trusted PC 1
                                       <----> Trusted PC 2
                                       <----> Untrusted PC

    I want to isolate this untrusted PC in the network. That means the network should be secure against:

        * ARP poisoning
        * Sniffing
        * The untrusted PC should not see / reach any other computers within the network, but should still be able to go out to the internet.

    Static DHCP and switch usage solve the problem of sniffing/ARP poisoning. I can enable IPSec between computers, but the real problem is sniffing the traffic between the router and one of the trusted computers. Against the untrusted PC simply getting a new IP address (a second IP address on the same computer) I think I need a firewall with port security, and I don't think my ADSL router supports that. To summarise, I'm looking for a hardware firewall/router which can isolate one port from the rest of the network. Could you recommend such hardware, or can I easily accomplish that with my current network?

  • Retrieve a user's Exchange database in powershell

    - by Paul
    Hey everyone, I've scoured the interwebs for a few days now off and on to find this. I am creating a powershell script for email-enabling new users (Exchange 2007). To give you a little background: when we have a new hire, their AD account is created at our off-site helpdesk, but they don't create their email account. I'm trying to automate the process of mail-enabling the user, which involves putting them in the same database as an existing user, disabling IMAP, POP and ActiveSync, and lastly emailing the requester of the ticket. I would like to just get prompted for the new user's name, the user to replicate (mailbox, storage group, database), and the person to email after it's been created. So if someone could just help with a command to retrieve a user's Exchange database in powershell that would be great, but if people also want to help with my hacked-up script please do so as well!!! Here is what I have so far:

        Write-Output "ENTER THE FOLLOWING DETAILS"
        $DName = Read-Host "User Display Name"
        $RUser = Read-Host "Replicate User (Database Grab)"
        ***$RData = #get the Replicate user's mailbox database here***
        $REmail = #either just use a Read-Host "Requester's Email address" or ask for the Requester's name and pipe through their email address by digging for it w/ powershell
        Enable-Mailbox -Identity "$DName" -Database "$RData"
        Send-MailMessage -From "John Doe <[email protected]>" -To (put $REmail here which is the Requester's email) -Subject "Test Person's email account" -Body "Test Person's email account has been setup.`n`n`nJohn Doe`nGeneric Company`nSystems Administrator`nOffice: 123.456.7890`[email protected]" -SmtpServer genericexchange.exchange.com
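
    For the specific question of grabbing the existing user's database, the mailbox object already carries it, so something along these lines (a sketch against the Exchange 2007 cmdlets, assuming the requester also has a mailbox that Get-Mailbox can resolve by name) should slot into the script:

        # database (and storage group) of the user being replicated
        $RData = (Get-Mailbox -Identity $RUser).Database

        # requester's email address, resolved from a name prompt
        $RName  = Read-Host "Requester's Name"
        $REmail = (Get-Mailbox -Identity $RName).PrimarySmtpAddress.ToString()

        # mail-enable the new user in the same database, then turn off the extra protocols
        Enable-Mailbox -Identity $DName -Database $RData
        Set-CASMailbox -Identity $DName -ImapEnabled:$false -PopEnabled:$false -ActiveSyncEnabled:$false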

  • What is the best private cloud storage setup

    - by vdrmrt
    I need to create a private cloud and I'm searching for the best setup. These are my 2 most important requirements:

        1. Disk and system redundancy
        2. Price / GB as low as possible

    The system is going to be used as a backup setup which will receive data 24/7 over SFTP and rsync. High throughput is not that important. I'm planning to use glusterfs and consumer grade 4TB hard-drives. I have worked out 3 possible setups:

        1. 3 servers with 11 4TB HDD
           Set up a replica 3 glusterfs and set up each hard drive as a separate ext4 brick.
           Total capacity: 44TB
           HDD / TB ratio of 0.75 (33 HDD / 44TB)

        2. 2 servers with 11 4TB HDD
           The 11 hard-drives are combined in a RAIDZ3 ZFS storage pool, with a replica 2 gluster setup.
           Total capacity: 32TB (+ zfs compression)
           HDD / TB ratio of 0.68 (22 HDD / 32TB)

        3. 3 servers with 11 4TB consumer hard-drives
           Set up a replica 3 glusterfs and set up each hard-drive as a separate zfs storage pool and export each pool as a brick.
           Total capacity: 32TB (+ zfs compression)
           HDD / TB ratio of 0.68 (22 HDD / 32TB) (Cheapest)

    My remarks and concerns: If a hard drive fails, which setup will recover the quickest? In my opinion setups 1 and 3, because there only the contents of 1 hard-drive need to be copied over the network, instead of setup 2 where the hard-drive needs to be reconstructed by reading the parity of all the other hard-drives in the system. Will a zfs pool on 1 hard-drive give me extra protection against, for example, bit rot? With setups 1 and 3 I can lose 2 systems and still be up and running, while with setup 2 I can only lose 1 system. When I use ZFS I can enable compression, which will give me some extra storage.
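
    For orientation, creating the replica 3 volume in setups 1 and 3 boils down to listing the bricks in replica-set order in a single command, roughly like this (host names and brick paths are invented for the example):

        # one brick per disk, three-way replication across the three servers
        gluster volume create backupvol replica 3 \
            server1:/bricks/disk01 server2:/bricks/disk01 server3:/bricks/disk01 \
            server1:/bricks/disk02 server2:/bricks/disk02 server3:/bricks/disk02
        # ... continue the pattern for the remaining disks, then:
        gluster volume start backupvol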

  • Wireless keeps shutting off in Windows 7

    - by Nathan Adams
    I have Windows 7 Ultimate 32-bit installed on a Dell Latitude XT Tablet and for the life of me I can't figure out this really weird problem. The symptom is that the wireless will disconnect from the AP, and if I tell it to scan again, it says there are no APs in the area. I do have another wireless card in the laptop, and if I disable the first one and enable the second, I am able to get onto the wireless; however, if I want to use the first card again I have to restart. I tried enabling/disabling the device; nothing will kick-start the wireless again in the first card without a restart. I even tried different drivers. So, it seems it is random, but it does occur more often when there is increased network activity (i.e. downloading a large file). The laptop doesn't seem to be overheating. I have tried the following:

        * Under "Change Advanced Power Settings" for the current power profile, I set the "Wireless Adapter Settings" to "Maximum Performance".
        * Under Device Manager, I went to the card in question, went to the Advanced tab, and set the "Power Saving Mode" to "MAX_PSP".

    Both cards I have seem to exhibit the behavior after a while. The models of those cards are:

        * Dell Wireless 1505 Draft 802.11n WLAN Mini-Card
        * Gigabyte GN-WS30N 802.11n mini WLAN Card

    Does anyone have any ideas, or has anyone run into this before?

  • Plesk FTP not working but SFTP and Shell is working

    - by shamittomar
    I am facing a strange problem. The FTP on my Plesk VPS is not working. Whenever I try to connect, the FileZilla FTP client says:

        Status: Resolving address of xxxxxxxxxxxxx.com
        Status: Connecting to xxx.xxx.xxx.xxx:21...
        Status: Connection established, waiting for welcome message...
        Error:  Could not connect to server

    So it's not even getting to the step of asking for a username/password, so it's something else. SFTP on port 22 is working fine. Also, I can successfully get shell access and run commands. But I NEED FTP access too, on port 21. I have searched everywhere but cannot find any setting to enable it. This is the Plesk version info:

        Parallels Plesk Panel version 9.5.2
        Operating system: Linux 2.6.26.8-57.fc8
        CPU: GenuineIntel, Intel(R) Pentium(R) 4 CPU 3.00GHz

    Any help is appreciated. [EDIT]: The firewall is not blocking it. I have checked it on the server and there are absolutely no blocking rules. The firewall states: All incoming/outgoing connections are accepted on FTP. And on the client side (my PC), I can connect to other FTP servers, so this is not an issue in my PC's firewall. Moreover, I cannot even connect to the FTP from online FTP clients like net2ftp.
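
    Since the connection is refused before any banner appears, the first thing worth establishing is whether anything is listening on port 21 at all. On a Plesk 9 box the FTP service is normally ProFTPD started through xinetd, so a rough check from the shell (file names can vary between installs) would be:

        # is anything listening on the FTP port?
        netstat -tlnp | grep ':21'

        # Plesk usually starts proftpd via xinetd; make sure its service file
        # does not say "disable = yes", then restart xinetd
        grep disable /etc/xinetd.d/ftp_psa
        /etc/init.d/xinetd restart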
