Search Results

Search found 45382 results on 1816 pages for 'two factor authentication'.

  • Problem with using PHPMailer for SMTP

    - by Frozenfire
    I have used PHPMailer for SMTP, and there is a problem sending mail, with the error "Mailer Error: The following From address failed: [email protected]". My code is as follows:

        $mail = new PHPMailer();
        $mail->IsSMTP();                                // send via SMTP
        $mail->Host = "localhost;";                     // SMTP servers
        $mail->SMTPAuth = true;                         // turn on SMTP authentication
        $mail->Username = "";                           // SMTP username
        $mail->Password = "";                           // SMTP password
        $mail->From = $email_address;
        $mail->FromName = $email_address;
        $mail->AddAddress($arrStudent[0]["email"]);
        $mail->WordWrap = 50;                           // set word wrap
        $mail->IsHTML(true);                            // send as HTML
        $mail->Subject = "Subject";
        $theData = str_replace("\n", "<BR>", $stuff);
        $mail->Body = $theData;                         // "This is the <b>HTML body</b>";
        $mail->AltBody = $stuff;
        if (!$mail->Send()) {
            $sent = 0;
            echo "Mailer Error: " . $mail->ErrorInfo;
            exit;
        }

    I researched everything, and when I debugged inside class.smtp.php I found that the function get_lines() is returning the error value "550 Authentication failed". The code was working fine previously, so I am wondering how this problem appeared so suddenly. Desperate for some help. Thanks, Biplab
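
    A quick way to narrow down a "550 Authentication failed" response is to turn on PHPMailer's SMTP debug output and confirm that real credentials are actually being sent; the snippet above passes empty Username/Password strings, which a server that enforces authentication will typically reject. A minimal sketch, assuming a PHPMailer 5.x-style API and placeholder host/credentials:

        require 'class.phpmailer.php';                  // adjust to where PHPMailer lives in your project

        $mail = new PHPMailer();
        $mail->IsSMTP();
        $mail->Host      = 'localhost';                 // hypothetical SMTP server
        $mail->SMTPAuth  = true;
        $mail->Username  = 'smtp-user@example.com';     // placeholder credentials; with SMTPAuth = true,
        $mail->Password  = 'smtp-password';             // empty values usually provoke an authentication failure
        $mail->SMTPDebug = 2;                           // echo the full client/server SMTP conversation

        $mail->From     = 'sender@example.com';         // placeholder sender
        $mail->FromName = 'Sender';
        $mail->AddAddress('recipient@example.com');     // placeholder recipient
        $mail->Subject = 'Test';
        $mail->Body    = 'Test message';

        if (!$mail->Send()) {
            echo 'Mailer Error: ' . $mail->ErrorInfo;
        }

    The debug trace shows exactly which SMTP command the server rejects, which usually distinguishes a credentials problem from a relay or policy change on the mail server itself.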

  • PHP Mailer Class - Securing Email Credentials

    - by Alan A
    I am using the PHPMailer class to send email via my scripts. The structure is as follows:

        $mail = new PHPMailer;
        $mail->IsSMTP();                                // Set mailer to use SMTP
        $mail->Host = 'myserver.com';                   // Specify main and backup server
        $mail->SMTPAuth = true;                         // Enable SMTP authentication
        $mail->Username = '[email protected]';        // SMTP username
        $mail->Password = 'user123';                    // SMTP password
        $mail->SMTPSecure = 'pass123';

    It seems to me to be a bit of a security hole having the mailbox credentials in plain view, so I thought I might put them in an external file outside of the web root. My question is how I would then assign these values to the $mail object. I of course know how to use include and/or require... would it simply be a case of:

        $mail->IsSMTP();                                // Set mailer to use SMTP
        $mail->Host = 'myserver.com';                   // Specify main and backup server
        $mail->SMTPAuth = true;                         // Enable SMTP authentication
        include '../locationOutsideWebroot/emailCredentials.php';
        $mail->SMTPSecure = 'pass123';

    Then emailCredentials.php:

        <?php
        $mail->Username = '[email protected]';
        $mail->Password = 'user123';
        ?>

    Would this be sufficient and secure enough? Thanks, Alan.
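
    Including a file from outside the web root does work, because include executes in the calling scope, so $mail is visible inside the included file, and the file can never be fetched over HTTP. A slightly tidier variant, sketched below with hypothetical paths and placeholder values, has the credentials file return an array so it does not depend on a $mail object already existing; note also that PHPMailer's SMTPSecure expects a protocol name such as 'tls' or 'ssl', not a password:

        <?php
        // ../locationOutsideWebroot/emailCredentials.php (hypothetical path, outside the web root)
        return array(
            'username' => 'user@myserver.com',   // placeholder credentials
            'password' => 'user123',
        );

    and then in the mailing script:

        <?php
        require 'class.phpmailer.php';                  // adjust to where PHPMailer lives in your project

        $creds = include '/path/outside/webroot/emailCredentials.php';  // hypothetical absolute path

        $mail = new PHPMailer;
        $mail->IsSMTP();
        $mail->Host       = 'myserver.com';
        $mail->SMTPAuth   = true;
        $mail->Username   = $creds['username'];
        $mail->Password   = $creds['password'];
        $mail->SMTPSecure = 'tls';                      // protocol, not the password

    Keeping the file outside the web root (and readable only by the web server user) is the main protection here: it cannot be requested directly, and a misconfiguration that serves raw PHP source would not expose it either.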

  • Can't authenticate mobile client with node.js (using passport.js)

    - by Pazinio
    I'm trying to build a CRUD application with node.js as a back-end API (Express) and a web app (Backbone) and a mobile client (native Android) as front-ends. (I'm a node.js beginner.) My server solution is based on the great tutorial 'easy-node-authentication'. In my Android app I have managed to get the user's Google token after completing the authentication step with the Google Plus SDK (the mobile client talks to Google Plus directly). I'm trying to understand and find an elegant way to re-use that Google token to authenticate my Android user again through their Google Plus account, to ensure the mobile client holds a real token, and then add a new entry (id, token, email, name) to the users table in my node back-end's DB. The question is: what should my next step be if I want to keep my back-end unchanged? Should I send a GET request with the token as a cookie to /auth/google? Maybe to /auth/google/callback? Another URL? Does this make sense at all? Please note: I'm aware that the 'easy-node-auth' solution mentioned above is based on sessions and cookies. Having said that, I'm still trying to understand if there is a convenient way to integrate both (Android and node), as it works well for my web app and node. Thanks in advance.

  • SpringFramework3.0: How to create interceptors that only apply to requests that map to certain controllers

    - by Fusion2004
    In its simplest form, I want an interceptor that checks session data to see if a user is logged in, and if not, redirects them to the login page. Obviously, I wouldn't want this interceptor to be used on, say, the welcome page or the login page itself. I've seen a design that maps every URL to one of two interceptors, one doing nothing and the other being the actual interceptor you want applied, but this design seems very clunky and limits how easily the application can be extended. It makes sense to me that there should be an annotation-based way of using interceptors, but this doesn't seem to exist. My friend has the idea of actually modifying the handler class so that during each request it checks the Controller the request is mapped to for a new annotation we would create (e.g. @Interceptor("loginInterceptor")). A major point of my thinking is extensibility, because I'd like to later implement similar interceptors for role-based authentication and/or administration authentication. Does it sound like my friend's approach would work for this? Or what is a proper way of going about doing this?

  • Using Read-Only Fields in a C# WebBrowser

    - by TheDramaLlama
    I'm currently using a WebBrowser control in a C# WinForms application, and attempting to control some variability presented with this control. Basically, my users log in to a separate UI provided by my application, which then displays the WebBrowser control, navigates to a predetermined log-in URL, and then auto-fills the username and password fields on that page. However, in order to prevent unpredictable behavior in this WebBrowser control, I want to make these username and password text boxes read-only after they are auto-populated. Essentially, I want the user to see a browser page that has been filled out for them, and that cannot be edited. (This is so that any authentication errors can be handled by my application as opposed to the browser.) The code I'm currently using to populate the text fields and make them read only is as follows:

        webBrowser1.Document.GetElementById("username").InnerText = username;
        webBrowser1.Document.GetElementById("password").InnerText = password;
        webBrowser1.Document.GetElementById("username").Enabled = false;
        webBrowser1.Document.GetElementById("password").Enabled = false;

    Unfortunately, when I try to make the fields read-only, the authentication server acts like the password field was not filled out, and prompts the user to fill it out again after the "Submit" button is clicked. Is this expected behavior, and if so, what other methods can I try to prevent users from changing the credentials that the browser was auto-populated with?

  • How do I refer to a client_deploy.wsdd file that's in WEB-INF?

    - by Paul
    A basic question, but I can't seem to find the answer. I have an Axis-generated web service that also calls another web service (for which the stubs are also generated with Axis). It's deployed in WebLogic 9.2. The called web service requires authentication. I've googled for the code to set up authentication; it requires that I set up a client_deploy.wsdd file, which I've done and added to WEB-INF. I need to specify this file to Axis. There seem to be several ways of doing this, including:

        System.setProperty("axis.ClientConfigFile", "client_deploy.wsdd");

    or

        EngineConfiguration config = new FileProvider("client_deploy.wsdd");

    but these aren't working for me. Is the issue the path for the client_deploy.wsdd file? How do I refer to a file that's at the top level of the WEB-INF directory? Googling tells me how to access it as a stream, but I don't want that; I need to pass a file name to these functions... Please point out the obvious thing that I have missed.

  • IIS7.5 and MVC 2 : Implementing HTTP(S) security

    - by Program.X
    This is my first ASP.NET MVC application, and my first on an IIS 7.x installation where I have to do anything over and above the standard setup. I need to enforce Windows authentication on the /Index and /feeds/xxx.svc pages/services. In ASP.NET Web Forms, I would apply the Windows permissions on the files and remove Anonymous authentication in IIS 6. This needs to work over HTTP/S, but don't worry about that, that's in hand. What happens in MVC/IIS 7? I have tried modifying the permissions on the /Index.aspx view, which seems to block access. It asks me for a username/password, but does not grant access when I enter a valid username/password. Pressing Escape gives me an exception "Access to the path 'E:\dev\xxx\xxx.ConsultantRegistration.Web.Admin\Views\ConsultantRegistration\index.aspx' is denied.", which does get sent as a 401. So although the username/password does exist on the Index.aspx view, I can't use those credentials to access said view. I have in my web.config: What am I missing?

  • Getting the Windows username with JavaScript

    - by jbkkd
    I have a site which is built in ASP.NET and C#; let's call it webapp. It uses a forms system to log into it, and that cannot be changed easily. I got a request to change the login to some kind of Windows authentication. I'll explain. Our Windows login uses Active Directory for users to log into their Windows account; their login name is sXXXXXXX, where the Xs are numbers. In my webapp, I want to take the user's numbers from their Active Directory login and check if they exist in the webapp database. If they exist, the user is logged in automatically; if they don't, the user is referred to the regular login page for the webapp system which is currently in use. I tried changing my IIS to disable anonymous login and enable Windows authentication, thereby making the user's browser send its currently logged-in user name to my webapp. I changed the web.config as well from "Forms" to "Windows", which made my whole webapp obsolete as the whole forms system did not work. My question is this: is there a different way for the browser to send only the username to my webapp? I thought maybe JavaScript; I just don't know how to implement that, if it's even possible. I know it's not very secure, but this whole platform and system is built outside the internet, on a private network.

  • How to set up WebLogic 10.3.3 security for JAX-WS web services?

    - by Roman Kagan
    I have quite a simple task to accomplish: I have to set up security for web services (basic authentication, with a user id and password hardcoded in WLES). I set up web.xml (see the code fragment below), but I am having a tough time configuring WebLogic. I added an IdentityAssertionAuthenticator authentication provider, set it as Required, modified the DefaultAuthenticator as Optional, and then went to the deployed application's security settings and set the role to "thisIsUser". At some point it worked, but not anymore (I redeployed the war file and set the web service security the same way, but to no avail). I'd greatly appreciate all your help.

        <security-constraint>
            <display-name>SecurityConstraint</display-name>
            <web-resource-collection>
                <web-resource-name>ABC</web-resource-name>
                <url-pattern>/ABC</url-pattern>
            </web-resource-collection>
            <auth-constraint>
                <role-name>thisIsUser</role-name>
            </auth-constraint>
        </security-constraint>

  • Is OpenID too complicated?

    - by John Leidegren
    I'm beginning to seriously doubt the OpenID community, despite the fact that it works. I'm currently in the process of evaluating OpenID as an authentication service for 'this' site, and while the promises are great, I just can't get it to work, and I'm really lost. I ask the SO community to help me out here. Give me answers and show me examples so I can leverage this in the way it was meant to be. My scenario is very typical. I want to authenticate users through a specific Google Apps domain. If you have access to this Google Apps domain, then you have access to my web application. Where I get lost is all the prerequisites and dependencies involved. What is XRD? What is Yadis? Why do I need XRD and Yadis? What do I need to do to deploy OpenID authentication on my website? Also, this is really important to me: when I log in to SO, I use my Google account. When I click the login button I'm presented with a confirmation page where I'm granting SO the right to use my Google account credentials. Somehow, Google knows that it's "Stackoverflow.com" that's asking me if it's okay to log in, and I wish to know what manner of control I have over this little text. I intend to deploy OpenID on several different domains, but I would prefer that they all work without having to be individually configured with special parameters, such as secret API keys and whatnot. However, I don't know for sure whether that is a prerequisite of OpenID itself or of the Federated Login API that Google provides.

  • Joomla User Login Question

    - by user277127
    I would like to enable users of my existing web app to login to Joomla with the credentials already stored in my web app's database. By using the Joomla 1.5 authentication plugin system -- http://docs.joomla.org/Tutorial:Creating_an_Authentication_Plugin_for_Joomla_1.5 -- I would like to bypass the Joomla registration process and bypass creating users in the Joomla database altogether. My thought had been that I could simply populate a User object, which would be stored in the Session, and that this would replace the need to store a user in the Joomla database. After looking through the code surrounding user management in Joomla, it seems like any time you interact with the User object, the database is being queried. It therefore seems like my initial idea won't work. Is that right? It looks like, in order to achieve the effect I want, I will have to actually register a user from within the authentication plugin at the time they first login. This is not ideal, so before I go forward with it, I wanted to check with Joomla developers whether it is possible to do what I described above. Thanks in advance -- I am new to Joomla and greatly appreciate your help!
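
    For reference, the 1.5-style authentication plugin entry point from the linked tutorial looks roughly like the sketch below. The class, file, and table names here are hypothetical, and the constants and $response fields are the ones the stock 1.5 example plugins use, so treat this as an illustration of where the external-credentials check would go rather than a drop-in solution; the plugin only decides whether the credentials are valid, which is separate from the question of whether Joomla still needs its own user row afterwards.

        <?php
        // plugins/authentication/webappauth.php -- hypothetical plugin file name
        defined('_JEXEC') or die('Restricted access');

        jimport('joomla.plugin.plugin');

        class plgAuthenticationWebappauth extends JPlugin
        {
            // Joomla 1.5 fires this event for every login attempt.
            function onAuthenticate($credentials, $options, &$response)
            {
                $db = JFactory::getDBO();

                // Look the user up in the existing web app's table
                // (table and column names are placeholders).
                $db->setQuery(
                    'SELECT id, email, name FROM #__webapp_users'
                    . ' WHERE username = ' . $db->Quote($credentials['username'])
                );
                $user = $db->loadObject();

                if ($user /* and the stored password hash matches $credentials['password'] */) {
                    $response->status        = JAUTHENTICATE_STATUS_SUCCESS;
                    $response->email         = $user->email;
                    $response->fullname      = $user->name;
                    $response->error_message = '';
                } else {
                    $response->status        = JAUTHENTICATE_STATUS_FAILURE;
                    $response->error_message = 'Invalid username or password';
                }
            }
        }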

  • Why does LogonUser place user profiles in c:\users of the server?

    - by Lalit_M
    We have developed an ASP.NET web application and implemented a custom authentication solution using Active Directory as the credentials store. Our front-end application uses a normal login form to capture the user name and password and leverages the Win32 LogonUser method to authenticate the user's credentials. When we call LogonUser, we use LOGON32_LOGON_NETWORK as the logon type. The issue we have found is that user profile folders are being created under the C:\Users folder of the web server. The folder seems to be created when a user who has never logged on before logs in for the first time. As the number of new users logging into the application grows, disk space is shrinking due to the large number of new user folders getting created. Has anyone seen this behavior with the Win32 LogonUser method? Does anyone know how to disable it? I have tried LOGON32_LOGON_BATCH, but it gave error 1385 when authenticating the user. I need one of the following: 1) a way to stop the folder generation, or 2) the parameter I need to pass to make this work. Thanks

  • SBS 2008 BPA Warnings After Migration From SBS 2003

    - by Nicholas Piasecki
    We just finished a we-know-just-enough-to-be-dangerous migration from SBS 2003 to SBS 2008, and things seem to have gone relatively smoothly. After running the SBS 2008 Best Practices Analyzer on the destination server, we've got three warning messages, and I can't tell if they're important or not. First, the easy one:

        SMTP Port (TCP 25 Status): The Edgetransport.exe process should listen on SMTP port 25, but that port is owned by the  process.

    I don't think that this one is a big deal--e-mail is flowing through the SMTP connector. Since there are two spaces between "the" and "process," I'm assuming that for some reason BPA just couldn't figure out the owning process name and this is just some sloppy programming when displaying the message. (Indeed, on subsequent runs of the BPA this message goes away, and other times it comes back.) Now, two more scary sounding ones:

        No DNS name server records: There are no DNS name server (NS) resource records in the _msdcs sub-domain in the forward lookup zone for Windows SBS 2008.

    and, similarly,

        No DNS name server records: There are no DNS name server (NS) resource records in the _msdcs zone for Windows SBS 2008.

    Now for these two, everything appears to be functioning correctly--but I'm assuming this is a weird state as a result of the SBS 2003 to 2008 migration. Can anyone provide any pointers on how to fix it, or whether or not it can be safely ignored? Thanks!

  • Home Networking Questions

    - by Eddie Parker
    Hello: I'm looking to wire my home with CAT-X (where X is probably going to be CAT-6, unless someone can convince me differently. ;) ). I'm seeking advice on what equipment I'll need for the job, and any things I should watch out for. It's a two-story half-duplex I'll be wiring, roughly about 1800 sq ft. Here's what I believe I need so far:

      - Bulk CAT-6 Ethernet cabling, CM rated
      - Gigabit switch(es?)
      - Patch panel
      - Equipment for cutting and terminating wire, fishing it through walls, etc.
      - Wall outlet covers, etc.

    Questions I have:

      - Does the MHz rating on the Ethernet cable matter? If so, why?
      - I have two gigabit switches currently, an 8-port and a 5-port. Should I buy one massive switch to cover all the connections I need, or should I just chain the two together and buy a switch for however many other connections I need?
      - Do I really need a patch panel? I understand it keeps the cables looking cleaner than coming out of a hole in the wall, but is there some other product I can use, perhaps combining a switch with a patch panel or some such? Ideally I'll have all this running out of a relatively small closet, so the fewer (or smaller) components the better.

    Any advice, links, or suggested products to use/avoid would be appreciated!

  • Google Apps for Domains, Multiple Domains

    - by belliez
    I have a primary Google Apps for Domains account which I use for my personal email, calendar, docs, etc., and it is great. I also receive my POP3 company email in this account via Settings - Get mail from other accounts. Due to spam I want to make use of Gmail's servers for my company email, and I have two options:

      [1] Add my second domain as a domain alias
      [2] Create a new Apps for Domains account

    If I do [1], do I access (send and receive) my company email as if it was a separate account, or is it merged into my primary domain? I want the two separated. If I do [2], can I share my contacts and calendar between the two? I also have Act! contact manager, which syncs to my primary domain, and it is getting messy now with personal and work contacts being changed and synced to my Act CM software. I want to try to separate my personal and work contacts (but make the work ones available in my primary domain). Hope this makes sense! Your suggestions are gratefully accepted. Thank you

  • Dual NVidia graphics cards in Ubuntu / xorg.conf mania

    - by John Zwinck
    I have two NVidia graphics cards:

      - Quadro NVS 295 (PCI Express, dual DisplayPort outputs)
      - GeForce FX 5200 (PCI, DVI and VGA outputs)

    I have three identical monitors, two on DisplayPort and one on DVI. I'm on Ubuntu Hardy (and cannot currently dist-upgrade for separate reasons). I use the "nvidia" driver. What's new is the GeForce card and the third monitor. I currently have the dual DisplayPort monitors working fine. Here are the display-related parts of my xorg.conf:

        Section "ServerLayout"
            Identifier "Default Layout"
            Screen "PCI-Express Screen" 0 0
            # adding this makes X fail to start: Screen "PCI Screen" 0
            Inputdevice "Generic Keyboard"
            Inputdevice "Configured Mouse"
        EndSection

        Section "Module"
            Load "glx" # not sure why/if this is needed
        EndSection

        Section "Monitor"
            Identifier "DELL 2408WFP"
            Option "DPMS"
        EndSection

        Section "Device"
            Identifier "NVIDIA Quadro NVS 295"
            Driver "nvidia"
            Option "RenderAccel" "true"
            Screen 0
            BusID "PCI:2:0:0"
        EndSection

        Section "Device"
            Identifier "NVIDIA GeForce FX 5200"
            Driver "nvidia"
            Option "RenderAccel" "true"
            Screen 1
            BusID "PCI:6:4:0"
        EndSection

        Section "Screen"
            Identifier "PCI-Express Screen"
            Device "NVIDIA Quadro NVS 295"
            Monitor "DELL 2408WFP"
            Defaultdepth 24
            Option "TwinView" "True"
            Option "UseEdidFreqs" "True"
            Option "MetaModes" "1920x1200 +0+1200, 1920x1200 +0+0"
        EndSection

        Section "Screen"
            Identifier "PCI Screen"
            Device "NVIDIA GeForce FX 5200"
            Monitor "DELL 2408WFP"
            Defaultdepth 24
            Option "TwinView" "True"
            Option "UseEdidFreqs" "True"
            Option "MetaModes" "1920x1200 +0+0"
        EndSection

    I use nvidia-settings to configure my monitors, and it does not show the second GPU. lspci, though, shows:

        02:00.0 VGA compatible controller: nVidia Corporation Unknown device 06fd
        06:04.0 VGA compatible controller: nVidia Corporation NV34 [GeForce FX 5200]

    Which is where I got the BusID settings for the two devices (when I just had one device, I didn't have any BusID listed... and adding the BusID hasn't broken anything). What am I missing? How can I make nvidia-settings show my second GPU so I can then configure its monitor?

  • SATA drives or chipset throwing DRDY ERR and ICRC ABRT

    - by Matt
    I have an SD-VIA-1A2S PCI card with 2 SATA ports (and one ATA-133 port that isn't used). Two new Western Digital Caviar Green drives (WD10EARS 1TB) throw repeated errors in kern.log (date/time/host info removed for brevity):

        [ 7.376475] ata2.00: exception Emask 0x12 SAct 0x0 SErr 0x1000500 action 0x6
        [ 7.376480] ata2.00: BMDMA stat 0x5
        [ 7.376483] ata2: SError: { UnrecovData Proto TrStaTrns }
        [ 7.376489] ata2.00: cmd c8/00:40:20:00:00/00:00:00:00:00/e0 tag 0 dma 32768 in
        [ 7.376490]          res 51/84:2f:20:00:00/00:00:00:00:00/e0 Emask 0x12 (ATA bus error)
        [ 7.376493] ata2.00: status: { DRDY ERR }
        [ 7.376495] ata2.00: error: { ICRC ABRT }
        [ 7.376504] ata2: hard resetting link

    I'm using Ubuntu 9.04 - 2.6.28-18-generic, though I have tried live CDs of Ubuntu 9.10, Fedora 12 and OpenSUSE 11.2 - all running various 2.6.31 kernels - and all received the same error. Based on testing these drives and this card in two other machines, in various combinations of connecting the drives directly to the motherboard or to the add-in card, I'm relatively convinced that it's the VIA chipset that is the problem. Another computer that also has an onboard VIA SATA chipset (like the add-in card) produces the same errors when the drives are connected directly to that motherboard. I have been able to verify that the drives are perfectly good, and I tried everything I can think of in terms of swapping cables, making sure the PSU isn't overloaded, etc. The error happens once or twice on boot, once or twice after using fdisk on the drive, and constantly when attempting to sync a new mdadm RAID 1 array created on the two drives. Any thoughts on where to go from here, driver/kernel wise? I'm completely open to buying a new PCI add-in card if someone can recommend one with 2 internal SATA ports that works well in Debian/Ubuntu. Thanks!

  • 16-bit MS-DOS Subsystem: csrss.exe

    - by Wesley
    Hi all, I just booted up my Samsung N120 netbook (with Windows XP Home SP3) and a dialog box came up with a command prompt window behind it. The dialog box is titled "16 bit MS-DOS Subsystem" and the message is as follows:

        C:\DOCUME~1\SAMSUNG\csrss.exe
        The NTVDM CPU has encountered an illegal instruction.
        CS:0544 IP:0117 OP:63 00 64 00 34
        Choose 'Close' to terminate the application.

    This only started on my most recent boot-up. One thing to note is that when I downloaded the Dropbox installer and opened it up, Panda Cloud Antivirus detected a suspicious file, which was csrss.exe, and "neutralized" it. However, an actual virus or trojan was not detected immediately before the file was detected and neutralized. Just under two weeks ago, a trojan and two viruses were detected for some odd reason. (I only went to websites I knew, and I do not torrent or browse adult sites.) Anyhow, the two viruses came up in temporary files and the trojan was "neutralized". Anyways, the main question is: how can I repair the csrss.exe file so that Windows XP starts up properly? A screenshot can be posted upon request. Thanks in advance!

  • bind9 "error sending response: host unreachable"

    - by wolfgangsz
    I have a number of DNS servers, all running bind9 (9.5.1, to be specific) under Fedora. 4 of them are slaves, fed by a common master for our public DNS. These are all located on the public gateways of our various offices. One of them has tons of messages in its log files similar to these:

        Jul 21 17:26:18 gateway named[3487]: client 10.171.3.8#52500: view internal: error sending response: host unreachable

    I wonder where that comes from. The firewall is open on port 53 between the two machines (10.171.3.8 is an internal DNS server located on a Windows domain controller). The internal domains do NOT list the gateway as a name server (so there should not be any attempts at replicating the domains), and the gateway does not handle any internal DNS. The clients in these messages vary between the two domain controllers on the internal network and a third internal name server (running bind9 on Debian in a different segment of the network). Any pointers are highly welcome.

    In response to the first reply: The issue with this really is that tcpdump doesn't show any problems. Here is an extract from "tcpdump -i any port 53":

        09:13:38.283308 IP valine.aminocom.com.61815 > ns-pri.ripe.net.domain: 14075 PTR? 166.225.58.95.in-addr.arpa. (44)
        09:13:42.007410 IP gateway-eng.aminocom.com.37047 > alanine.aminocom.com.domain: 35410+ PTR? 12.3.172.10.in-addr.arpa. (42)

    At the same time, the DNS log shows:

        Jul 22 09:13:38 gateway named[3487]: client 10.171.3.6#61300: view internal: error sending response: host unreachable
        Jul 22 09:13:40 gateway named[3487]: client 10.172.3.12#56230: view internal: error sending response: host unreachable
        Jul 22 09:13:40 gateway named[3487]: client 10.171.3.8#55221: view internal: error sending response: host unreachable
        Jul 22 09:13:49 gateway named[3487]: client 10.171.3.8#51342: view internal: error sending response: host unreachable

    So clearly at 09:13:40 there were two unsuccessful attempts to connect to internal machines (10.172.3.12 and 10.171.3.8, both of which are DNS servers), but nothing in the tcpdump output.

  • Poor NFS Performance: OpenFiler

    - by Safin09
    Good day everyone, I have an issue with OpenFiler, a Linux-based operating system that converts a computer system into a SAN/NAS appliance. Here is the problem. In my environment we have two NetApp StoreVault 500 appliances that I normally perform backups to over an NFS share. There are two backup cron jobs that use ghettoVCB to back up two groups of VMs. One group is a pool of 3 VMs; this takes 13 minutes to complete. A second job backs up a pool of 5 VMs to the 2nd StoreVault appliance and takes 2 hours. We then installed OpenFiler on an old server that has 2-core Xeon processors, with software RAID 5 in place. When performing the same backups to an OpenFiler NFS share, the first backup job, which normally takes 13 minutes, takes around 4 hours, and the second backup job, which normally takes 2 hours, takes almost 10 hours to complete. This is unacceptable, especially considering the strain placed on the host ESX server. I assumed that the CPU overhead of the software RAID 5 explained the long backup times. I then installed OpenFiler on a 2nd server, an IBM x306 machine which has a P4 Intel processor, this time with no software RAID or any RAID at all: a single 750GB hard drive that contains the OS, with the rest of the disk used to back up VMs to an NFS share. I performed the first backup job of the pool of 3 VMs, and this time the backup took 1 and 1/2 hours to complete instead of 13 minutes! Is OpenFiler simply poor at being an NFS server? Has anyone else had these issues with OpenFiler?

  • How To Set Up A Loadbalanced High-Availability Apache Cluster On Windows

    - by bReAd
    I am setting up a two-node Apache web server cluster that provides high availability. In front of the Apache cluster we create a load balancer that splits up incoming requests between the two Apache nodes. Because we do not want the load balancer to become another "single point of failure", we must provide high availability for the load balancer, too. Therefore our load balancer will in fact consist of two load balancer nodes that monitor each other using heartbeat, and if one load balancer fails, the other takes over silently. The following setup is proposed:

      - Apache node 1: webserver1.example.com (webserver1) – IP address: 192.168.0.101; Apache document root: /var/www
      - Apache node 2: webserver2.example.com (webserver2) – IP address: 192.168.0.102; Apache document root: /var/www
      - Load balancer node 1: loadb1.example.com (loadb1) – IP address: 192.168.0.103
      - Load balancer node 2: loadb2.example.com (loadb2) – IP address: 192.168.0.104
      - Virtual IP address: 192.168.0.105 (used for incoming requests)

    Currently there are many solutions for Linux machines, but there aren't any for Windows, and I've spent a long time searching for solutions on the Windows platform. How do I create the virtual IP in Windows, perform the monitoring, and make the load balancer listen on the virtual IP address?

  • Windows file locks allowing multiple users to write to open file over network

    - by JPbuntu
    I have 6 Windows computers (XP, Vista, 7) that need to access a Samba share (Ubuntu 12.04). I am trying to make it so only one client can open a file at a given time. I thought this was pretty standard behavior of file locks, but I can't get it to work. The way it is right now, a file can be open by two users and changed and saved by either one of them; the last file saved overwrites whatever changes the other user made. At first I thought this was a Samba configuration problem, but I get this behavior even between two Windows machines. So far I have only tested:

        Windows Xp  Windows Vista
        Windows XP  Samba << Windows Vista

    and both give the same behavior. When I tested the Samba configuration, I had set strict locking = yes and got errors logged like this:

        close_remove_share_mode: Could not get share mode lock for file _prod/part_number_list_COPY.xlsx

    Eventually all of the files are going to be moved onto the Samba share, so that is the configuration I am most concerned about fixing. Any ideas? Thanks in advance.

    EDIT: I tested an Excel file again, and it is now working properly in both of the above-mentioned cases; I am also no longer getting the above-mentioned error. I don't know what happened; perhaps a restart fixed it? (It also works with strict locking = no.) I still need to find a solution for the CAD/CAM files we use, though; the software is Vector and it does not seem to use file locks. Is there any software that I can use to manage these files, so two people can't open/edit them at a time? Maybe a Windows application that forces file locks? Or a dirt-simple version control system? (The only ones I have seen are too complicated for our needs.)

  • VMware virtual machine network devices malfunctioning

    - by sheepz
    I'm running Ubuntu 10.04 LTS and VMware Workstation 7.0.1 build-227600. The virtual machine I'm running in VMware is a custom distribution built on Debian Linux version 3.1. I'm still pretty much a beginner with UNIX administration. After having messed around with the VM (I changed only the name of the folder in which the .vmx was situated, renamed the .vmx and the other .v* files accordingly, and updated the configuration in the .vmx file to match), the network devices on the virtual machine do not work anymore. The virtual machine is used for securely sending messages. The virtual machine: as far as I know, a perl script called proxy-gen-ifalias is responsible for properly setting up the two virtual network devices eth0 and eth1. The virtual machine comes with a GUI in which I have set up two Ethernet network devices, one internal, the other external. Now, after having messed around with this, the UI gives me this error message:

        perl proxy-gen-ifalias eth0 /etc/modprobe.d/alias-eth0 /sbin/update-modules
        perl proxy-gen-ifalias eth1 /etc/modprobe.d/alias-eth1 /sbin/update-modules
        ifdown eth0
        ifdown: interface eth0 not configured
        ifdown eth1
        ifdown: interface eth1 not configured
        perl proxy-gen-netcfg /etc/network/interfaces
        ifup eth0
        SICCSIFADDR: No such device
        eth0: ERROR while getting interface flags: No such device
        SIOCSIFNETMASK: No such device
        eth0: ERROR while getting interface flags: No such device
        Failed to bring up eth0.
        ifconfig eth0
        eth0: error fetching interface information: Device not found
        make: *** [/etc/network/interfaces] Error 1

    Here are the contents of the two perl files referred to in the message: paste.pocoo.org/show/2AMzAYhoCRZqlGY7wUFk/ proxy-gen-netcfg

  • Windows 7 - "A disk read error occurred. Press Ctrl + Alt + Del to restart"

    - by Senthil
    Problem: When I switch on my PC, after the BIOS POST a cursor blinks for about 5 seconds and then I get this error message: "A disk read error occurred. Press Ctrl + Alt + Del to restart." I am able to go into the BIOS, but the Windows loader doesn't even start. This message is shown after my motherboard logo comes and goes.

    Symptoms: I DID notice my system freezing for minutes at a time over the past two days. Also, in the past two days, it stopped halfway through the Windows booting process, and I had to do a hard reset a couple of times to get it working. But since this morning, I only get this error message.

    Configuration:

      Operating System: Windows 7 Ultimate 32-bit only
      Hard disk: 1 physical disk - 80GB SATA
      Partitions: Two (2) - C: and D:
      File System: NTFS
      No drive encryption or compression is turned on.

    After searching on the net, I have found people mentioning these possible causes:

      - The hard disk is physically failing
      - Corrupt MBR
      - Bad sector

    I am planning to buy a new hard disk, install Windows on it and continue, but I need the data from the old hard disk. The data I want is on the D: drive, outside any Windows user folder, and is not encrypted, compressed or protected in any way. I think if someone/something can get the disk working again and knows NTFS, the data can hopefully be read. What steps should I follow to recover files from the defective disk?

    Update: I bought a new disk, installed Windows on it and added the defective one as a slave. I was then able to read the data from the defective hard disk. Though chkdsk found lots of errors, the files I wanted were not affected and I got them back :) I am not using that hard disk anymore, though it seems to be working at the moment.
