Search Results



  • Diagnose remote desktop freezes in Windows 7 when no BSOD?

    - by Paul Smith
    Okay, I'm getting no joy from Asus or Microsoft on this, so I'm hoping for some clues on how to narrow down the cause. I have very frequent OS freezes, always and only when running the Remote Desktop Client (mstsc) in Windows 7 x64. I never get a bluescreen, and there is never a minidump. The display and input just freeze -- no keyboard, no mouse, and any sound simply repeats the last buffer. So far, I can't find a way to trap the hang given that there's no bluescreen; the advanced startup and recovery settings for system failure are "Write an event" checked, "Automatically restart" checked, and "Kernel memory dump". I've updated to the latest BIOS and tried a few different graphics drivers, both generic and ATI. I've also tried disabling Aero and everything about the remote desktop experience (incrementally unchecked every box in the mstsc - Options - Experience tab), and even disabled/unplugged the external monitor to make sure it wasn't a dual-monitor issue. My specs are: Asus G73jh notebook, 8GB RAM, ATI Mobility Radeon HD 5800 Series graphics (recently tried driver versions 8.791.0.0 and 8.801.0.0), American Megatrends G73jh.211 BIOS (7/27/2010), Windows 7 Home Premium x64. Windows Memory Diagnostic passed all of the following at least 3 times with no errors: MATS+, INVC, LRAND, Stride6, WMATS+, WINVC. This notebook is better than most at removing heat (laudable vent design), so I'm not inclined to suspect thermal causes (especially since running 1080p video for hours has never caused a freeze, but mstsc does, reliably, within 5 minutes to an hour). This did seem to start happening after a Windows Update, but I've since reverted every patch applied since a week before the first occurrence, with no joy. (And I'd only had the PC for a couple of weeks before that, so it could have been chance plus less actual time spent remoting at the beginning.) I'm at my wit's end, and I bought this laptop primarily as a remote terminal client (go figure, right?). Any ideas on how to identify the cause of this? Thanks!
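
    One hedged way to trap a hang that never bluescreens, assuming the kernel is still servicing keyboard interrupts during the freeze: enable the keyboard-initiated crash feature, force a bugcheck while the machine is hung, and analyze the resulting kernel dump in WinDbg. For a PS/2 (internal laptop) keyboard:

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\i8042prt\Parameters" /v CrashOnCtrlScroll /t REG_DWORD /d 1 /f
        :: for USB keyboards the key lives under kbdhid instead:
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\kbdhid\Parameters" /v CrashOnCtrlScroll /t REG_DWORD /d 1 /f

    After a reboot, holding the right Ctrl key and pressing Scroll Lock twice during a freeze should raise a MANUALLY_INITIATED_CRASH bugcheck and write the kernel memory dump you already have configured.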

    Read the article

  • Server 2008 Print Redirection is failing but only on 16Bit apps

    - by ian
    I'm the main programmer for SoEasyAccounting, and we are installing to Server 2008 Standard Service Pack 1. We install to 2003 with no problems. It's important to understand that the print failure only happens in certain circumstances. Note: we use a standard Windows printer selection box to choose the printer. Terms used: Superbase = a program language that uses ntvdm.exe (the Windows process hosting 16-bit apps); Local Printer = printing to a driver loaded onto the Server 2008 machine; Redirected Printer = printing to an automatically established remote printer through an RDP connection. Printing scenarios:
    Server 2008 - 1: print from Notepad to a Redirected Printer = works
    Server 2008 - 2: print from Superbase to a Local Printer = works
    Server 2008 - 3: print from Superbase to a Redirected Printer = fails
    Server 2003 - 4: print from Superbase to a Redirected Printer = works
    Results: the print produces a "Local Downlevel Document" entry in the driver's print queue but no actual output, and Superbase reports that the "Print command failed". Event Viewer shows nothing related to the print failure. Things I have tried: i. switching Easy Print on/off; ii. loading a copy of the redirected driver on the server. Any help greatly appreciated -- so far 2 days spent trying to resolve this, and here goes my weekend :( unless someone has an idea :)

    Read the article

  • What is the reason behind a Windows service stopping - LAN problems or other issues?

    - by Steve
    I have a Windows service named Trunk that stopped one day, and I just want to know the reason behind it. This is the relevant entry in the logs:

        Nov 15 17:54:04.318 :Trunk-1516:Trunk:handle_control_event:Received CTRL_LOGOFF_EVENT, ignore it
        Nov 25 15:54:52.157 :Trunk-1516:Trunk:ERROR - Process Restart Count (5) Exceeded for:C:\Program Files\secon\11.1.4\bin\vmd
        Nov 25 15:54:52.157 :Trunk-1516:Trunk:Stopping Trunk ...
        Nov 25 15:54:52.314 :Trunk-1516:Trunk:Shutting down, signaled C:\Program F
        Nov 20 15:54:20.345 :SCBridge.RegisterBridge:Exception in method: ScUtility.ScCommandException (0xa08990002): Exception from HRESULT: 0xa08990002 Supplemental Information: None available.
        at ScServer.ScServiceProcessorRegistryManager.Attach(String serviceProcessor, ScClientInformation clientInfo, FORCE_ATTACH_SPEC forceAttachToMaster)
        at ScServer.ScServiceProcessorRegistry.Attach(String serviceProcessor, Object clientInfo)
        at ScServer.ScServiceProcessorRegistry.Attach(String serviceProcessor)
        at ServerControlInterface.SCBridge.RegisterBridge(String SPName)
        for system APOLLOSP0 attempting to attach and register with the Bridge

    The service is registered with a specific account, so I thought the user logging off from the machine might be the reason, or some LAN disconnection problem. But having taken another look at the entry above, we seem to have a constant failure being generated in vmd, which causes Trunk to detect that vmd requires a restart. Most of the time it works OK and the restart count is anything up to 4. In this case the Trunk log confirms that the restart count is 5 and so is considered exceeded. Presumably this triggers the termination of the other services, and Trunk is actually doing its job. So, could this just be a timing issue, meaning we need to increase the tolerance level (i.e. the restart count), or do we need to address the 0xa08990002 error in vmd?

    Read the article

  • Back up an existing Linux server to a VirtualBox virtual machine

    - by user146526
    I have some servers and VPSs with many companies across the world. I want to back them up locally. I have some backup solutions enabled to remote hosts, but I want to have a local backup on a computer at home. What I am thinking is: 1) Create a VirtualBox virtual machine and install the same Linux version as the server. 2) Use rsync to back up the server to the local VirtualBox machine (something like rsync -av --delete --progress --exclude '/dev/' --exclude '/proc/' root@server_ip:// / ). 3) Repeat the command every few days to update the files. 4) In case of a hard disk failure, or any other bad event, reverse the rsync command, get the files back, and continue my business. I tried it with 2 OpenVZ VPSs, where one was a backup of the other. I also tried to transfer a normal Linux server host to an OpenVZ machine, and it worked great. That way looks pretty clean and easy to me; this is the kind of solution I am looking for. However, I need to be sure that this will work before I rely on it. The question is: will that work OK? Does anyone see any problem with it? Do you have any other suggestions? Thanks
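
    A hedged refinement of step 2's command, extending the exclude list so pseudo-filesystems and mounts don't pollute the copy (host and paths are placeholders; run it from inside the VirtualBox guest):

        rsync -aAXv --delete --numeric-ids \
            --exclude={'/dev/*','/proc/*','/sys/*','/run/*','/tmp/*','/mnt/*','/media/*','/lost+found'} \
            root@server_ip:/ /

    -aAX also carries ACLs and extended attributes that plain -av drops, and reversing source and destination gives you step 4's restore.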

    Read the article

  • Some of my keys are automatically being pressed along with other keys

    - by Santosh
    History: the last time my computer shut down, it was a power failure. Now some keys are automatically being pressed along with other keys when I type. The last thing I did to the keyboard settings was add a keyboard layout (on Ubuntu). What is happening:
    - pressing c writes xc
    - s gives me sd
    - d gives me sd
    - e gives me we
    - 2 gives me 23 (so when I want @ it gives me @#)
    - 3 gives me 23
    - pressing Caps Lock gives me F3, and vice versa
    All other keys are either working fine or I don't use them. I have two operating systems, Ubuntu and Windows. I use Windows far less; I found this problem on Ubuntu, but as soon as I logged in to Windows (to check), I found that Windows has the same problem. Effects on my life: this starts from the time of login, so I even have problems typing my password; whenever I try to save a webpage, it is bookmarked automatically; whenever I copy, it is cut automatically; I have to spend more than half my time correcting what I have typed. Note: Typing thisd quwesdtion wasd rweally a big pain to mwe.

    Read the article

  • How to set up RAID 1 with Intel RST on an existing Windows 7 system?

    - by instcode
    I'd like to set up RAID-1 using Intel Rapid Storage Technology on my Windows 7 64-bit system. I have a 1TB SATA HDD with the Windows 7 system installed on the first primary partition (leftmost, ~200GB); the rest of this HDD is unallocated (~800GB). I bought another 2TB SATA drive, created a primary partition (leftmost, ~500GB) and filled it with my data; the rest of this HDD is unallocated (~1.5TB). A quick disk layout (XXX is the unallocated region):

        HDD1 (1TB): [ 200GB C:\ SYSTEM  | XXXXXXXXXXXX ]
        HDD2 (2TB): [ 500GB Z:\ PROGRAM | XXXXXXXXXXXXXXXXXXXXXX ]

    Now, I want to create a 500GB RAID-1 partition (I'm not sure "partition" is the right word here) on the rightmost end of the two HDDs above, without losing any existing data from either disk. Here is the expected layout:

        HDD1 (1TB): [ 200GB C:\ SYSTEM  | XXXXXX | 500GB D:\ DATA - RAID-1 ]
        HDD2 (2TB): [ 500GB Z:\ PROGRAM | XXXXXXXXXXXXXXXX | 500GB D:\ DATA - RAID-1 ]

    Setting aside data loss for the moment: is it possible to get that final layout using Intel RST? I previously tried this layout using dynamic disks and Windows software RAID, and it worked as expected; however, its resyncing after an OS failure is ugly enough that I want to avoid it. If it is possible, is there a way to keep the data on the existing partitions untouched, or at least keep the SYSTEM partition safe (I'm okay with losing the PROGRAM partition)? And are there any strict or special steps I should follow in the Intel RST manager to achieve this? If the answer to all of the above is "no", could you please suggest other possible layouts that leave the C:\ SYSTEM partition untouched?

    Read the article

  • IIS6 site using integrated authentication (NTLM) fails when accessed with Win7 / IE8

    - by Ciove
    Hi, I'm having pretty similar problems to those described in case 139099, but the fix there doesn't seem to work for me. Here are the details. Server: Win2003Srv R2 SP2 (standalone, not a member of a domain); IIS6, TCP/443 (https); anonymous access disabled; Integrated Windows authentication enabled; local user accounts; each user account has its own virtual folder with change access, plus read access to the site root; the adsutil NTAuthenticationProviders "NTLM" setting applied to the site root and to each user account's virtual folder. Client: Win7 Enterprise, member of an AD domain, IE8. It allows three login attempts, then fails. I'm using [webservername][username] in the logon window (Windows Security). Logon using other browsers (Chrome and Firefox) works OK. The web services log shows one 401.2 and two 401.1 events. The Security event log shows two events: the first is Failure Audit (680); the second is Failure Audit (529) with these details:

        Logon Failure:
        Reason: Unknown user name or bad password
        User Name: [username]
        Domain: [webservername]
        Logon Type: 3
        Logon Process: NtLmSsp
        Authentication Package: NTLM
        Workstation Name: [MyWorkstation]
        Caller User Name: -
        Caller Domain: -
        Caller Logon ID: -
        Caller Process ID: -
        Transited Services: -
        Source Network Address: [999.999.999.999]
        Source Port: 20089

    Any ideas appreciated.
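
    For reference, a hedged sketch of the NTLM-only metabase setting mentioned above, assuming the default site ID of 1 (adjust the path to the real site and virtual directories):

        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs set w3svc/1/root/NTAuthenticationProviders "NTLM"
        iisreset

    Since Chrome and Firefox work, the usual suspect with IE8 on a domain-joined Windows 7 client is NTLMv2: Win7 defaults to sending NTLMv2 responses only, and a standalone Server 2003 box may need the "Network security: LAN Manager authentication level" policy relaxed on one side or the other before the 529 audits stop.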

    Read the article

  • Mesh-networked servers via VPN

    - by microspino
    I have a design idea and I would like some advice from SF about it. I have 5 customers with small real-estate databases. I've built a desktop app for them, and now they would like to merge their databases to share their data. I don't want to centralize everything in one place, nor do I want to do server maintenance. They also told me that all of them have little servers and maintenance guys available in their offices. Although everything seems suited to a web application, I had the idea to experiment with something new: each customer's small server would be connected to the others in a sort of mesh network, with no single point of failure, through VPNs. If one of the servers went down, the customers could still reach their databases through one of the other mesh-networked servers instead of the local one that is down. During normal operations, all the servers sync the db with the others through the VPNs. I can accept a half-day window of non-synced data; in other words, since I don't need real-time synchronization, the servers don't always have to be in sync. I can migrate my data over to non-SQL technologies like CouchDB or Redis or whatever you suggest. As you can see, I don't have a lot of constraints, and although I could go with a web application, I would like to delegate and decentralize support, data privacy, and management to my customers' offices as much as I can. Is that a crazy idea? Do you know if something similar exists? Which technology would you suggest?
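
    Something similar does exist off the shelf: CouchDB's multi-master replication is practically built for this topology. As a hedged sketch (database name and peer host are placeholders), each office runs its own CouchDB and holds a continuous replication to every peer over the VPN, so writes converge in both directions with no master:

        curl -X POST http://localhost:5984/_replicate \
            -H "Content-Type: application/json" \
            -d '{"source": "realestate", "target": "http://peer1.vpn:5984/realestate", "continuous": true}'

    One such call per direction per peer gives the full mesh, and the half-day sync window is easily met; the cost is that conflicting edits to the same document surface as conflict revisions the app must resolve.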

    Read the article

  • Nagios DNX plugins

    - by danneh3826
    I'm toying with the idea of multiple Nagios instances set up to monitor our infrastructure. I've looked at the various methods of distributed Nagios checks, and I think DNX comes out the closest. DNX handles failure of worker nodes; that's fine. What happens if the main DNX server fails, though? Is there a way to replicate the server too? I'm using AWS EC2 primarily, so I can utilise Elastic Load Balancing for the web UI, but I need to be able to handle failure of the AZ where the monitoring server lives, and essentially have a second server pick up the checking load (active/passive, active/active -- so long as it doesn't fail completely). The other thing I'm trying to solve is an issue with routing. What I'd like is to have multiple nodes report a fault before Nagios confirms it as critical. Not for the NRPE checks, as they're pretty self-explanatory, but for things more like check_ping. I often have routing issues out of AWS to certain datacenters, so Nagios can report bad/no ping or a timeout as a critical issue even though the machine in question is working fine. Would it be possible to have a setup where a worker reports a service check as critical, and a second worker node (positioned in another datacenter/AZ) also has to report the service as critical before the Nagios central server issues a critical alert? I realise I might be asking a bit much (how far down the line do you go setting up failover systems before it starts to get ridiculous?), but surely someone must have thought of this scenario when developing DNX?
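
    For the quorum idea, one hedged approach that needs no DNX changes is Nagios's stock check_cluster plugin: run the same ping check from workers pinned to two different AZs as ordinary services, then alert only on an aggregate service (the names and thresholds here are illustrative):

        define service {
            use                 generic-service
            host_name           target-host
            service_description Ping-Quorum
            check_command       check_ping_cluster
        }

        define command {
            command_name check_ping_cluster
            command_line $USER1$/check_cluster --service -l "ping quorum" -w @1: -c @2: \
                -d $SERVICESTATEID:target-host:Ping-AZ1$,$SERVICESTATEID:target-host:Ping-AZ2$
        }

    With -c @2:, the aggregate only goes critical when both member checks are non-OK, so a routing blip visible from a single datacenter never pages anyone.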

    Read the article

  • Microsoft Deployment Toolkit 2012 Error

    - by Jacob Schaer
    I just started with MDT 2012 recently, in hopes of eventually getting away from using Ghost to deploy all of our department computers. When I test deploy in VirtualBox, it deploys the OS properly but stops because of a network driver failure (it gets the "could not allocate resources" issue). On physical hardware (a Latitude E6500, an Optiplex 980, and an older Latitude) it gets through the multicast and stops immediately after with: "Setup was unable to create a new system partition or locate an existing system partition. See the Setup log files for more information." I've looked at the logs and never see anything of note. Originally I was using DriverPacks from DriverPacks.net, but thinking it was a driver issue, I switched over to using Dell's cab driver packs. Still the same issue. I checked the HDD and it's fine -- properly partitioned, set to bootable, and loaded with all the proper OS installer files. I'm using a flash drive to do the install; when I make changes to the deployment share I rebuild and copy the ISO to the drive, then use YUMI multiboot to start the ISO (probably irrelevant).
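
    For what it's worth, this "unable to create a new system partition" error is classically associated with Setup booting from a USB stick that Windows momentarily treats as the system disk, so the YUMI drive may not be irrelevant after all. A hedged sketch of the usual workaround, assuming disk 0 is the target and can be wiped: press Shift+F10 at the error for a command prompt, prepare the disk by hand, then retry the step.

        diskpart
        select disk 0
        clean
        create partition primary
        active
        format fs=ntfs quick
        exit

    If that clears it on one model, baking an equivalent clean step into the task sequence's "Format and Partition Disk" settings is the more permanent fix.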

    Read the article

  • Linux RAID: Replacing Failed Drive... Permanently

    - by user137519
    Okay, odd question here. I have a server with RAID 5. A drive failed, physically, in a really odd way: the machine boots and the drive is seen by the BIOS, but no partition can be seen on the drive consistently (it comes and goes). With 2 out of 3 drives working, I made a new spare disk and added it, and the RAID 5 rebuilt clean. All appears well, but when I reboot it keeps trying to use the 2nd drive, which doesn't give any partition data, so of course the RAID 5 gets 2 out of 3 again. The status of my drives is as follows: /dev/sda2: good; /dev/sdb2: bad (the drive has a physical problem, so no partition data); /dev/sdc2: good; /dev/sdd2: good. Every time I reboot, mdadm seems to keep trying to use /dev/sdb, which has the physical failure (although it spins and is detected). /dev/sdd is the new drive I created. I added /dev/sdd to the RAID and it rebuilds, but this action isn't remembered upon reboot, so it keeps listing /dev/sda and /dev/sdc but doesn't use the perfectly good /dev/sdd until I re-add it manually. I've tried removing the dead drive with the mdadm tool, but as it cannot see the /dev/sdb partitions, it will not fail or remove it (it says the partition doesn't exist). The /etc/mdadm.conf was automatically made on the original OS install and only lists:

        DEVICE partitions
        MAILADDR root
        ARRAY /dev/md2 super-minor=2
        ARRAY /dev/md0 super-minor=0
        ARRAY /dev/md1 super-minor=1

    Basically just the RAIDs to use on boot. I need to remove this semi-dead drive (/dev/sdb), but I'd prefer to know why this is happening before I do. Any ideas or suggestions? I suppose I could attempt to clone/replace /dev/sdb (the partitions on the drive show up, then disappear shortly after), but given the partition's "Cheshire cat" behaviour this seems risky to me, and as I have a working spare it seems unnecessary. Thanks in advance for your insight.
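
    A hedged sketch of making the membership stick, assuming /dev/md2 is the affected array and the device names match the description above: pin the arrays to their UUIDs so boot-time assembly stops depending on scanning the half-dead disk, and blank the old member so it can never be picked up again.

        mdadm --manage /dev/md2 --fail /dev/sdb2 --remove /dev/sdb2   # may refuse while the kernel can't see the partition
        mdadm --zero-superblock /dev/sdb2                             # only once you're sure nothing else lives there
        mdadm --manage /dev/md2 --add /dev/sdd2
        mdadm --detail --scan                                         # paste the ARRAY UUID=... lines into /etc/mdadm.conf

    On distros that assemble the RAID from an initramfs, the updated mdadm.conf also has to be rebuilt into it (mkinitrd, or update-initramfs -u on Debian-likes) before the change survives a reboot.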

    Read the article

  • Restore a database with LDF file only

    - by Martin
    First of all, I know how stupid it is not to have any backup. I can't help it, but I have to (try to) solve it. I have a transaction log (LDF) file from a SQL Server 2000 database that contains all transactions since the creation of the database. No truncation has been done. The MDF file is gone, probably because of some disk failure. There is no backup -- not of the original database and not of the transaction log. I have tried to link the transaction log to a new, clean database, but (of course) that failed because SQL Server checks the identity of both files. I have read about software that can read the transaction log. ApexSQL seems to do that; I tried to install the trial version, but it gives weird errors when trying to start the program. Does anyone know a solution for me? It may involve third-party software, but I'd prefer a clean SQL Server solution.

    Read the article

  • Powershell Script Scheduled Task Stopped Running (Could not Start)

    - by Hatsune Yuki
    I'm running a scheduled task (for a PowerShell script) on Windows Server 2003. I believe the script works fine. The task is scheduled to run every 10 minutes from 7:00am to 11:50pm every day. However, it never runs for more than a day; it always stops some time in the afternoon (between 2pm and 6pm). I'm not sure exactly what happens, but I always get the error: The attempt to log on to the account associated with the task failed, therefore, the task did not run. The specific error is: 0x80070569: Logon failure: the user has not been granted the requested logon type at this computer. Verify that the task's Run-as name and password are valid and try again. It seems that most people with this error are told to grant the user "log on as a batch job"; however, this option is greyed out for me. I searched other places where users have similar problems, but the solutions are not written in detail (some of them have something to do with GPO). I've only used the basic features of Windows Server and have no clue how to get to the place they are referring to. Can someone please confirm whether "log on as a batch job" is indeed the solution, and provide a detailed walkthrough of how to apply it? Thanks. P.S. Someone suggested http://technet.microsoft.com/en-us/library/cc755659(v=ws.10) -- I tried to follow the method for a web server with a domain, but got stuck on the 6th step, where it mentions a Group Policy Object. I don't know where that is.
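
    Yes -- the "Log on as a batch job" right (internally SeBatchLogonRight) is exactly what error 0x80070569 complains about. If the checkbox is greyed out because a domain GPO owns the setting, it has to be granted in that GPO (Computer Configuration > Windows Settings > Security Settings > Local Policies > User Rights Assignment); on a standalone server, a hedged sketch of granting it from the command line uses ntrights.exe from the Windows Server 2003 Resource Kit Tools (the account name is a placeholder):

        ntrights -u MYDOMAIN\taskuser +r SeBatchLogonRight

    Once the task's run-as account actually holds the right, the 0x80070569 logon failures should stop.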

    Read the article

  • 100% uptime for a web application

    - by Chris Lively
    We received an interesting "requirement" from a client today. They want 100% uptime with off-site failover on a web application. From our web application's viewpoint, this isn't an issue: it was designed to scale out across multiple database servers, etc. However, from a networking standpoint, I just can't seem to figure out how to make it work. In a nutshell, the application will live on servers within the client's network. It is accessed by both internal and external people. They want us to maintain an off-site copy of the system that, in the event of a serious failure at their premises, would immediately pick up and take over. Now we know there is absolutely no way to resolve it for internal people (carrier pigeon?), but they want the external users to not even notice. Quite frankly, I haven't the foggiest idea how this might be possible. It seems that if they lose Internet connectivity, then we would have to make a DNS change to forward traffic to the external machines... which, of course, takes time. Ideas? UPDATE: I had a discussion with the client today and they clarified the issue. They stuck by the 100% number, saying the application should stay active even in the event of a flood. However, that requirement only kicks in if we host it for them. They said they would handle the uptime requirement if the application lives entirely on their servers. You can guess my response.
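
    For the external users, the standard lever is DNS with an aggressively low TTL in front of both sites; as a hedged sketch, the zone record might look like this (names and addresses are placeholders):

        www   60   IN   A   203.0.113.10     ; primary, inside the client's network
        ; a health-check-driven DNS service flips the record on failure:
        ; www 60   IN   A   198.51.100.10    ; off-site standby

    With a 60-second TTL and monitored failover, public cutover drops to a minute or two -- close to, but never literally, 100%, which is worth spelling out in the contract.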

    Read the article

  • 150 TB and growing, but how to grow?

    - by seandavi
    My group currently has two largish storage servers, both NAS boxes running Debian Linux. The first is an all-in-one 24-disk (SATA) server that is several years old; we have two hardware RAIDs set up on it, with LVM over those. The second server is 64 disks divided over 4 enclosures, each a hardware RAID 6, connected via external SAS; we use XFS with LVM over that to create 100TB of usable storage. All of this works pretty well, but we are outgrowing these systems. Having built two such servers, and still growing, we want to build something that allows us more flexibility in terms of future growth and backup options, behaves better under disk failure (checking the larger filesystem can take a day or more), and can stand up in a heavily concurrent environment (think small compute cluster). We do not have system administration support, so we administer all of this ourselves (we are a genomics lab). So, what we seek is a relatively low-cost storage solution with acceptable performance that will allow future growth and flexible configuration (think ZFS with different pools having different operating characteristics). We are probably outside the realm of a single NAS. We have been thinking about a combination of ZFS (on OpenIndiana, for example) or btrfs per server, with GlusterFS running on top of that, if we do it ourselves. What we are weighing that against is simply biting the bullet and investing in Isilon or 3Par storage solutions. Any suggestions or experiences are appreciated.
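
    To make "different pools with different operating characteristics" concrete, a hedged sketch under OpenIndiana (device names and layout are illustrative, not a sizing recommendation):

        zpool create genomics raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
        zfs create -o compression=on -o recordsize=128K genomics/archive   # large sequential reads
        zfs create -o recordsize=8K genomics/scratch                       # small random I/O from the cluster

    Scrubs replace the day-long filesystem checks, and a Gluster volume can then be layered across one such pool per server for horizontal growth.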

    Read the article

  • Sync, share and backup policy using NAS

    - by Cue
    I'm trying to come up with a way to keep my music, photos, and movies in sync while sharing them and keeping a backup. Currently I have an iMac in Greece and a MBP with me in the UK. As a result, I've ended up with 2 iPhoto and 2 iTunes libraries, not to mention documents scattered here and there, user settings, etc. I also like to have a backup in case of a drive failure or the need to clean install. It seems that iPhoto and iTunes don't work really well with networked libraries. The way I think about it is to keep my iTunes and iPhoto libraries on a NAS, but also rsync daily to my MBP to have a local copy. That way my files are shared across the network and also act as a backup; in addition, I get to have my files wherever I take my MBP, plus the ability to clean install. The tricky part is keeping the iMac, which is miles away, in sync. Again I'm considering a mirror setup (a NAS there, rsynced to the iMac) as well as an rsync between the two NAS boxes. It pretty much resembles the way Dropbox works, sans the requirement to go through their servers, but I'm no "superuser" and don't really know if such a setup is even feasible. It looks like there are so many things that can go wrong.
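
    For the bidirectional legs (MBP <-> NAS, NAS <-> iMac), a one-way rsync mirror will happily overwrite whichever side changed more recently, so a two-way tool is safer; a hedged sketch using Unison (hosts and paths are placeholders):

        # MBP <-> local NAS, run periodically
        unison /Users/cue/Music ssh://nas.local//volume1/Music -batch -times

        # local NAS <-> the NAS at the iMac's end, over the slow link
        unison /volume1/Music ssh://imac-nas.example.org//volume1/Music -batch -times

    Unison keeps per-replica state, so only changes cross the wire (much like rsync), but it flags conflicting edits instead of silently clobbering one side.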

    Read the article

  • Redirecting or routing all traffic to OpenVPN on a Mac OS X client

    - by sdr56p
    I have configured an OpenVPN (2.2.1) server on an Ubuntu virtual machine in the Amazon Elastic Compute Cloud. The server is up and running. I have installed OpenVPN (2.2.1) on a Mac OS X (10.8.2) client and I am using the openvpn2 binary to connect (as opposed to other clients like Tunnelblick or Viscosity). I can connect with the client and successfully ping or ssh the server through the tunnel. However, I can't redirect all Internet traffic through the VPN, even when I use the push "redirect-gateway def1 bypass-dhcp" option in server.conf. When I connect to the server with that configuration, I get a successful connection, but then an infinite series of error messages: "write UDPv4: No route to host (code=65)". Traffic routing seems to be compromised, because I can no longer access anything -- not even the OpenVPN server (by pinging 10.8.0.1, for instance). This is beyond me; I am finding little help on the web and don't know what to try next. I don't think it is a problem of forwarding the traffic on the server since, first, I have taken care of that and, second, I can't even ping the VPN server locally through the tunnel (or ping anything at all, for that matter). Thank you for your help. Here is the server.conf file:

        port 1194
        proto udp
        dev tun
        ca ca.crt
        cert ec2-server.crt
        key ec2-server.key  # This file should be kept secret
        dh dh1024.pem
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        push "redirect-gateway def1 bypass-dhcp"
        client-to-client
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status.log
        verb 3

    And the client.conf file:

        client
        dev tun
        proto udp
        remote servername.com 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca ca.crt
        cert Toto5.crt
        key Toto5.key
        ns-cert-type server
        comp-lzo
        verb 3

    Here is the connection log with the error messages (two runs; only the second received the redirect-gateway push):

        $ sudo openvpn2 --config client.conf
        Wed Mar 13 22:58:22 2013 OpenVPN 2.2.1 x86_64-apple-darwin12.2.0 [SSL] [LZO2] [eurephia] built on Mar 4 2013
        Wed Mar 13 22:58:22 2013 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables
        Wed Mar 13 22:58:22 2013 LZO compression initialized
        Wed Mar 13 22:58:22 2013 Control Channel MTU parms [ L:1542 D:138 EF:38 EB:0 ET:0 EL:0 ]
        Wed Mar 13 22:58:22 2013 Socket Buffers: R=[196724->65536] S=[9216->65536]
        Wed Mar 13 22:58:22 2013 Data Channel MTU parms [ L:1542 D:1450 EF:42 EB:135 ET:0 EL:0 AF:3/1 ]
        Wed Mar 13 22:58:22 2013 Local Options hash (VER=V4): '41690919'
        Wed Mar 13 22:58:22 2013 Expected Remote Options hash (VER=V4): '530fdded'
        Wed Mar 13 22:58:22 2013 UDPv4 link local: [undef]
        Wed Mar 13 22:58:22 2013 UDPv4 link remote: 54.234.43.171:1194
        Wed Mar 13 22:58:22 2013 TLS: Initial packet from 54.234.43.171:1194, sid=ffbaf343 d0c1a266
        Wed Mar 13 22:58:22 2013 VERIFY OK: depth=1, /C=US/ST=CA/L=SanFrancisco/O=Fort-Funst ... ost.domain
        Wed Mar 13 22:58:22 2013 VERIFY OK: nsCertType=SERVER
        Wed Mar 13 22:58:22 2013 VERIFY OK: depth=0, /C=US/ST=CA/L=SanFrancisco/O=Fort-Funst ... ost.domain
        Wed Mar 13 22:58:23 2013 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key
        Wed Mar 13 22:58:23 2013 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Wed Mar 13 22:58:23 2013 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key
        Wed Mar 13 22:58:23 2013 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Wed Mar 13 22:58:23 2013 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA
        Wed Mar 13 22:58:23 2013 [ec2-server] Peer Connection Initiated with 54.234.43.171:1194
        Wed Mar 13 22:58:25 2013 SENT CONTROL [ec2-server]: 'PUSH_REQUEST' (status=1)
        Wed Mar 13 22:58:25 2013 PUSH: Received control message: 'PUSH_REPLY,route 10.8.0.0 255.255.255.0,topology net30,ping 10,ping-restart 120,ifconfig 10.8.0.6 10.8.0.5'
        Wed Mar 13 22:58:25 2013 OPTIONS IMPORT: timers and/or timeouts modified
        Wed Mar 13 22:58:25 2013 OPTIONS IMPORT: --ifconfig/up options modified
        Wed Mar 13 22:58:25 2013 OPTIONS IMPORT: route options modified
        Wed Mar 13 22:58:25 2013 ROUTE default_gateway=0.0.0.0
        Wed Mar 13 22:58:25 2013 TUN/TAP device /dev/tun0 opened
        Wed Mar 13 22:58:25 2013 /sbin/ifconfig tun0 delete
        ifconfig: ioctl (SIOCDIFADDR): Can't assign requested address
        Wed Mar 13 22:58:25 2013 NOTE: Tried to delete pre-existing tun/tap instance -- No Problem if failure
        Wed Mar 13 22:58:25 2013 /sbin/ifconfig tun0 10.8.0.6 10.8.0.5 mtu 1500 netmask 255.255.255.255 up
        Wed Mar 13 22:58:25 2013 /sbin/route add -net 10.8.0.0 10.8.0.5 255.255.255.0
        add net 10.8.0.0: gateway 10.8.0.5
        Wed Mar 13 22:58:25 2013 Initialization Sequence Completed
        ^CWed Mar 13 22:58:30 2013 event_wait : Interrupted system call (code=4)
        Wed Mar 13 22:58:30 2013 TCP/UDP: Closing socket
        Wed Mar 13 22:58:30 2013 /sbin/route delete -net 10.8.0.0 10.8.0.5 255.255.255.0
        delete net 10.8.0.0: gateway 10.8.0.5
        Wed Mar 13 22:58:30 2013 Closing TUN/TAP interface
        Wed Mar 13 22:58:30 2013 SIGINT[hard,] received, process exiting

        toto5:ttntec2 Dominic$ sudo openvpn2 --config client.conf --remote ec2-54-234-43-171.compute-1.amazonaws.com
        Wed Mar 13 22:58:57 2013 OpenVPN 2.2.1 x86_64-apple-darwin12.2.0 [SSL] [LZO2] [eurephia] built on Mar 4 2013
        Wed Mar 13 22:58:57 2013 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables
        Wed Mar 13 22:58:57 2013 LZO compression initialized
        Wed Mar 13 22:58:57 2013 Control Channel MTU parms [ L:1542 D:138 EF:38 EB:0 ET:0 EL:0 ]
        Wed Mar 13 22:58:57 2013 Socket Buffers: R=[196724->65536] S=[9216->65536]
        Wed Mar 13 22:58:57 2013 Data Channel MTU parms [ L:1542 D:1450 EF:42 EB:135 ET:0 EL:0 AF:3/1 ]
        Wed Mar 13 22:58:57 2013 Local Options hash (VER=V4): '41690919'
        Wed Mar 13 22:58:57 2013 Expected Remote Options hash (VER=V4): '530fdded'
        Wed Mar 13 22:58:57 2013 UDPv4 link local: [undef]
        Wed Mar 13 22:58:57 2013 UDPv4 link remote: 54.234.43.171:1194
        Wed Mar 13 22:58:57 2013 TLS: Initial packet from 54.234.43.171:1194, sid=a0d75468 ec26de14
        Wed Mar 13 22:58:58 2013 VERIFY OK: depth=1, /C=US/ST=CA/L=SanFrancisco/O=Fort-Funst ... ost.domain
        Wed Mar 13 22:58:58 2013 VERIFY OK: nsCertType=SERVER
        Wed Mar 13 22:58:58 2013 VERIFY OK: depth=0, /C=US/ST=CA/L=SanFrancisco/O=Fort-Funst ... ost.domain
        Wed Mar 13 22:58:58 2013 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key
        Wed Mar 13 22:58:58 2013 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Wed Mar 13 22:58:58 2013 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key
        Wed Mar 13 22:58:58 2013 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Wed Mar 13 22:58:58 2013 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA
        Wed Mar 13 22:58:58 2013 [ec2-server] Peer Connection Initiated with 54.234.43.171:1194
        Wed Mar 13 22:59:00 2013 SENT CONTROL [ec2-server]: 'PUSH_REQUEST' (status=1)
        Wed Mar 13 22:59:00 2013 PUSH: Received control message: 'PUSH_REPLY,redirect-gateway def1 bypass-dhcp,route 10.8.0.0 255.255.255.0,topology net30,ping 10,ping-restart 120,ifconfig 10.8.0.6 10.8.0.5'
        Wed Mar 13 22:59:00 2013 OPTIONS IMPORT: timers and/or timeouts modified
        Wed Mar 13 22:59:00 2013 OPTIONS IMPORT: --ifconfig/up options modified
        Wed Mar 13 22:59:00 2013 OPTIONS IMPORT: route options modified
        Wed Mar 13 22:59:00 2013 ROUTE default_gateway=0.0.0.0
        Wed Mar 13 22:59:00 2013 TUN/TAP device /dev/tun0 opened
        Wed Mar 13 22:59:00 2013 /sbin/ifconfig tun0 delete
        ifconfig: ioctl (SIOCDIFADDR): Can't assign requested address
        Wed Mar 13 22:59:00 2013 NOTE: Tried to delete pre-existing tun/tap instance -- No Problem if failure
        Wed Mar 13 22:59:00 2013 /sbin/ifconfig tun0 10.8.0.6 10.8.0.5 mtu 1500 netmask 255.255.255.255 up
        Wed Mar 13 22:59:00 2013 /sbin/route add -net 54.234.43.171 0.0.0.0 255.255.255.255
        add net 54.234.43.171: gateway 0.0.0.0
        Wed Mar 13 22:59:00 2013 /sbin/route add -net 0.0.0.0 10.8.0.5 128.0.0.0
        add net 0.0.0.0: gateway 10.8.0.5
        Wed Mar 13 22:59:00 2013 /sbin/route add -net 128.0.0.0 10.8.0.5 128.0.0.0
        add net 128.0.0.0: gateway 10.8.0.5
        Wed Mar 13 22:59:00 2013 /sbin/route add -net 10.8.0.0 10.8.0.5 255.255.255.0
        add net 10.8.0.0: gateway 10.8.0.5
        Wed Mar 13 22:59:00 2013 Initialization Sequence Completed
        Wed Mar 13 22:59:00 2013 write UDPv4: No route to host (code=65)
        Wed Mar 13 22:59:00 2013 write UDPv4: No route to host (code=65)
        Wed Mar 13 22:59:01 2013 write UDPv4: No route to host (code=65)
        Wed Mar 13 22:59:01 2013 write UDPv4: No route to host (code=65)
        Wed Mar 13 22:59:01 2013 write UDPv4: No route to host (code=65)
        Wed Mar 13 22:59:02 2013 write UDPv4: No route to host (code=65)
        Wed Mar 13 22:59:02 2013 write UDPv4: No route to host (code=65)
        Wed Mar 13 22:59:02 2013 write UDPv4: No route to host (code=65)
        Wed Mar 13 22:59:02 2013 write UDPv4: No route to host (code=65)
        Wed Mar 13 22:59:02 2013 write UDPv4: No route to host (code=65)
        ...

    The routing table after a connection WITHOUT the redirect-gateway push (traffic is not redirected to the VPN and everything works fine; I can ping or ssh the OpenVPN server and reach all other Internet resources through my default gateway):

        Destination         Gateway             Flags     Refs  Use   Netif  Expire
        default             user148-1.wireless  UGSc      50    0     en1
        10.8/24             10.8.0.5            UGSc      2     7     tun0
        10.8.0.5            10.8.0.6            UH        3     2     tun0
        127                 localhost           UCS       0     0     lo0
        localhost           localhost           UH        6     6692  lo0
        client.openvpn.net  client.openvpn.net  UH        3     18    lo0
        142.1.148/22        link#5              UCS       2     0     en1
        user148-1.wireless  0:90:b:27:10:71     UHLWIir   50    0     en1    76
        user150-173.wirele  localhost           UHS       0     0     lo0
        142.1.151.255       ff:ff:ff:ff:ff:ff   UHLWbI    0     2     en1
        169.254             link#5              UCS       1     0     en1
        169.254.255.255     0:90:b:27:10:71     UHLSWi    0     0     en1    71

    The routing table after a connection WITH the redirect-gateway option enabled as in the server.conf file above (all Internet traffic should be redirected to the VPN tunnel, but nothing works and I can't access any Internet resources at all):

        Destination         Gateway             Flags     Refs  Use   Netif  Expire
        0/1                 10.8.0.5            UGSc      1     0     tun0
        default             user148-1.wireless  UGSc      7     0     en1
        10.8/24             10.8.0.5            UGSc      0     0     tun0
        10.8.0.5            10.8.0.6            UHr       6     0     tun0
        54.234.43.171/32    0.0.0.0             UGSc      1     0     en1
        127                 localhost           UCS       0     0     lo0
        localhost           localhost           UH        3     6698  lo0
        client.openvpn.net  client.openvpn.net  UH        0     27    lo0
        128.0/1             10.8.0.5            UGSc      2     0     tun0
        142.1.148/22        link#5              UCS       1     0     en1
        user148-1.wireless  0:90:b:27:10:71     UHLWIir   1     0     en1    833
        user150-173.wirele  localhost           UHS       0     0     lo0
        169.254             link#5              UCS       1     0     en1
        169.254.255.255     0:90:b:27:10:71     UHLSW     0     0     en1
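
    Reading the second log and routing table together, the telling lines are ROUTE default_gateway=0.0.0.0 and the 54.234.43.171/32 route whose gateway is 0.0.0.0: the client never discovered the LAN's default gateway, so the host route that should protect the tunnel's own UDP traffic points nowhere, and every encapsulated packet dies with "no route to host". A hedged workaround, assuming the Mac's real LAN gateway is 192.168.1.1 (read the actual address from netstat -rn before connecting):

        # pin a correct host route to the VPN server, then start the client as usual
        sudo route add -host 54.234.43.171 192.168.1.1
        sudo openvpn2 --config client.conf

    With a valid host route in place, the pushed 0/1 and 128.0/1 routes can capture everything else without black-holing the tunnel itself; if the client still logs default_gateway=0.0.0.0, the 2.2.x gateway detection on OS X 10.8 is the thing to chase, and a newer client build is the cleaner fix.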

    Read the article

  • How can I create a simple Exchange 2010 backup solution?

    - by bduncanj
    I'm sure this question has been asked a dozen times in one form or another; however, after much searching, there doesn't appear to be an obvious, simple recovery solution for a single Exchange box. We're using Exchange 2010 on a single server; the server hosts the AD, and nothing else on the network uses the AD. The intent is to run this server as you would an externally hosted Exchange server: access only via HTTP (RPC mode or OWA), all other ports blocked. I have a daily backup running, using the Windows Server 2008 Volume Shadow Copy Service to back up the Exchange data to an external hard disk. My question is: how do I perform a bare-metal recovery of this server? 1) Do I need to explicitly include the Active Directory information in this nightly backup, or will it be there by virtue of the fact that this system is the primary AD server and the Windows backup service knows this? 2) I understand I can re-install Server 2008 onto new hardware (in the case of hardware failure) and then run the Exchange 2010 setup.exe with a /recover argument, referencing the backup volume. 3) It is acceptable to have some downtime during this recovery process. But is there anything else I should be aware of? Thanks! Duncan
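
    As a hedged sketch of the nightly job using the built-in tooling (the target drive letter is a placeholder), the -vssFull switch is what tells Windows Server Backup to take an application-consistent copy and let the Exchange VSS writer truncate its logs:

        wbadmin start backup -backupTarget:E: -include:C: -allCritical -vssFull -quiet

    -allCritical pulls in the system state, and since this box is the domain controller, that includes Active Directory -- which bears on question 1: no separate AD step should be needed as long as all critical volumes are in the set. For question 2, the Exchange recovery switch is setup /m:RecoverServer, run after the OS and AD have been restored.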

    Read the article

  • Full-text search locks up database - error 0x8001010e

    - by Stewart May
    Hi, we have a full-text catalog that is populated via a job every 15 minutes, like so: ALTER FULLTEXT INDEX ON [dbo].[WorkItemLongTexts] START INCREMENTAL POPULATION. We have encountered a problem where the database containing this catalog locks up. There are a couple of scenarios: we either see the job execute and the process hang with a wait type of UNKNOWN TOKEN, or we see another process hang with a wait type of MSSEARCH. Once this happens, the job continues to run but informs us that the request to start a full-text index population was ignored because a population is currently active. Looking in the full-text log files, we see the following error each time these problems occur: 2010-04-21 08:15:00.76 spid21s The full-text catalog health monitor reported a failure for full-text catalog "XXXFullTextCatalog" (5) in database "YYY" (14). Reason code: 0. Error: 0x8001010e (The application called an interface that was marshalled for a different thread.). The system will restart any in-progress population from the previous checkpoint. If this message occurs frequently, consult SQL Server Books Online for troubleshooting assistance. This is an informational message only. No user action is required. The only solution is to restart the SQL Server service and then the full-text service. This is now occurring on a daily basis, so any help would be appreciated.
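
    A hedged snippet for watching the catalog from T-SQL while this is happening (catalog name as in the log):

        SELECT FULLTEXTCATALOGPROPERTY('XXXFullTextCatalog', 'PopulateStatus') AS populate_status, -- 0 = idle, 1 = full population in progress
               FULLTEXTCATALOGPROPERTY('XXXFullTextCatalog', 'ItemCount')      AS item_count;

    If PopulateStatus never returns to 0 between job runs, each 15-minute START INCREMENTAL POPULATION is being issued on top of a stuck population, which points at the 0x8001010e marshalling failure in the health monitor rather than at the ALTER statement itself.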

    Read the article

  • Laptop seemingly randomly "freezes" to the point of no longer executing applications

    - by Aierou
    After upgrading to Windows 8 Pro on my Samsung Series 7 Chronos NP700Z5C-S04US (which may be relevant, I'm not sure), my computer began freezing: it stops executing any new service or application and stops updating the clock until a hard shutdown is performed. This seems to occur randomly after periods of inactivity, and I have no idea of the cause. These are measures I have already taken to attempt to stop it:
    - Obviously, Googling potential answers to this problem
    - Updating all drivers
    - Researching all events that occurred around the time of the failures (with no results)
    - Applying "bcdedit /set disabledynamictick no", which was a hotfix for what seemed to be the same error, but was not
    Here is some more, potentially related, information about the error:
    - No BSOD (actually, I haven't experienced a BSOD at all with Windows 8)
    - The computer seems to have a problem shutting down/restarting most of the time (it hangs at the point where it should completely turn off)
    - New sound instances are not able to play, but previously loaded containers function properly
    - As mentioned before, the clock freezes at the time of the error
    - USB devices function properly
    - Servers that I was running fail to respond on my end, but stay online
    If you require more information, please request it specifically and I will be happy to oblige. Thanks.

    Read the article

  • How do you enable webcam support in Facebook for Ubuntu 10.04?

    - by Jonathan
    I think I have finally arrived at an insolvable equation: Chromium v7 + Ubuntu 10.04 + Sun Java 6 + webcam + Facebook + Flash 10 = non-functional. All of the items listed above are potential points of failure in this situation, and any help narrowing them down would be fantastic. I am simply trying to enable webcam support directly through Facebook's website. Forum searches and the usual Googling turn up few posts related to this specific equation. Two of the major suggestions include: 1) installing the Sun-provided (I refuse to say Oracle, sob) Java implementation instead of the OpenJDK normally installed in Ubuntu -- and yes, after installing it, I did update all my defaults to use the Sun commands over the OpenJDK ones; 2) somehow enabling Facebook as a permitted site to access my webcam using the Flash settings. I have not been able to explore option 2 because I cannot find a way to adjust the Flash settings in Chromium 7. Other factors that do not help include the fact that I am pretty sure Facebook changes its webcam interface every 10 seconds just to keep troubleshooters and support personnel on their toes. If anyone has an OTP that informs us of the next shift in the app, a leak would be greatly appreciated!

    Read the article

  • Configuring PAM with pam_mount; getting a dlopen() HX_init error

    - by Jamie
    I'm trying to get automounting upon login working on Ubuntu 10.04 Beta 2. I didn't find a package for pam_mount, so I ended up downloading it and building it. This required: sudo apt-get install build-essential pkg-config libxml2-dev libssl-dev libpam-dev. Additionally, libHX-dev is required, but as of yesterday (23/4/2010) the packaged version (3.2) wasn't up to snuff (3.4 is needed), so I downloaded, compiled and installed that too: cd ./pam_mount-1.36/ && ./configure && make && sudo make install. When I tried pam_mount, I got this in my auth log:

        Apr 23 12:18:02 ubuntu sshd[1195]: PAM unable to dlopen(/lib/security/pam_mount.so): /lib/security/pam_mount.so: undefined symbol: HX_init
        Apr 23 12:18:02 ubuntu sshd[1195]: PAM adding faulty module: /lib/security/pam_mount.so
        Apr 23 12:18:06 ubuntu sshd[1195]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.20.182 user=jrisk
        Apr 23 12:18:06 ubuntu sshd[1195]: pam_winbind(sshd:auth): getting password (0x00000388)
        Apr 23 12:18:06 ubuntu sshd[1195]: pam_winbind(sshd:auth): pam_get_item returned a password
        Apr 23 12:18:06 ubuntu sshd[1195]: pam_winbind(sshd:auth): user 'jrisk' granted access
        Apr 23 12:18:06 ubuntu sshd[1195]: Accepted password for jrisk from 192.168.20.182 port 4369 ssh2
        Apr 23 12:18:06 ubuntu sshd[1195]: pam_unix(sshd:session): session opened for user jrisk by (uid=0)

    What do I need to do to get HX_init into the system? This is related to an answer I previously got here.
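
    The undefined HX_init symbol usually means that at runtime the loader is resolving an old libHX (the packaged 3.2) or cannot find the new one at all; it does not mean the pam_mount build failed. A hedged check and fix, assuming the hand-built libHX 3.4 installed itself under /usr/local/lib:

        ldd /lib/security/pam_mount.so | grep -i hx        # see which libHX the loader picks
        echo /usr/local/lib | sudo tee /etc/ld.so.conf.d/local.conf
        sudo ldconfig                                      # refresh the linker cache

    After ldconfig, PAM's dlopen() should bind HX_init from the 3.4 library; if an older distro libHX runtime still shadows it, removing that package clears the conflict.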

    Read the article

  • Cloning a failing disk (Win 7)

    - by daveh551
    I have a Windows 7 machine with several partitions on a 1.5TB drive. Windows has been complaining about disk errors and imminent failure, so I have purchased a new 2TB drive. The failing disk has not completely failed; in fact, I was able to boot Windows from it (after a couple of tries) and examine the SMART logs -- the only red item was 1 sector being reallocated. But when I try to clone it to the new drive using Acronis True Image Home (2010), True Image can see the drive, the partitions, and the contents, but when it goes to actually do the clone, it says "Failed to move. Make sure the destination disk is not smaller than the source disk, and that there are no errors on the disk" (or something like that). What are some other options for simply cloning the failing drive? I'd like to clone the entire disk, but am willing to do it partition by partition if necessary. Was this a known failing of the 2010 edition of ATI, or is it really something hosed in my system? Would upgrading to the 2012 edition be likely to work any better? (I'd download the trial and try it out, but if I remember right, the cloning operation is disabled in the trial version, and I don't have enough free disk space to make an entire image.) What are some other cloning software packages if ATI won't work? Note that I'm only looking to clone the disk, not make an image as a backup -- I use Ghost for that, and can fall back to that if I have to. It looks to me like CloneZilla would do the job. Any recommendations? Thanks, and if this duplicates other questions, I apologize.
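
    Most imaging tools, CloneZilla included, tend to abort on read errors, so the usual tool for a drive that is actively failing is GNU ddrescue, which copies everything readable first and only then retries the bad spots. A hedged sketch, assuming the old disk is /dev/sda and the new one /dev/sdb on a Linux live CD/USB (double-check with fdisk -l; this overwrites the target):

        sudo apt-get install gddrescue                       # package name on Debian/Ubuntu live media
        sudo ddrescue -f -n /dev/sda /dev/sdb rescue.map     # first pass: grab everything readable
        sudo ddrescue -f -r3 /dev/sda /dev/sdb rescue.map    # retry the bad areas up to 3 times

    Because it is a sector-level copy, all partitions and the boot sector come across in one pass, and the map file lets the run resume if the failing drive drops out mid-copy.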

    Read the article

  • NFS on top of GFS2 - does it work?

    - by Matthew
    We're currently using a NoSQL derivative called Splunk to receive our data. The software supports something called "search head pooling", in which the job-dispatching engine is housed on several servers that share a common storage point. Originally our intention was to use a clustered filesystem like GFS2 because of low latency, stability, and ease of setup. We set up GFS2, and it's working with no issues. However, when trying to run the software, it tries to create lock files and a bunch of other things that their support team can't quite explain; the ultimate feedback from them was that they only support NFS. Our network administration team heavily frowns on NFS (lack of stability, file-locking issues, etc.). So I was thinking about the possibility of setting up NFS on each server in the cluster to act as a wedge layer between the GFS2 filesystem and the software. Basically, configure each server to export the GFS2 filesystem's mountpoint via NFS, and then tell each server to connect to that NFS share. That way we aren't introducing any single point of failure should a dedicated NFS server go down, but the vendor gets their "required" NFS share. I'm just brainstorming ways around this, so please tear it apart :)
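
    A hedged sketch of the wedge layer on each node (the path and fsid are placeholders): every server exports the shared GFS2 mountpoint and then NFS-mounts its own export over the loopback, so the application only ever sees NFS semantics.

        # /etc/exports on every cluster node
        /mnt/gfs2/splunk  127.0.0.1(rw,sync,no_root_squash,fsid=77)

        # then, on the same node
        mount -t nfs -o vers=3,hard localhost:/mnt/gfs2/splunk /opt/splunk/shared

    A fixed fsid matters when exporting a cluster filesystem, and file locking is the part to test hardest: NFS's lockd has to coexist with GFS2's DLM, which is historically where loopback arrangements like this fall over.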

    Read the article

  • Reusing a RAID 5 Drive?

    - by User125
    We have two servers (an ML530 G2 and a DL380 G2) with identical HP 10K RPM SCSI drives in RAID 5. One is decommissioned and the other will be decommissioned shortly. However, one of the drives on the production server failed. My hope was to take one of the drives from the decommissioned server and pop it into the production server. Both are running RAID 5. I broke the array on the decommissioned server; to my knowledge, that should have wiped out all the volume and partition information. However, I do not know if it is then safe to take a drive from the decommissioned server and use it to replace the failed drive. Will the existing array see it as a replacement drive, wipe it, and rebuild? Or will it fail because the drive was used in an array before? Is there any remnant data that resides on the drives after deleting a RAID 5 array? These servers are 10-15 years old, so we're just trying to keep them alive until we decommission them. I'm not looking to pay a premium to find a vendor that still sells replacement drives for this system.

    Read the article
