Search Results

Search found 25727 results on 1030 pages for 'solution'.

Page 773/1030 | < Previous Page | 769 770 771 772 773 774 775 776 777 778 779 780  | Next Page >

  • Can't mount FAT32 drive under Ubuntu Linux

    - by Josh
    I have a 320GB USB drive with a single large FAT32 partition. The volume mounts perfectly fine on my Mac OS X 10.5.8 machine and Disk Utility on the Mac reports no issues with the volume. I can read/write all data on the drive. However, when I connect the drive to my Ubuntu 9.10 Karmic system, the partition does not mount. dmesg|tail says:

        [ 2752.334822] scsi3 : SCSI emulation for USB Mass Storage devices
        [ 2752.335040] usb-storage: device found at 3
        [ 2752.335044] usb-storage: waiting for device to settle before scanning
        [ 2757.330301] usb-storage: device scan complete
        [ 2757.331005] scsi 3:0:0:0: Direct-Access WD 3200AAK External 1.65 PQ: 0 ANSI: 0
        [ 2757.331772] sd 3:0:0:0: Attached scsi generic sg2 type 0
        [ 2757.355647] sd 3:0:0:0: [sdb] 625142448 512-byte logical blocks: (320 GB/298 GiB)
        [ 2757.360737] sd 3:0:0:0: [sdb] Write Protect is off
        [ 2757.360749] sd 3:0:0:0: [sdb] Mode Sense: 00 00 00 00
        [ 2757.360755] sd 3:0:0:0: [sdb] Assuming drive cache: write through
        [ 2757.367618] sd 3:0:0:0: [sdb] Assuming drive cache: write through
        [ 2757.367631] sdb: sdb1
        [ 2762.797622] sd 3:0:0:0: [sdb] Assuming drive cache: write through
        [ 2762.797636] sd 3:0:0:0: [sdb] Attached SCSI disk
        [ 2822.866228] FAT: bogus number of reserved sectors
        [ 2822.866237] VFS: Can't find a valid FAT filesystem on dev sdb1.

    When I run fsck.vfat -a /dev/sdb1 I get:

        root@cartman:~# fsck.vfat -a /dev/sdb1
        dosfsck 3.0.3, 18 May 2009, FAT32, LFN
        Logical sector size is zero.

    Googling "vfat Logical sector size is zero" produced no consensus as to the solution. I would prefer not to have to completely reformat the disk if possible, because it contains about 280GB of data I would rather not have to find a temporary home for. Any suggestions?
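
    Both the kernel's "bogus number of reserved sectors" and dosfsck's "Logical sector size is zero" refer to fields in the FAT32 boot sector (BPB) at the start of the partition. A minimal, read-only sketch (assuming the partition still shows up as /dev/sdb1, run as root) to dump those fields and confirm what the tools are seeing:

        # Read-only look at the FAT32 BIOS Parameter Block on /dev/sdb1.
        import struct

        with open("/dev/sdb1", "rb") as dev:
            boot = dev.read(512)

        bytes_per_sector = struct.unpack_from("<H", boot, 11)[0]   # BPB_BytsPerSec
        reserved_sectors = struct.unpack_from("<H", boot, 14)[0]   # BPB_RsvdSecCnt
        signature        = struct.unpack_from("<H", boot, 510)[0]  # should be 0xAA55

        print("bytes per logical sector:", bytes_per_sector)  # 0 would match the dosfsck message
        print("reserved sectors:        ", reserved_sectors)  # "bogus" per the kernel message
        print("boot signature:           0x%04X" % signature)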

    Read the article

  • Set scheduled task last result to 0x0 manually

    - by Rogier
    Every night a task runs that checks whether any scheduled task has a Last Result not equal to 0x0. If a scheduled task has an error like 0x1, an e-mail is automatically sent to me. As some tasks run only weekly, and sometimes an error occurs, the Last Result column keeps showing the last result of 0x1 and an e-mail with the error message is sent every night. I would like to set the Last Result column to 0x0 manually once I have solved the problem, so I don't get an e-mail with the error message every night. So is it possible to set a scheduled task's Last Result to 0x0 manually (or by a script)? @harrymc: see the script below that sends the e-mail. I can easily add a criterion to ignore result 0x1 (or another code), but this is not the solution, as most of the time this result is a real error and has to be e-mailed.

        set [email protected]
        set SMTPServer=SMTPserver
        set PathToScript=c:\scripts
        set [email protected]

        for /F "delims=" %%a in ('schtasks /query /v /fo:list ^| findstr /i "Taskname Result"') do call :Sub %%a
        goto :eof

        :Sub
        set Line=%*
        set BOL=%Line:~0,4%
        set MOL=%Line:~38%
        if /i %BOL%==Task (
            set name=%MOL%
            goto :eof
        )
        set result=%MOL%
        echo Task Name=%name%, Task Result=%result%
        if not %result%==0 (
            echo Task %name% failed with result %result% > %PathToScript%\taskcheckerlog.txt
            bmail %PathToScript%\taskcheckerlog.txt -t %YourEmailAddress% -a "Warning! Failed %name% Scheduled Task on %computername%" -s %SMTPServer% -f %FromAddress% -b "Task %name% failed with result %result% on CorVu scheduler %computername%"
        )
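
    As far as I know the Task Scheduler does not expose a way to write the Last Result field directly; the usual workaround is to re-run the task once the underlying problem is fixed, so the scheduler records a fresh 0x0. A sketch of that approach (the task name is a placeholder, and it assumes the task exits cleanly when re-run):

        # Re-run a fixed task so its Last Run Result becomes 0x0 again.
        import subprocess

        task_name = r"\MyWeeklyJob"   # placeholder - not from the original post

        subprocess.run(["schtasks", "/run", "/tn", task_name], check=True)
        # The next run of the nightly checker script should then see Result: 0 for this task.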

    Read the article

  • I need an admin toolset for Windows 2003 and 2008

    - by eugeneK
    I know this is far too general a question, but anyway: I need a few tools. I'll write down my tasks as a sysadmin, and if you have anything to automate my job I would be glad to hear it. I don't mind paying for the software needed unless it is way too expensive. First, I back up all files on the server to local/office storage. I 7-zip all SQL backup files, then move them over the network to a centralized location, and then FTP them from an office PC which has no FTP server installed and cannot have one. Backups happen at 4 AM in the morning, so I need to schedule the compression and the subsequent FTP transfer. I then FTP all IIS web applications as a differential backup; the same goes for VOD movies. The second tool I need is a system monitor which will watch all servers, both from the servers themselves and from an external location, for CPU/memory/hard disk and other basic failures. This tool should be able to call a website address with parameters, which will send me an e-mail with a full report on failure. The third tool I need is a way to get all Event Logs from 10 Windows-based servers without accessing each of them manually. If you know any solution, thanks in advance.
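
    For the first task, the compress-and-transfer step can be scripted and handed to Task Scheduler for the 4 AM run. A rough sketch (paths, host name and credentials are placeholders, and it assumes 7z.exe is installed and on the PATH):

        # Nightly "compress then FTP" step, meant to be run by Task Scheduler.
        import glob
        import subprocess
        from ftplib import FTP

        backup_dir = r"D:\SQLBackups"          # assumption
        archive = r"D:\Staging\sql-backup.7z"  # assumption

        # "7z a" adds files to an archive
        subprocess.run(["7z", "a", archive] + glob.glob(backup_dir + r"\*.bak"), check=True)

        ftp = FTP("office-ftp.example.com")    # placeholder host
        ftp.login("backup", "secret")          # placeholder credentials
        with open(archive, "rb") as fh:
            ftp.storbinary("STOR sql-backup.7z", fh)
        ftp.quit()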

    Read the article

  • Protocol to mount FAT32 network filesystem on Linux with ability to lock files (not advisory locks)

    - by nagul
    I have a FAT32 filesystem sitting on a NAS storage device (NSLU2) that I need to mount on my Ubuntu system. I've tried Samba and NFS mounts, but neither seems to support proper locking. More specifically, I am unable to save files to the mounted drive through GnuCash, KeePassX etc., which makes the share fairly useless. Is there a protocol that allows me to achieve this? Note that the NAS storage device is running a Linux OS, so I can run pretty much any protocol that has a Linux implementation. The only option I'm not looking for is to reformat the partition to ext3, which I'm not able to do due to other constraints. Alternatively, has anyone managed proper locking of a FAT32 system over the network using Samba? Or is advisory locking the best you get with a network-mounted FAT32 file system? I've thought of trying sshfs, but I've not found any indication that this will solve my problem. Edit: Okay, maybe I can reformat the drive, but to any file system except ext3. The "unslung" NSLU2 doesn't like more than one ext3 drive, and I already have one attached. So any solution that involves reformatting the drive to NTFS, HFS etc. is fine, as long as I can mount it on Linux and lock files.
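
    One way to see what kind of locking a given mount actually honours is to probe it directly. A small sketch (the mount point is a placeholder) that tries to take a POSIX byte-range lock on a file on the share - the kind of operation that typically fails on mounts without lock support:

        # Quick probe of lock support on the mounted share.
        import fcntl
        import os

        path = "/mnt/nas/lock-test.tmp"   # placeholder mount point
        fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o644)
        try:
            fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
            print("lockf() succeeded - the mount supports at least advisory POSIX locks")
        except OSError as err:
            print("lockf() failed:", err)   # typical symptom on shares without lock support
        finally:
            os.close(fd)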

    Read the article

  • Juniper NetScreen NS-5GT traffic monitoring

    - by blah
    I've done casual research into the subject and am truly dismayed at the lack of compatible tools for such a simple task. Maybe someone can provide assistance. We have a NetScreen NS-5GT in the office. I need to be able to get a glance of current traffic per endpoint -- I think the equivalent of 'get sessions' with byte counts/rates. I don't care about bars, graphs, and reports. Something as simple as a classic software firewall display would be perfect. I can't shell out money on something real like SolarWinds products, so a free solution is essential. I'm willing to do a little work but refuse to program something from scratch. It's not prudent right now for me to install a hub or otherwise mess around physically. There must be something out there I can use, maybe in combination. I don't believe I'm asking too much. Specific answers only please, e.g. monitoring software you know will actually work with this antiquated device. I've read about general approaches to the broader problem dozens of times already.
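
    Since the box already answers 'get session' on its CLI, one low-effort option short of buying anything is to poll it over SSH and tally the output. A very rough sketch using paramiko - the host, credentials and parsing are placeholders, the exact column layout of 'get session' varies by ScreenOS version, and some older firmware only accepts commands over an interactive shell rather than an exec channel:

        # Poll a NetScreen over SSH and count sessions per source IP.
        import re
        from collections import Counter

        import paramiko

        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect("192.0.2.1", username="netscreen", password="secret")  # placeholders

        stdin, stdout, stderr = client.exec_command("get session")
        counts = Counter()
        for line in stdout:
            match = re.search(r"(\d{1,3}(?:\.\d{1,3}){3})", line)  # first IP on the line
            if match:
                counts[match.group(1)] += 1
        client.close()

        for ip, sessions in counts.most_common(20):
            print(ip, sessions)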

    Read the article

  • Windows 8 - no Internet connection to some hosts while VPN is active

    - by HTD
    I use a VPN to access the servers at work. When the VPN is active, all network traffic to the Internet passes through my company network. This worked without any problems on Windows 7; now on Windows 8 some sites have suddenly become inaccessible. Please note - I'm not trying to connect to them over RDP; they are public Internet addresses, outside the company network. They are inaccessible using any protocol, and ping returns "General failure.". I know it could be a misconfiguration on my company's server side, but it's very strange, since the same VPN connection used on Windows 7 works properly. What's wrong? Is it a Windows 8 bug, or is there something I could do on my company servers to make the VPN work as expected with Windows 8? My company network runs on Windows Server 2008 R2 and uses the Microsoft TMG firewall. I couldn't find any rules blocking the traffic to the mentioned sites; all network traffic for VPN users is passed through for all IPs and protocols. Any clues? UPDATE: Important - for one whole day it worked. I hibernated and restarted the computer, connected and disconnected the VPN - nothing could break my connection. Today it broke again, and restarting Windows didn't help. And now the solution:

        route add -p 0.0.0.0 MASK 255.255.255.255 192.168.1.1

    Oh, OK, I know what it did - it added my default gateway to the routing table. But it still didn't work sometimes, so I removed my main network gateway route with:

        route delete -p 0.0.0.0 MASK 0.0.0.0 192.168.0.1

    and added the modified one with:

        route add -p 0.0.0.0 MASK 255.255.255.255 192.168.0.1

    And it works. Now. But I don't trust this. I don't know what really happened.

    Read the article

  • IIS 7.5 error 500 in FastCGI module after upgrading WordPress to 3.0.2

    - by Maniac13
    I am running multiple WordPress blogs on the following setup: Server 2008 R2; IIS 7.5; PHP 5.3.3; MySQL 5.0.7. I upgraded my WordPress install from 2.9.2 to 3.0.2 (on 2 different sites) today and the upgrade went fine. I can serve .php pages without errors, log into the admin system etc. I can browse my blog by going directly to mywebsite.com/index.php, but when I try to go to mywebsite.com (without the index.php) I get the 500 error below. I reset IIS and removed and re-attached the default document, but I am running out of ideas. If anyone has a solution for this, that would be great. This is the 500 error I am getting:

        Error Summary
        HTTP Error 500.0 - Internal Server Error
        The page cannot be displayed because an internal server error has occurred.

        Detailed Error Information
        Module:         FastCgiModule
        Notification:   ExecuteRequestHandler
        Handler:        PHP FastCGI
        Error Code:     0x00000000
        Requested URL:  http://mywebsite.com:80/index.php
        Physical Path:  D:\mywebsite.com\index.php
        Logon Method:   Anonymous
        Logon User:     Anonymous

    Thanks, Stephan

    Read the article

  • "Path Not Found" when attempting to write to a sub folder within a mapped drive

    - by Adam
    We have an interesting issue with one of our server shares, or possibly our Win 7 desktops. When our users try to save files in a subfolder on a mapped drive on our DC, either via copy/paste or through an application, they receive an error saying "Path not found". They can, however, browse this folder and open files from it, which is where the "Path not found" error doesn't seem to stack up in my opinion. Users can save files fine in the root folder of the mapped drive; it appears to affect only subfolders. It seems to be random which users and machines this affects: a user can log on to a different machine and save in subfolders fine, on the same mapped drive. Event Viewer hasn't been much help either. Currently, the only solution we have found is to re-image the affected machines, which solves the issue. Our servers are Server 2008 R2 with Win 7 Pro desktops. Any help/pointers/suggestions would be greatly appreciated.
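
    When it next shows up on a machine, it may help to narrow down whether the failure follows the drive letter or the underlying UNC path, and whether it really is limited to subfolders. A quick repro sketch (the drive letter, server and share names are placeholders):

        # Try writing to the root and to a subfolder, via drive letter and UNC path.
        import os

        targets = [
            r"Z:\probe.txt",
            r"Z:\SomeSubFolder\probe.txt",
            r"\\dc01\share\probe.txt",
            r"\\dc01\share\SomeSubFolder\probe.txt",
        ]

        for path in targets:
            try:
                with open(path, "w") as fh:
                    fh.write("test")
                os.remove(path)
                print("OK   ", path)
            except OSError as err:
                print("FAIL ", path, "->", err)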

    Read the article

  • EC2: How dangerous is it to turn off fsck for EBS volumes?

    - by Janine
    I have been tearing my hair out trying to figure out why my EC2 instances (made from my own custom AMIs) were taking many tries to come up properly. They would fail with the following error:

        fsck.ext3: No such file or directory while trying to open /dev/sdf

    for both of the EBS volumes I was attaching during startup. Finally, I figured out the problem. I had put this in /etc/fstab:

        /dev/sdf    /export     ext3    defaults    1 2
        /dev/sdi    /export2    ext3    defaults    1 2

    The 2 tells the system to fsck the drives on the way up. Changing this to

        /dev/sdf    /export     ext3    defaults    1 0
        /dev/sdi    /export2    ext3    defaults    1 0

    avoids the problem completely, but now the volumes are never going to be fsck'd. How much does this matter? Once the instance goes into production it's going to be running pretty much 24/7, so not many fscks would be happening anyway, but still... this just feels like a bad idea. I have not been able to find anyone else even reporting this problem (there are people with the same error message, but different causes). It seems unbelievable that I could be the only person to ever make this mistake, but perhaps I'm just talented that way. :) If there is another solution to the problem I would love to hear it; I have not been able to find one.

    Read the article

  • How do I get "Back to My Mac" (using MobileMe) from Windows?

    - by benzado
    I have a MobileMe subscription and a Mac at home with "Back to My Mac" enabled. When I'm away from home, this service lets me use another Mac to connect to my Mac back home and access file sharing, screen sharing, etc. As far as I know, the service doesn't use any proprietary protocols, so in theory I should also be able to get "Back to My Mac" from a Windows PC. This MacWorld article explains how it works. Basically, it uses Wide-Area Bonjour to give your Mac a domain name like hostname.username.members.mac.com. Remote computers can find your Mac using that address, then connect to it using a private VPN. The "Wide Area Bonjour" part seems to make it a little more complicated than simply a regular domain name, though. Note that I'm not interested in using the methods described by LifeHacker, which doesn't use the MobileMe service at all. I don't want to use a totally different dynamic DNS service. I'd like to use the one I'm already paying for, or at least find out why that's not possible from Windows. Also, my primary problem is finding a network route back to my mac... once I've got that I know how to enable services so that Windows can talk to it. UPDATE: Based on some additional research, it appears that Apple is only assigning IPv6 addresses to the hostname.username.members.mac.com names. So any solution will require enabling IPv6 support on Windows, if possible.
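
    Consistent with the update above, it is easy to check from the Windows side which record types the Back to My Mac name actually publishes. A small sketch (the hostname is a placeholder following the hostname.username.members.mac.com pattern):

        # Show the IPv4 and IPv6 records the name resolves to from this machine.
        import socket

        host = "hostname.username.members.mac.com"   # substitute your own

        for family, label in ((socket.AF_INET, "IPv4"), (socket.AF_INET6, "IPv6")):
            try:
                infos = socket.getaddrinfo(host, None, family)
                addrs = sorted({info[4][0] for info in infos})
                print(label, "->", ", ".join(addrs))
            except socket.gaierror as err:
                print(label, "-> no records (%s)" % err)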

    Read the article

  • Kubuntu: apt-get install of php5-dev: libtool version mismatch?

    - by pinkgothic
    (Warning: clueless-newbism ahead.) Background info: I'm actually trying to install/upgrade xdebug. sudo pecl install xdebug yields:

        downloading xdebug-2.0.5.tgz ...
        Starting to download xdebug-2.0.5.tgz (289,234 bytes)
        ............................................................done: 289,234 bytes
        67 source files, building
        running: phpize
        sh: phpize: not found
        ERROR: `phpize' failed

    A quick google tells me that phpize is part of a package called php5-dev, so off I ran to install that. My problem is that sudo apt-get install php5-dev fails with this output:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have requested
        an impossible situation or if you are using the unstable distribution that
        some required packages have not yet been created or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
          php5-dev: Conflicts: libtool (>= 2.2) but 2.2.6a-4 is to be installed
        E: Broken packages

    2.2.6a-4 is greater than 2.2, so I'm not sure why it's hanging itself up at that point. I'm guessing the fact that it's not entirely numeric is throwing apt-get off? I can probably install xdebug manually (though I've never done this before, so picture me with a deer-in-headlights clueless-newb look here, violently shaking my head and begging for a simpler solution) rather than via pecl/aptitude, but is there a way I can make aptitude install php5-dev despite the bogus 'broken package' claim? Is it even bogus, or am I misreading the error message? Alternatively: could I install phpize in some other way (e.g. via pear or pecl)?
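
    One reading of the message that may help: "Conflicts: libtool (>= 2.2)" is a constraint declared by php5-dev itself, saying it cannot be installed alongside libtool 2.2 or newer - so the fact that 2.2.6a-4 is greater than 2.2 is exactly why apt refuses, rather than a version-comparison bug. Before deciding how to proceed, it is worth seeing which candidate versions apt is working from; a read-only check (the -s flag only simulates the install):

        # Show apt's candidate versions and simulate the install without changing anything.
        import subprocess

        for cmd in (["apt-cache", "policy", "libtool", "php5-dev"],
                    ["apt-get", "-s", "install", "php5-dev"]):
            print("$", " ".join(cmd))
            subprocess.run(cmd, check=False)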

    Read the article

  • Database types for customer analytics

    - by Drewdavid
    I am exploring a paid solution to start providing better embedded, dashboard-style analytics information to our website customers/account holders, but would also like to offer an in-house development option to our team. The more equipped I am with specifics (such as the subject of this question), the better the adoption rate from the team (or so I have found), regardless of the path we choose. Would anyone care to summarize a couple of options for a fast and scalable database type through which we would provide the following:

      • Daily pageviews to a user's account pages (users have between 1 and 1,000 pages)
      • Some calculated/compounded metrics (such as conversion rate, i.e. the ratio of a certain page type viewed to contact-form thank-you pages)
      • We have about 1,500 members (and will need room to grow); for the question's sake the number of concurrently logged-in users will be 50

    I ask because our developer has balked at providing this level of "over time" granularity (i.e. daily) due to the amount of space it would take up in a MySQL database. To avoid a downvote I have asked specifically for more than one option, realizing that different people will have different solutions. I will amend my question if so guided by answering parties. Thank you for sharing your valued answers :)
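
    On the space concern specifically, a quick back-of-envelope check with the numbers given above (plus assumed averages - adjust to real traffic) puts a concrete figure on what a daily-granularity table would hold:

        # Rough sizing for a daily per-page pageview table in MySQL.
        members = 1500
        avg_pages_per_member = 200      # assumption - members have 1 to 1,000 pages
        days_retained = 365 * 3         # assumption - keep three years of daily rows
        bytes_per_row = 40              # assumption - ids, date, counter plus index overhead

        rows = members * avg_pages_per_member * days_retained
        print("rows: {:,}".format(rows))                              # ~328 million
        print("approx size: {:.1f} GB".format(rows * bytes_per_row / 1e9))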

    Read the article

  • Error in Apache: /var/run/apache2 not found

    - by Julen
    This is more of a self-answered question, but since it drove me crazy I would like to share it with the community, and maybe someone can tell me why it happened or what caused it. The thing is, I wanted to install a CGI app on my Ubuntu 10.04 machine - one of the samples that come with the gSOAP toolkit - and my intention was to access it from an ASP.NET machine. Regular Ubuntu does not come with Apache, so I installed it from Synaptic. Pretty easy. I followed this How to Install Apache2 webserver with PHP, CGI and Perl Support in Ubuntu Server. Instead of apache.conf I tweaked httpd.conf, since a colleague here used that file to get his Apache running. He was able to access his CGI from his ASP.NET machine, but mysteriously I could not access mine; I was always getting "The request failed with HTTP status 503: Service Temporarily Unavailable". Checking Apache's error.log I found these messages:

        No such file or directory: unable to connect to cgi daemon after multiple tries: /home/julen/htdocs/cgi-bin/calcserver

    And looking more carefully, whenever I restarted Apache I got this other message:

        No such file or directory: Couldn't bind unix domain socket /var/run/apache2/cgisock
        cgid daemon failed to initialize

    I am pretty new to Ubuntu and I didn't want to believe that Apache and Synaptic had made a mistake during the installation of the server, but it is true that /var/run/apache2 was missing on my machine whereas on my colleague's it was not. I tried to find an "elegant" solution, but only found a post from 2006 with a slight reference to it. Finally I decided to create the folder myself (as root) and then everything worked fine. Hope this helps others if they encounter a similar problem. Still, I wonder why the folder was not created in the first place. Best, Julen.

    Read the article

  • How to stop/kill a virtual machine that hangs in "Stopping" state?

    - by SvetP
    Hi, I have a virtual machine that constantly hangs in the “Stopping” state. I've read several posts suggesting killing the machine's vmwp.exe process, but I've never been able to kill this process, either from Windows Task Manager or from an administrative command prompt using taskkill /PID xxxx /F, where xxxx is the process ID. The only result I get is that my machine enters the “Stopping-Critical” state. Even worse, from that point on (with a virtual machine hung while stopping) I am unable to manage (stop or start) any other virtual machine on the same host. The only “solution” for me in that case is to stop the Virtual Machine Management Service (vmms.exe) and restart the physical host; without stopping the vmms.exe service first, the physical host also hangs during the restart. Moreover, no error is logged in the Event Viewer. I've found some other posts complaining about the same problem, and in all of them the only suggestion was to kill the vmwp.exe process, which obviously didn't work for them either. Can somebody help us with this, please? Thanks

    Read the article

  • Backup with Mercurial and Robocopy?

    - by Andrew Neely
    The Problem

    We would like to back up our critical files from several network shares to a removable hard drive. We want to automate the backup so we don't have to remember to run it, and it needs to finish overnight. Furthermore, we want to be able to preserve multiple versions of each file so we can back out of our users' mistakes more easily.

    Background Information

    I work in a large Windows-based enterprise with a centralized IT section that is responsible for all backups. Their backups are geared towards disaster recovery and not user error, and require upper-level management approval for any non-disaster recoveries. Several times we have noticed that our backups have failed, and we weren't notified. I do not have administrative rights to the server or to my desktop. We are trying to back up some 198,000 files spanning about 240 gigabytes. These files rarely change. Our backup drive is one terabyte.

    My Proposed Solution

    What I would like to do is write a batch file using Robocopy with the /MIR option, along with Mercurial SCM to store all versions of the files. I would do an hg add followed by an hg commit before each execution of Robocopy to save the current state, and then make a mirrored copy of the file structure. The problem is that /MIR will delete every folder not present in the source, and Mercurial stores the repository in a .hg folder in the destination folder. Does anybody know how I could either convince Mercurial to store the .hg folder elsewhere, or convince Robocopy not to delete it from the destination? I'm trying to avoid writing a custom program to do the copying.
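
    On the Robocopy side, excluding the .hg directory with /XD may be enough: with /MIR, items excluded from the copy are also left alone in the destination, so the repository should survive the mirror - worth verifying on a small test folder first. A sketch of the nightly sequence described above, with placeholder paths:

        # Nightly snapshot-then-mirror, per the plan above. Robocopy exit codes
        # below 8 are informational, hence check=False on that step.
        import subprocess

        src = r"\\fileserver\critical"   # placeholder source share
        dst = r"E:\Backup\critical"      # placeholder folder on the removable drive

        subprocess.run(["hg", "--cwd", dst, "add"], check=False)                        # pick up new files
        subprocess.run(["hg", "--cwd", dst, "commit", "-m", "nightly snapshot"], check=False)
        subprocess.run(["robocopy", src, dst, "/MIR", "/XD", ".hg", "/R:2", "/W:5"], check=False)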

    Read the article

  • HTML tabindex: Put some links last without complete enumeration

    - by Emanuel Berg
    I know I can use the HTML anchor attribute tabindex to set the tab index of links, i.e., the order in which they get focused when the user hits Tab (or Shift-Tab). But I have a home page with tons of links, and enumerating all of those is a lot of work. The actual case is that I have four image links which by default get indexes 1, 2, 3, and 4 (well, the behavior is equivalent, at least), but I'd much rather have the first non-image link as number 1. Check it out here and you'll understand immediately. I tried giving the first non-image link (the link I desire to have tabindex 1) tabindex 1 explicitly, hoping that it would cascade from there, but it didn't (i.e., the first image link got implicit tabindex 2). I also tried giving the image links ridiculously high tabindexes, but that didn't work either: as the other links didn't have tabindexes at all, those high values were still "first". As a last resort (the solution currently employed) I gave all the image links tabindex -1. That makes for logical tabbing, but it is suboptimal, as those image links are excluded from the tab loop - a user tabbing away will probably never realize that the images are clickable. I'd like them to be reachable by tabbing, but last, after all the ordinary links. If you wonder why I'm so determined to achieve this, it has to do with my own finger habits: I almost exclusively search for links, tab back and forth, etc., and very seldom use the mouse. Note: I'll accept a script to change the actual HTML for a complete enumeration, if you convince me there is no "set" way to solve this problem.

    Read the article

  • Google chrome not accepting any security certificates

    - by Jerry
    I've recently developed a problem with Google Chrome that's really annoying. I'm using Firefox at the moment with no problems whatsoever, and it's the same with IE, so it's safe to say this problem is specific to Chrome. The problem is that it's not accepting security certificates from certain sites. I suppose the best place to start would be Google itself: I can't search. The Google search page will load, but when I type some search term into the search box and hit 'search' I get the message: "You attempted to reach www.google.com, but the server presented an invalid certificate. You cannot proceed because the website operator has requested heightened security for this domain." No matter what the search term is, this is the result. Also when I try to log in to Facebook - same message. YouTube works, as do many other sites that I know present security certs, so I'm baffled. I've searched, and there are other people who have had similar issues, but I can't find a solution anywhere. The most common answer I'm picking up for this is to "check your system time", but I can safely say that it's not my system time. If anyone knows what is going on, I'd very much appreciate being informed. It's not super urgent as I can use Firefox to access the places Chrome won't, but it IS super annoying because I can usually sort out issues like this in no time.
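
    One way to gather more evidence, independent of any browser, is to look at the certificate an affected site actually presents from this machine: an unexpected issuer usually points at antivirus or proxy software intercepting SSL, while out-of-range validity dates point back at the clock. A small sketch:

        # Inspect the certificate www.google.com presents to this machine.
        import socket
        import ssl

        host = "www.google.com"
        ctx = ssl.create_default_context()

        try:
            with socket.create_connection((host, 443), timeout=10) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    cert = tls.getpeercert()
            print("issuer:   ", cert.get("issuer"))
            print("notBefore:", cert.get("notBefore"))
            print("notAfter: ", cert.get("notAfter"))
        except ssl.SSLError as err:
            print("TLS handshake failed:", err)   # the error text often names the real cause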

    Read the article

  • Maxtor 500GB external hard drive not being detected but power is going to it?

    - by ClarkeyBoy
    I have 2 * Maxtor Onetouch 4 Lite 500GB external hard drives (part no. 9NT2A4-500). They both used to work fine on my old laptop (an Acer) but I have not used them for about a year, since my laptop was stolen and I got this one (also an Acer [Aspire 7738G]). I have one plugged into the mains with one of the leads I believe was supplied with them. It appears to be receiving power as it is warm and the power light (on the unit itself) is on; also the mains adapter is fairly warm. I also have it plugged into my laptop with a USB lead which I have tested on my mp3 player (so I know it works). However my hard drive is not showing on my computer. I have tried checking for new hardware, installing the software that was supplied with it, checking drive letters in case it is registered as C: or something stupid, checking for problems etc... I can't find any cause for it to do this. It does appear to be starting up and, possibly, shutting down and restarting constantly (that's what it sounds like altho I can't be certain). I have had both hard drives stored in different places for the last year and they're both doing the same thing.. if it was only one then I'd guess it had got damaged or corrupted or something but since it is both I doubt this is it. The only things in common with both of them are the leads and the laptop, however I know the USB lead works and guess the mains lead works as there is power going to the unit. Has anyone come across this before or does anyone have any idea what the cause / solution to the problem is? Any help would be greatly appreciated. Regards, Richard

    Read the article

  • VPN Error 691 but server says authenticated on server

    - by Andy
    Hello all, I have a problem with a VPN connection on Windows XP SP3 that appears to be related to an account (maybe privileges, or an option that I have missed). When connecting with my account, which is a domain administrator account, the VPN connects fine. However, an account created for another person receives Error 691: Username or Password is not valid for this domain. On the domain controller (Windows 2003) I see a logon successful message:

        User DOMAIN\user was granted access.
        Fully-Qualified-User-Name = int.company.net.au/People/Management/User
        NAS-IP-Address = 10.30.0.3
        NAS-Identifier = not present
        Client-Friendly-Name = MelbourneCore
        Client-IP-Address = Router-ip
        Calling-Station-Identifier = not present
        NAS-Port-Type = Virtual
        NAS-Port = 77
        Proxy-Policy-Name = Use Windows authentication for all users
        Authentication-Provider = Windows
        Authentication-Server = undetermined
        Policy-Name = Remote VPN Access
        Authentication-Type = MS-CHAPv1
        EAP-Type =

    Does anyone have any ideas as to where else I should look for a solution? If I use the wrong password it gives a logon failure error in the event viewer, and removing the user from the remote access group also gives a logon failure error. Nothing appears in the event viewer on the local machine. In the past all that was required was to add the user to our Remote Access Users group. Any help?

    Read the article

  • What is the best storage server infrastructure? DAS/NAS/SAN or installing GlusterFS/Lustre/HDFS/RBDB

    - by TORr0t
    I am trying to design an infrastructure for the project I am working on. It would be a file-sharing/downloading project (like RapidShare), and I would need large storage sizes and good scalability; I would add new storage nodes as the project grows. I have come up with a few candidate solutions, which are to use Lustre, GlusterFS, HDFS, or RBDB. To start, I would have 2 servers: one server for the GlusterFS client + web server + DB server + a streaming server, and the other server as a GlusterFS storage node. (After some time, I would add more node servers and client servers - I don't know yet how many new client servers to add; I'll see later.) So I am thinking of working with GlusterFS. But I really wonder whether I have to use high-performance servers with large storage sizes, or whether average/slower servers with large storage sizes will do. Or are NAS/DAS/SAN solutions better for GlusterFS storage nodes? I might buy a NAS and install GlusterFS onto it. I would be happy to hear your recommendations for the server specifications (for the clients and the nodes); I really don't know if I need a large amount of RAM and good CPUs for the nodes - I am sure I need them for the client servers. The files would be streamed as well, so automatic file replication is important; my system should work like a cloud, so that when needed, according to high traffic, the storage nodes copy the most-demanded files to be streamed, which would help me get rid of scalability problems and let my visitors stream/download those files. Also, I am open to your experiences/thoughts about any good solution. Lustre, HDFS, and RBDB are the other options, and I would be happy to hear your thoughts on them here. I would be very happy to hear back from anyone who comments on anything I have written here. Thanks

    Read the article

  • Menu command stuck on screen

    - by 280Z28
    For some reason, periodically when I select a menu command, the command label gets "stuck" on the screen and won't go away. I can close all open applications, including whichever one I was using when it got stuck, but it still won't go away. In the screenshot below, I opened an new instance of IE just to show how the label stays on top. The label was not created by this instance of IE. Edit with the source: The label that gets stuck is the first menu command I select in IE. If a label is already stuck, a new one does not get stuck (regardless of which instance(s) of IE are involved). Based on this knowledge, I now just open IE on my secondary monitor, carefully open the context menu so the Properties command is in the bottom corner, and click it. This is not a solution... The label never moves and is transparent to mouse input (if I click it, it's as if I clicked the item behind it). The label does not go away if I close all running applications. I haven't tried stopping services or closing system tray items like Live Mesh. The label does go away if I change the screen resolution and then change it back. Any ideas how I can stop this from happening? It's happened a half dozen times since yesterday and it's becoming quite disrupting to my work. Obviously I added the circle in MS Paint. That part isn't stuck. ;)

    Read the article

  • MDT 2010 Litetouch.vbs Fails to Launch

    - by Mitch
    I have the custom image captured. I import the image and files, and prepare the customsettings.ini and the boot.ini to minimize the questions the deployment team will need to answer. Everything works like a charm on virtual machines, but when I map to the scripts folder on the deployment share and double-click litetouch.vbs, it creates the c:\minint folder, subfolders, and a couple of log files - then nothing. Here's what the log files look like:

        <![LOG[Property LogPath is now = C:\MININT\SMSOSD\OSDLOGS]LOG]!><time="15:54:28.000+000" date="03-08-2011" component="LiteTouch" context="" type="1" thread="" file="LiteTouch">
        <![LOG[Property CleanStart is now = ]LOG]!><time="15:54:28.000+000" date="03-08-2011" component="LiteTouch" context="" type="1" thread="" file="LiteTouch">
        <![LOG[Microsoft Deployment Toolkit version: 5.1.1642.01]LOG]!><time="15:54:28.000+000" date="03-08-2011" component="LiteTouch" context="" type="1" thread="" file="LiteTouch">
        <![LOG[Property Debug is now = FALSE]LOG]!><time="15:54:28.000+000" date="03-08-2011" component="LiteTouch" context="" type="1" thread="" file="LiteTouch">
        <![LOG[GetAllFixedDrives(False)]LOG]!><time="15:54:28.000+000" date="03-08-2011" component="LiteTouch" context="" type="1" thread="" file="LiteTouch">

    Has anyone encountered this before, or does anyone know what might be happening (or not happening) and can point me in the right direction? I've only found a couple of other references to this anywhere, and they had no solution or cause listed either. I'm stumped.

    Read the article

  • Proper Network Infrastructure Setup: DMZ, VPN, Routing Hardware Question

    - by NickToyota
    Greetings Server Fault universe. Here's a quick background: two weeks ago I started a new position as the systems administrator for an expanding health services company of just over 100 people. The individual I am replacing left the company with little to no notice. Basically, I have inherited a network consisting of one main HQ (where I am situated), which has existed for over 10 years, and five smaller offices (fewer than 20 people each). I am trying to make sense of the current setup. The network at the HQ includes:

      • A Linksys RV082 router providing Internet access for employees and site-to-site VPN connecting the smaller offices (each using an RV042). We have both cable and DSL lines connected to balance traffic (however, this does not work at all and is not my main concern right now).
      • A Cisco IronPort appliance. This is the main gateway for our incoming and outgoing email; it has both an external IP and an internal IP.
      • Lotus Domino inbound and outbound email servers connected to the mentioned Cisco gateway. These also have external and internal IPs.
      • Two Windows 2003 and 2008 boxes running as domain controllers, with DNS of course. These also have both external and internal IPs.
      • Website and webmail servers, also on both external and internal IPs.

    I am still confused as to why there are so many servers connected directly to the Internet. I am seriously looking to redesign this setup with proper security practices in mind (my highest concern) and need a proper firewall setup for the external/internal servers, along with a VPN solution for about 50 employees. Budget is not a concern, as I have been given some flexibility to purchase the necessary solutions. I have been told a Cisco ASA appliance may help. Does anyone out in the Server Fault universe have some recommendations? Thank you all in advance.

    Read the article

  • Avoiding DNS timeouts when a dns server fails

    - by Neil Katin
    We have a small datacenter with about a hundred hosts pointing to 3 internal DNS servers (BIND 9). Our problem comes when one of the internal DNS servers becomes unavailable: at that point all the clients that point to that server start performing very slowly. The problem seems to be that the stock Linux resolver doesn't really have the concept of "failing over" to a different DNS server. You can adjust the timeout and number of retries it uses (and set rotate so it will work through the list), but no matter what settings one uses, our services perform much more slowly if a primary DNS server becomes unavailable. At the moment this is one of the largest sources of service disruptions for us. My ideal answer would be something like "RTFM: tweak /etc/resolv.conf like this...", but if that's an option I haven't seen it. I was wondering how other folks handle this issue? I can see 3 possible types of solutions:

      • Use linux-ha/Pacemaker and failover IPs (so the DNS VIPs are "always" available). Alas, we don't have a good fencing infrastructure, and without fencing Pacemaker doesn't work very well (in my experience Pacemaker lowers availability without fencing).
      • Run a local DNS server on each node, and have resolv.conf point to localhost. This would work, but it would give us a lot more services to monitor and manage.
      • Run a local cache on each node. Folks seem to consider nscd "broken", but dnrd seems to have the right feature set: it marks DNS servers as up or down, and won't use "down" DNS servers.

    Anycast seems to work only at the IP routing level, and depends on route updates for server failure. Multicast seemed like it would be a perfect answer, but BIND does not support broadcasting or multicasting, and the docs I could find seem to suggest that multicast DNS is more aimed at service discovery and auto-configuration than at regular DNS resolving. Am I missing an obvious solution?
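
    Both the local-server and local-cache options above boil down to something that notices a dead nameserver quickly. A minimal sketch of that health check, which sends one bare UDP query to each server listed in /etc/resolv.conf and reports which ones answer within a second (the query name is a placeholder):

        # Probe every nameserver in /etc/resolv.conf with a single DNS query.
        import socket
        import struct

        def probe(server, name="example.com", timeout=1.0):
            # Bare-bones DNS query: header + QNAME + QTYPE A + QCLASS IN
            header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
            qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split("."))
            query = header + qname + b"\x00" + struct.pack(">HH", 1, 1)
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.settimeout(timeout)
            try:
                sock.sendto(query, (server, 53))
                sock.recvfrom(512)
                return True
            except OSError:
                return False
            finally:
                sock.close()

        with open("/etc/resolv.conf") as fh:
            servers = [line.split()[1] for line in fh if line.strip().startswith("nameserver")]

        for server in servers:
            print(server, "up" if probe(server) else "DOWN")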

    Read the article
