Search Results

Search found 46833 results on 1874 pages for 'distributed system'.

Page 781/1874 | < Previous Page | 777 778 779 780 781 782 783 784 785 786 787 788  | Next Page >

  • User authentication -- username mismatch in IIS in ASP.NET application

    - by Cory Larson
    Last week, an employee's Active Directory username was changed (or a new one was created for them). For the purposes of this example, let's assume these usernames: Old: Domain\11111, New: Domain\22222. When this user now logs in using their new username, and attempts to browse to any one of a number of ASP.NET applications using only Windows Authentication (no Anonymous enabled), the system authenticates but our next layer of database-driven permissions prevents them from being authorized. We tracked it down to a mismatch of usernames between their logon account and who IIS thinks they are. Below are the outputs of several ASP.NET variables from apps running in a Windows 2008 IIS7.5 environment:

      Request.ServerVariables["AUTH_TYPE"]:                    Negotiate
      Request.ServerVariables["AUTH_USER"]:                    Domain\11111
      Request.ServerVariables["LOGON_USER"]:                   Domain\22222
      Request.ServerVariables["REMOTE_USER"]:                  Domain\11111
      HttpContext.Current.User.Identity.Name:                  Domain\11111
      System.Threading.Thread.CurrentPrincipal.Identity.Name:  Domain\11111

    From the above, I can see that only the LOGON_USER server variable has the correct value, which is the account the user used to log on to their machine. However, we use the AUTH_USER variable for looking up the database permissions. In a separate testing environment (completely different server: Windows 2003, IIS6), all of the above variables show Domain\22222. So this seems to be a server-specific issue, like the credentials are somehow getting cached either on their machine or on the server (the former seems more plausible). So the question is: how do I confirm whether it's the user's machine or the server that is botching the request? How should I go about fixing this? I looked at the following two resources and will be giving the first one a try shortly:
      http://www.interworks.com/blogs/jvalente/2010/02/02/removing-saved-credentials-passwords-windows-xp-windows-vista-or-windows-7
      http://stackoverflow.com/questions/2325005/classic-asp-request-servervariableslogon-user-returning-wrong-username/5299080#5299080
    Thanks.
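
    A quick way to test the "cached credentials" theory on the client side - purely a sketch, assuming a Windows 7 client where these tools ship with the OS, and a made-up target name - is to list and clear any stored credentials and Kerberos tickets before the user hits the app again:

      rem list credentials stored by Credential Manager
      cmdkey /list

      rem delete a stale entry for the web server, if one shows up
      rem (the target name here is hypothetical)
      cmdkey /delete:webserver.domain.local

      rem purge cached Kerberos tickets for the current logon session
      klist purge

    If AUTH_USER still comes back as Domain\11111 after that, the stale mapping is more likely to live on the IIS box than on the client.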

    Read the article

  • Small Business HP Virtualisation and iSCSI SAN Options

    - by Robin Day
    We are a small business that hosts our core product on a number of HP servers. Our core production setup is:
      - 1x HP DL380, high powered, for a SQL Server database
      - 1x HP DL360, mid powered, for our core application server
      - 6x HP DL320, low powered, for our front ends
    We run our training / testing / support systems on a similar setup; the servers are just older and less powerful. Unfortunately this is now causing us issues as the system has grown beyond the capabilities of these older servers. Upgrading these servers would be expensive and we believe that virtualisation is probably the way to go for the future. Locally we run a number of test / dev environments on ESXi using direct storage on a couple of high powered DL360s, and these are performing fairly well. We're thinking that instead of replacing all of our test servers we can implement an iSCSI SAN and one or two high powered hosts, hopefully so that when it comes to replacing our live servers as well we can just expand the virtual environment to cope. So my question is: can anyone offer any advice on some suitable options? We have generally always been extremely happy with HP servers and all of our kit is currently HP, so our preference would be to stick with HP; however, I'm always happy to hear about other options. I'm hoping that initially a budget of around 15-25k (GBP) would be suitable; this could potentially be increased if I had confidence that the system would pave the way for a cost-effective upgrade of our live systems in the future as well. I am new to SANs and my only real experience is playing with OpenFiler on some old desktops. I think iSCSI should be suitable, but I've not done any research into how SQL Server may perform. I've had a browse through HP's sites and see plenty of information about EVA, MSA, LeftHand, etc. However, from looking at all that, I don't see which options would be best and, more importantly, I don't know exactly what I would need to buy. Any help, links, opinions would be much appreciated. Thanks

    Read the article

  • Postfix spool on ext3 optimizations in >=linux-2.6.34 days

    - by Luke404
    Given the very specific nature of the subject (we're not talking about mailboxes, just the spool; we're not talking about other filesystems, just ext3; and so on...) and the maturity of the software involved (Linux kernel, ext3fs, Postfix), I'd think there should be a more or less agreed-on set of best practices for filesystem-related tuning. I'm trying to get a roundup of them:
      - data=journal became the default in recent kernels (somewhere around 2.6.30 IIRC) so we should be OK with that
      - Wietse Venema says atime must be on, but the Postfix documentation recommends noatime while talking about the Incoming Queue. Does that mean that Postfix needs atime on just for some queue directories and will benefit from noatime on the others? Can we use noatime if we just don't use ETRN?
      - the filesystem can be mounted nodev,noexec,nosuid - the no* options won't prevent you from setting attributes (Postfix uses the exec attr), they just won't have any effect (we don't run anything from the spool)
      - the fsync() issue cited by Wietse and/or the chattr -S are probably linked to the sync/async options of ext3fs, but I do not understand them enough. Is mounting the filesystem with the async option equivalent to chattr -R -S on the whole fs? It seems like it will increase performance, but will that pose a risk of "loss of mail after a system crash" or is it really "safe on /var/spool/postfix"?
      - would you tune anything else on postfix-2.6.x to work better on ext3, or do you leave defaults everywhere?
      - is there a "best" Linux I/O scheduler for this kind of workload (namely CFQ or deadline?) or is that something that will vary too much based on hardware configuration?
      - would you tune anything else in the filesystem or in the kernel?
      - anything else?
    References: Postfix Performance here on SF; Postfix documentation about the Incoming Queue; Wietse Venema in Best file system on [email protected] here; Postfix and ext3 on [email protected] here and there
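
    For concreteness, a sketch of the kind of fstab line being discussed, assuming a dedicated partition for the spool (the device name is made up, and whether noatime is really safe for the whole spool is exactly the open question above):

      # /etc/fstab - hypothetical dedicated spool partition
      /dev/sdb1  /var/spool/postfix  ext3  data=journal,nodev,noexec,nosuid,noatime  0 2

      # apply changed options without a reboot
      mount -o remount /var/spool/postfix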

    Read the article

  • How can I format an SD card with a more robust Linux-usable filesystem with a specific cluster size for better write performance?

    - by Harvey
    Goal: microSD card formatted...
      - for best write performance
      - for use only with embedded Linux
      - for better reliability (random power failures may occur)
      - using a 64 kB cluster size
    I'm using an 8 GB microSD card for data storage inside an embedded Linux/ARM device. The SD card is not removable. I've been using ext3 instead of the pre-installed FAT32 because it seems to better handle random power failures during writes. However, I kept noticing that my write performance is always best with the pre-installed FAT32 from Kingston. If I reformat the card with FAT32, the performance still suffers. After browsing Wikipedia, I stumbled upon the following comment saying that some cards are optimized for specific cluster sizes. In my case, the Kingston comes pre-formatted for a 64 kB cluster size.
      "Risks of reformatting: Reformatting an SD card with a different file system, or even with the same one, may make the card slower, or shorten its lifespan. Some cards use wear leveling, in which frequently modified blocks are mapped to different portions of memory at different times, and some wear-leveling algorithms are designed for the access patterns typical of the file allocation table on a FAT16 or FAT32 device.[60] In addition, the preformatted file system may use a cluster size that matches the erase region of the physical memory on the card; reformatting may change the cluster size and make writes less efficient."
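
    A minimal sketch of how a specific allocation size can be requested at format time, assuming 512-byte logical sectors and a hypothetical device name; ext3 on Linux cannot use blocks larger than 4 kB, so the closest ext3 equivalent is an alignment hint rather than a true 64 kB cluster:

      # FAT32 with 128 sectors per cluster = 64 kB clusters (512 B x 128)
      mkfs.vfat -F 32 -s 128 /dev/mmcblk0p1

      # ext3: 4 kB blocks, with the allocator hinted to work in 64 kB units (16 x 4 kB)
      mkfs.ext3 -b 4096 -E stride=16,stripe-width=16 /dev/mmcblk0p1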

    Read the article

  • Conditionally permitting HTTP-only requests to Tomcat?

    - by Mike
    I have 2 versions of a system:
      1. Tomcat webserver
      2. Nginx reverse-proxy sitting in front of a Tomcat webserver
    In version 2, nginx only ever talks to Tomcat over HTTP. A user could configure the system so that only HTTPS requests are allowed. If the user does this in version 1, the XML configuration files for Tomcat take care of it. In version 2, nginx takes care of it. The problem is this: I cannot force a user to update their Tomcat XML config files when they upgrade from version 1 to version 2 (it will be recommended that they do so) because this is done as part of a larger process. This means that if they upgrade and don't update the Tomcat config, an HTTPS request will arrive at nginx, which will proxy it over HTTP to Tomcat, which will reject the request because it is not HTTPS. So I can't force an update to the Tomcat XML, and I have to use HTTP between nginx and Tomcat. Any ideas? Is there some way I can affect how Tomcat reads its config in version 2 so that it ignores the HTTPS-only section?
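
    Not an answer to the "can't touch Tomcat's config" constraint, but for reference, a sketch of the usual way a terminating proxy tells the backend that the original request was HTTPS (the nginx directive names are real; the upstream address is made up):

      location / {
          proxy_pass         http://127.0.0.1:8080;
          proxy_set_header   Host               $host;
          proxy_set_header   X-Forwarded-Proto  $scheme;
      }

    On the Tomcat side this header is typically consumed by something like RemoteIpValve, which again means touching Tomcat's config, so it only helps where that recommendation can actually be enforced.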

    Read the article

  • Tracking down source of duplicate email messages in Outlook / Exchange environment

    - by Ken Pespisa
    I have a few users, who are also BlackBerry users, that occasionally have duplicate emails generated from their "mailbox". I put mailbox in quotes because I'm not exactly sure where the duplicates are created. One of these users is in non-cached mode, and the other is in cached mode, and both experience the problem. In fact, the non-cached mode user was originally experiencing the problem while in cached mode, and I made the switch a few weeks ago to attempt to solve the problem. Today I discovered the issue still exists. I'm not sure if the fact that they are BlackBerry users could be causing the problem at all. I don't see how, but felt I should mention it anyway.
    Does anyone have ideas on how I might begin to troubleshoot this? I can see in the non-cached user's mailbox "Sent Items" that the message was sent only once. I confirmed the message does not state that there was a conflict, and in fact that makes sense because they are in non-cached mode. On the server, we have a mail journaling feature turned on for our third-party mail archiving system, and I can see that that system sees two sent messages. And likewise, the recipient does in fact have two messages in their inbox with consecutive message IDs ([email protected]) and ([email protected]). It would seem to me that the duplicates are generated on the client, but is there a way to tell for sure?
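
    If the Exchange side is 2007 or later (an assumption - 2003 has no management shell), message tracking can show whether the client actually submitted the message once or twice; a sketch, with the sender address, subject and time window as placeholders:

      # was the message submitted to transport once or twice?
      Get-MessageTrackingLog -Sender user@example.com -MessageSubject "Subject here" `
          -EventId SUBMIT -Start "06/01/2011 08:00" -End "06/01/2011 18:00" |
          Format-Table Timestamp,EventId,Source,MessageId -AutoSize

    One SUBMIT event followed by two SEND/RECEIVE entries would point at transport or journaling; two SUBMITs would point back at the client side.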

    Read the article

  • How does it hurt to use Linux (Ubuntu) as a guest OS for all my tasks?

    - by sauparna
    I have a machine running Windows, where the disk has two partitions, C (50 GB) and D (250 GB). I do research in Information Retrieval and need to work with a large corpus (more than 50 GB) and in Linux. So if I want to install Linux on the existing system, keeping the Windows installation intact, will it be fine to run it in a virtual box? (say, QEMU, VMware, etc.) An alternative is using Wubi. In that case the Linux installation has to be on drive C. Then, if I keep a small Linux installation (say 5 GB) on C, and my corpus on D (mounted in Linux), how will it affect the performance of my programs which would be accessing the mounted Windows drive D? Is it feasible to use Linux this way? Which of the above is better, if at all they are a way out?
    Note: Since my post in July 2010, I have been using and have tried several ways of maintaining a disk image that I can mount in Linux. I had a 100 GB qcow2 disk and a 100 GB raw disk, both formatted to an EXT3 file system. I was mounting and connecting to the qcow2 disk using qemu-nbd. The problem was that every now and then, the connection to the disk would get lost and the running programs would throw disk I/O errors. The raw disk would mount and work fine as a loop mounted device, but when writing data to it, the mount.ntfs program would hog the CPU and the process would take an enormous amount of time. I was in fact running make on a piece of software located on this raw disk, and after a point of time make was waiting while mount.ntfs would show 100% CPU usage.
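
    One commonly used alternative to a loop-mounted image, assuming the guest runs under VirtualBox with Guest Additions installed (the VM name and share name below are made up), is to expose the corpus directory on D: as a shared folder rather than as a disk image:

      rem on the Windows host
      VBoxManage sharedfolder add "ubuntu-vm" --name corpus --hostpath "D:\corpus"

      # inside the Linux guest
      sudo mkdir -p /mnt/corpus
      sudo mount -t vboxsf corpus /mnt/corpus

    Throughput still goes through the host's NTFS driver, so for heavy random I/O a native ext3 partition (or a raw disk handed through to the guest) is usually the faster option.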

    Read the article

  • No apparent reason for high load average

    - by Oz.
    We have several web servers running on Amazon (EC2) c1.xlarge, over Amazon AMI. The servers are duplicates of each other, running the exact same hardware and software. Each server spec is:
      - 7 GB of memory
      - 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
      - 1690 GB of instance storage
      - 64-bit platform
      - I/O Performance: High
      - API name: c1.xlarge
    A couple of weeks ago we ran a yum upgrade on one of the servers. Starting with this upgrade, the upgraded server started showing a high load average. Needless to say, we did not update the other servers and we cannot do so until we understand the reason for this behavior. The strange thing is that when we compare the servers using top or iostat, we cannot find the reason for the high load. Note that we have moved traffic from the "problematic" server to the others, which has made the "problematic" server less crowded in terms of requests, and still its load is higher. Do you have any idea what could it be, or where else we can check? Many thanks for the help! Oz.

      # proper server - w command
      00:42:26 up 2 days, 19:54, 2 users, load average: 0.41, 0.48, 0.49
      USER  TTY    FROM          LOGIN@  IDLE   JCPU   PCPU   WHAT
            pts/1  82.80.137.29  00:28   14:05  0.01s  0.01s  -bash
            pts/2  82.80.137.29  00:38   0.00s  0.02s  0.00s  w

      # proper server - iostat command
      Linux 3.2.12-3.2.4.amzn1.x86_64 _x86_64_ (8 CPU)
      avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                 9.03   0.02     4.26     0.17    0.13  86.39
      Device:   tps    Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
      xvdap1    1.63   1.50        55.00       367236    13444008
      xvdfp1    4.41   45.93       70.48       11227226  17228552
      xvdfp2    2.61   2.01        59.81       491890    14620104
      xvdfp3    8.16   14.47       94.23       3536522   23034376
      xvdfp4    0.98   0.79        45.86       192818    11209784

      # problematic server - w command
      00:43:26 up 2 days, 21:52, 2 users, load average: 1.35, 1.10, 1.17
      USER  TTY    FROM          LOGIN@  IDLE   JCPU   PCPU   WHAT
            pts/0  82.80.137.29  00:28   15:04  0.02s  0.02s  -bash
            pts/1  82.80.137.29  00:38   0.00s  0.05s  0.00s  w

      # problematic server - iostat command
      Linux 3.2.20-1.29.6.amzn1.x86_64 _x86_64_ (8 CPU)
      avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                 7.97   0.04     3.43     0.19    0.07  88.30
      Device:   tps    Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
      xvdap1    2.10   1.49        76.54       374660    19253592
      xvdfp1    5.64   40.98       85.92       10308946  21612112
      xvdfp2    3.97   4.32        93.18       1087090   23439488
      xvdfp3    10.87  30.30       115.14      7622474   28961720
      xvdfp4    1.12   0.28        65.54       71034     16487112
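
    Since top and iostat look similar on both boxes, one thing worth ruling out - a generic sketch, nothing specific to this setup - is tasks stuck in uninterruptible sleep (state D), which raise the load average without consuming CPU or necessarily showing up as %iowait on the device being watched:

      # any processes in uninterruptible sleep right now, and what are they waiting on?
      ps -eo state,pid,user,wchan:30,cmd | awk '$1 == "D"'

      # per-CPU, memory and swap view over a few seconds
      vmstat 1 5
      mpstat -P ALL 1 5

    The kernels also differ (3.2.12 vs 3.2.20 Amazon builds), so pinning down what the yum upgrade actually changed (yum history, if that AMI's yum supports it) is another cheap check.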

    Read the article

  • RAID6 mdraid -> LVM -> EXT4 root with GRUB2?

    - by Rotonen
    2012-03-31 Debian Wheezy daily build in VirtualBox 4.1.2, 6 disk devices. My steps to reproduce so far:
      1. Set up one partition per disk, using the entire disk, as a physical volume for RAID
      2. Set up a single RAID6 mdraid array out of all of those
      3. Use the resulting md0 as the only physical volume for the volume group
      4. Set up your logical volumes, filesystems and mount points as you wish
      5. Install your system
    Both / and /boot will be in this stack. I've chosen EXT4 as my filesystem for this setup. I can get as far as the GRUB2 rescue console, which can see the mdraid, the volume group and the LVM logical volumes (all named appropriately on all levels) on it, but I cannot ls the filesystem contents of any of those and I cannot boot from them. As far as I can see from the documentation, the version of GRUB2 shipped there should handle all of this gracefully. http://packages.debian.org/wheezy/grub-pc (1.99-17 at the time of writing.) It is loading the ext2, raid, raid6rec, dosmbr (this one is in the list of modules once per disk) and lvm modules according to the generated grub.cfg file. Also it is defining the list of modules to be loaded twice in the generated grub.cfg file, and according to quick Googling around this seems to be the norm and OK for GRUB2. How do I get further by getting GRUB2 to actually be able to read the contents of the filesystems and boot the system? What am I wrong about in my assumptions of functionality here?
    EDIT (2012-04-01): My generated grub.cfg: http://pastie.org/3708436 It seems it first makes my /usr logical volume the root, and that might be the source of the failure? A grub-mkconfig bug? Or is it supposed to get access to stuff from /usr before / and /boot? /boot is on / for me - no separate boot logical volume.
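
    For poking at this from the rescue prompt, a generic sketch (device naming differs between GRUB2 builds, so ls with no arguments is the authoritative list of what GRUB can actually see):

      grub rescue> ls
      grub rescue> set                 # shows the current prefix= and root=
      grub rescue> insmod normal       # fails if prefix/root point somewhere unreadable
      grub rescue> normal

    If insmod normal fails, resetting prefix and root to whatever ls reported for the /boot location and retrying is the usual next step; if even ls cannot show file listings inside those devices, the installed core image is probably missing a needed module and re-running grub-install from the installer's rescue mode is the more promising route.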

    Read the article

  • Prevalent, recurring hard drive failure on Intel MacBooks from 2006/2007

    - by SimonSalman
    Hi, Long story: My MacBook's hard drive failed one year ago, just a month after its warranty ended, or: a year and a month after I bought it. After about ten phone calls to Apple's service, they agreed to extend the warranty for another year, so that I got it replaced free of charge. In the meantime, I got to know many MacBook users that experience/report hard drive failures:
      - Every reported crash was preceded by a slowdown of system performance, an increased occurrence of the spinning beach ball wait cursor, and frequent crashes of applications that used to run very robustly until then.
      - It happened (as far as I know) with MacBooks from 2006/2007.
      - All these MacBooks additionally suffer from a recurring wearing down of their "top case".
      - Many heavy users had to replace their HDDs three times since 2006/2007, resulting in a head crash, making it impossible to recover data (diagnosis of recovery specialists).
      - In most but not all cases the HDD was Seagate (which doesn't necessarily have to be the cause, if the majority of that MacBook batch contained Seagate drives).
    And right now (one year after my first disk crash), these symptoms are cumulating on my system, again...
    Short version: prevalent hard drive failure in the MacBook batch from 2006/2007 (i.e. 2.16 GHz Intel Core 2 Duo). I am looking for any (preferably open source) tool for checking the hard drive condition, especially to detect the known "MacBook problem", so that I can replace the disk in time. If any Mac user has found a solution to prevent the repeated failure of their hard drive, I would be very glad to get to know it. I really enjoy my old MacBook, but I hate to get interrupted every year by an HDD crash. BTW, the issue has already been in discussion for a long time, but there seems to be no solution so far. Thanks, Simon
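
    An open-source option that fits the ask, assuming the disk is on the internal SATA bus (SMART over USB enclosures is hit-and-miss) and that smartmontools is installed, e.g. via MacPorts - a sketch:

      sudo port install smartmontools      # or build from source / use Homebrew

      # overall health, error log and attributes such as Reallocated_Sector_Ct
      sudo smartctl -a /dev/disk0

      # kick off a long self-test and check the result later
      sudo smartctl -t long /dev/disk0
      sudo smartctl -l selftest /dev/disk0

    A rising reallocated/pending sector count is the usual early warning that it is time to clone the drive.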

    Read the article

  • Is Gmail Being Blocked by my ISP (wait till you read this)?

    - by James
    This is the strangest thing I have ever encountered. I have a desktop on which I cannot access Gmail and also YouTube sign-in (I believe since YouTube is owned by Google they both use the same sign-in system). So okay, maybe my ISP is blocking these for some reason, or maybe my firewall is, or maybe there is something wrong with my connectivity, right? NO. On other computers that use the same connection via a wireless router I can access both Gmail and YouTube sign-in just fine. On this computer, which doesn't have a wireless card and so has to connect via Ethernet cable (connected to a USB converter since the Ethernet port doesn't work anymore), I can access all sites and services including things like AOL and Hotmail. But only when it comes to Gmail do I get complete and utter throttling. I even turned off my AV and firewall momentarily and no luck. The Gmail page starts to load and by mid point it just stays there loading and loading and loading... it never ends. I tried everything, I reset the modem and router multiple times. I reinstalled my operating system from Vista to Windows 7 hoping a complete reinstall would solve the issue, but no luck. So can anyone for the life of them figure out why this could be? And yes, I am going to call my ISP, but not to solve this issue - to cancel them. I want to upgrade to cable from DSL anyway. I didn't mention my ISP because I'm not sure if that is within the rules (if it's okay someone let me know and I will).
    P.S. All this happened in one day; before that, Gmail was perfectly accessible on this computer. I can't remember anything special that happened on that day prior to this. The only thing I can think of is that my ISP or Google itself is blocking this computer based on its MAC address, but I don't know if that's even done.
    Additional info: PC: Windows 7 Ultimate 32 bit. Connection type: DSL. Connecting medium: Ethernet cable via USB converter.
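
    A page that half-loads and then stalls on one machine only is a classic MTU/fragmentation symptom on DSL (PPPoE), so a short generic diagnostic sketch before blaming the ISP:

      ipconfig /flushdns
      nslookup mail.google.com
      nslookup mail.google.com 8.8.8.8

      rem largest unfragmented payload: 1472 works on a clean 1500-byte MTU path,
      rem PPPoE links often top out around 1464
      ping -f -l 1472 www.google.com
      ping -f -l 1464 www.google.com

      rem also worth a glance: stale entries in the hosts file
      notepad %SystemRoot%\System32\drivers\etc\hosts

    If the second nslookup (against a public resolver) returns different addresses than the first, the ISP's DNS is the next thing to test.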

    Read the article

  • How to read cell data in excel and output to command prompt

    - by Max Ollerenshaw
    Hi All, I'm a sys admin and I am trying to learn how to use PowerShell... I have never done any type of scripting or coding before and I have been teaching myself online by learning from the TechNet script centre and online forums. What I am trying to accomplish is to open an Excel spreadsheet, get information from it (usernames and passwords) and then output it to the command prompt in PowerShell. Whenever I try to do this I get an Exception calling "InvokeMember". Anyway, here is the code I have so far:

      function Invoke([object]$m, [string]$method, $parameters)
      {
          $m.PSBase.GetType().InvokeMember(
              $method, [Reflection.BindingFlags]::InvokeMethod,
              $null, $m, $parameters, $ciUS )
      }

      $ciUS = [System.Globalization.CultureInfo]'en-US'

      $objExcel = New-Object -comobject Excel.Application
      $objExcel.Visible = $False
      $objExcel.DisplayAlerts = $False

      $objWorkbook = Invoke $objExcel.Workbooks.Open "C:\PS\User Data.xls"
      Write-Host "Numer of worksheets: " $objWorkbook.Sheets.Count

      $objWorksheet = $objWorkbook.Worksheets.Item(1)
      Write-Host "Worksheet: " $objWorksheet.Name

      $Forename = $objWorksheet.Cells.Item(2,1).Text
      $Surname = $objWorksheet.Cells.Item(2,2).Text
      Write-Host "Forename: " $Forename
      Write-Host "Surname: " $Surname

      $objExcel.Quit()
      If (ps excel) { kill -name excel }

    I have read many different posts on forums and articles on how to try and get around the en-US problem but I cannot seem to get around it and hope that someone here can help! Here is the Exception problem I mentioned:

      Exception calling "InvokeMember" with "6" argument(s): "Method 'System.Management.Automation.PSMethod.C:\PS\User Data.x
      ls' not found."
      At C:\PS\excel.ps1:3 char:33
      + $m.PSBase.GetType().InvokeMember <<<< (
          + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
          + FullyQualifiedErrorId : DotNetMethodException
      Numer of worksheets:
      You cannot call a method on a null-valued expression.
      At C:\PS\excel.ps1:18 char:45
      + $objWorksheet = $objWorkbook.Worksheets.Item <<<< (1)
          + CategoryInfo          : InvalidOperation: (Item:String) [], RuntimeException
          + FullyQualifiedErrorId : InvokeMethodOnNull
      Worksheet:
      You cannot call a method on a null-valued expression.
      At C:\PS\excel.ps1:21 char:37
      + $Forename = $objWorksheet.Cells.Item <<<< (2,1).Text
          + CategoryInfo          : InvalidOperation: (Item:String) [], RuntimeException
          + FullyQualifiedErrorId : InvokeMethodOnNull
      You cannot call a method on a null-valued expression.
      At C:\PS\excel.ps1:22 char:36
      + $Surname = $objWorksheet.Cells.Item <<<< (2,2).Text
          + CategoryInfo          : InvalidOperation: (Item:String) [], RuntimeException
          + FullyQualifiedErrorId : InvokeMethodOnNull
      Forename:
      Surname:

    This is the first question I have ever asked, try to be nice! :)) Many Thanks, Max
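
    Two observations, offered as a sketch rather than a definitive fix. First, the error text suggests the Invoke helper is being handed the Open method object itself ($objExcel.Workbooks.Open) where it expects the Workbooks object, which is exactly why InvokeMember goes looking for a method named after the file path. Second, the commonly cited workaround for the en-US COM problem is to switch the thread's culture and then call Open directly (some hosts reset the culture between interactive commands, so keep the culture change and the Excel calls in the same script):

      # run the Excel COM calls under an en-US culture
      $enUS = New-Object System.Globalization.CultureInfo "en-US"
      [System.Threading.Thread]::CurrentThread.CurrentCulture = $enUS

      $objExcel = New-Object -ComObject Excel.Application
      $objExcel.Visible = $false
      $objWorkbook = $objExcel.Workbooks.Open("C:\PS\User Data.xls")
      Write-Host "Number of worksheets:" $objWorkbook.Sheets.Count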

    Read the article

  • How to cache authentication in Linux using PAM/Kerberos authentication (for CVS)?

    - by Calonthar
    We have several Linux servers that authenticate Linux user passwords against our Windows Active Directory server using PAM and Kerberos 5. The Linux distro we use is CentOS 6. On one system, we have several version control systems like CVS and Subversion, both of which authenticate users through PAM, such that users can use their normal Unix resp. Windows AD accounts. Since we started using Kerberos for password authentication, we have noticed that CVS on a client machine is often much slower in establishing a connection. CVS authenticates the user on every request (e.g. cvs diff, log, update...). Is it possible to cache the credentials that Kerberos uses, such that it does not need to ask the Windows AD server every time a user executes a cvs action? Our PAM config /etc/pam.d/system-auth looks like the following:

      auth        required      pam_env.so
      auth        sufficient    pam_unix.so nullok try_first_pass
      auth        requisite     pam_succeed_if.so uid >= 500 quiet
      auth        sufficient    pam_krb5.so use_first_pass
      auth        required      pam_deny.so

      account     required      pam_unix.so broken_shadow
      account     sufficient    pam_succeed_if.so uid < 500 quiet
      account     [default=bad success=ok user_unknown=ignore] pam_krb5.so
      account     required      pam_permit.so

      password    requisite     pam_cracklib.so try_first_pass retry=3
      password    sufficient    pam_unix.so md5 shadow nullok try_first_pass use_authtok
      password    sufficient    pam_krb5.so use_authtok
      password    required      pam_deny.so

      session     optional      pam_keyinit.so revoke
      session     required      pam_limits.so
      session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
      session     required      pam_unix.so
      session     optional      pam_krb5.so
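
    Before caching anything, it is worth measuring where the time goes, since a slow KDC exchange and a slow DNS/reverse lookup feel identical from the CVS side; a sketch (realm, user and host names are placeholders):

      # how long does one full Kerberos exchange against the AD KDC take?
      time kinit someuser@EXAMPLE.COM
      klist                                   # confirms the ticket cache that was written

      # slow answers here usually mean DNS rather than Kerberos itself
      time nslookup ad-dc.example.com
      grep -E 'dns_lookup_kdc|rdns' /etc/krb5.conf

    If kinit itself is fast, PAM-level caching (for example sssd with cache_credentials, which CentOS 6 ships) is the direction usually discussed, though it is aimed mainly at offline logins rather than at speeding up a reachable KDC.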

    Read the article

  • App for family tech support tracking?

    - by slothbear
    I do tech support for several groups within my family. They usually have a document or notebook of questions for me. They often record my advice, but then ask me again later. Some communications are by email (nice record for me, although they never think to search). Some sessions are in person, usually with a followup email from me for the record. Which they forget about. I'm not trying to force them to be more 'professional', but I would like to streamline my support a bit, and give them a place to look for past answers. Some of them would like a standard place like that, rather than reasking me the same questions. The solution has to be free. And web-based, although email-in for questions would be great. I'll be doing most (all?) updating of the system. Mobile/iPhone access would be nice, but not required. Ideally, a system with topics and responses would be good, but I'd need a way to promote one response as 'the answer'.

    Read the article

  • OpenVPN (Tunnelblick) Suddenly Dropping Constantly

    - by Jeremy Privett
    I've been using Tunnelblick on my Mac for OpenVPN for about a year now. All of a sudden, this morning, it decided that it was going to take a nasty turn for the worse with no explanation. Here are the symptoms I'm seeing: I can connect to the VPN fine, initially. After about 2 - 5 minutes of no interruption, the connection suddenly dies. I can still see the VPN route using netstat -rn, and Tunnelblick believes it's still connected. No VPN traffic can go through and I can't even ping the VPN gateway. Eventually, Tunnelblick will catch on that the connection has died (usually about 5 - 10 minutes later) and shoot itself to restart and then the cycle starts over again. I've tried everything I can think of to figure this one out. I've completely flushed my system by rebooting and removing Tunnelblick and all traces of OpenVPN from my system and re-installing from scratch. No dice, same problem. I'm at my wits end, because I desperately need to get this fixed as the VPN is required for me to be able to do my job. Any ideas you have would be greatly appreciated.
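
    One generic thing to rule out, sketched with the standard OpenVPN directive names (the values are examples only): if no keepalive/ping settings are in the client config or pushed by the server, a NAT device along the path can silently drop the idle UDP mapping, which matches the "dies after a few minutes, Tunnelblick notices late" pattern:

      # client .ovpn - example values
      ping 10            # send a keepalive every 10 s
      ping-restart 60    # restart the tunnel if nothing is heard for 60 s

      # or, on the server side, which pushes the equivalent to clients
      keepalive 10 60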

    Read the article

  • Getting SMB file shares working over a PPTP VPN

    - by Ben Scott
    I'm having issues getting SMB file shares working over a PPTP VPN. The server setup consists of a security device (DrayTek V3300) which passes the PPTP authentication to a SBS2003 server running RRAS. The server is the DC and provides DNS and WINS, the single NIC's name server is set to 127.0.0.1, and DHCP on the DrayTek sets the server IP as the DNS. If I create a new VPN connection in Win7, leaving everything as default apart from the server, username, password and domain, I can:
      - ping everything by IP address
      - resolve IPs with nslookup using their fully-qualified name, as in nslookup fileserver.mydomain.local
      - ping machines by fully-qualified name, as in ping fileserver.mydomain.local
    However if I try to access a file share:
      - within Explorer, I get "Windows cannot access ..." with "Error code: 0x80004005 Unspecified Error"
      - using net use z: \\fileserver.mydomain.local\share, I get "System error 53 has occurred. The network path was not found."
    If I add the machine name to my HOSTS file I can use the file share, which is my last-ditch workaround, but I have a number of VPN users and would rather a solution that doesn't involve me trying to hand-edit system files on computers half a country away. If I set the WINS server explicitly in the connection's IPv4 settings I don't have to use the FQN to ping the machine, but that doesn't change anything else.
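
    A couple of generic checks that narrow this down to either name resolution or SMB transport over the tunnel (only names from the question are used; the IP is the server's LAN address as written there):

      rem does SMB work at all across the VPN once names are taken out of the picture?
      net use Z: \\10.x.x.x\share

      rem what does Windows name resolution return? (nslookup only asks DNS,
      rem while net use also goes through the hosts file and NetBIOS/WINS)
      ping fileserver
      nbtstat -a fileserver
      nbtstat -c

    If the IP-based net use works but the FQDN one fails, the usual suspects are the DNS suffix and WINS settings handed out over the PPTP connection rather than SMB itself.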

    Read the article

  • Cannot access server shares over VPN

    - by DuncanDavies
    I've set up a single hosted server to use as a development environment for a web-based application. The web app is served up fine on port 80, however I'm struggling to get my VPN to behave how I'd expect, so the developers don't have the access they require. The VPN connects fine and I can access the back-end database (SQL Server), which resides on the server, with the client tools from the laptops. However they cannot access any shared folders. The server's local IP address is 10.x.x.x, and I've assigned a static IP address pool to RRAS (of 192.168.100.1 - 20). The clients pick up a valid IP address (i.e. 192.168.100.9) when they connect. There is no name resolution set up, DNS or WINS. When connected via VPN the clients can ping the server (192.168.100.1) by IP address, but cannot map a drive to a shared folder (net use * \\192.168.100.1\xxxxx) - I get 'System error 53 has occurred. The network path was not found.' I don't understand why I can ping by the IP, but not map by it. Some details:
      - Server OS is Windows 2008 (Datacenter)
      - VPN is SSTP using RRAS
      - Clients are all Windows 7
      - I've tried temporarily disabling the firewalls
    So, why can we not access the file system when everything else (ping, RDP, SQL Server client tools) works? Thanks for your help. Duncan

    Read the article

  • Which version control should I use for my configuration files?

    - by rakete
    I want to store some of my configuration files (~/.emacs.d/, .Xdefaults, etc. - Linux $HOME stuff) in version control so I can easily sync them with my notebook/workplace, see my past changes and revert to them should the need arise. So far it seems to me that there are quite a few people using git for this, and I think that I too want to use a distributed VCS for this (if only to get more used to them), but I can't say that I am very experienced with all things DVCS. I did use darcs and git briefly, and so far I can say that I really like the way git handles branches, and I think the possibility to have different branches within the same directory is especially useful for my use case. Darcs, on the other hand, has cherry-picking of patches, which too is quite the convenient feature when managing configuration files (at least I assume it is). So, what would you recommend to use? And what would be your reasoning for your recommendation? What other VCSs with nice features that I haven't mentioned exist and would make a good VCS to store configuration files, and why?
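
    Independently of which DVCS wins, a layout sketch that several people use for exactly this purpose (the repository path, alias name and remote URL below are arbitrary): a bare git repository whose work tree is $HOME, so no symlink farm is needed:

      git init --bare "$HOME/.dotfiles"
      alias dots='git --git-dir="$HOME/.dotfiles" --work-tree="$HOME"'
      dots config --local status.showUntrackedFiles no   # keep 'dots status' readable

      dots add ~/.Xdefaults ~/.emacs.d/init.el
      dots commit -m "Track X resources and emacs config"
      dots remote add origin ssh://somehost/path/dotfiles.git   # hypothetical remote
      dots push origin master

    Branches then map naturally onto machines (e.g. a notebook branch carrying only the deltas from the workplace setup).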

    Read the article

  • IIS7: How to block access with a web.config file?

    - by neves
    I know that IIS7 allows me to have a per-directory configuration with the web.config XML file. I have a directory with some configuration files that I don't want to be web accessible. A local web.config file forbidding read access to it would be a nice solution. What should be the contents of a web.config file to forbid web access to the files?
    Edit: I'm trying to put a web.config file with these contents in the directory:

      <?xml version="1.0" encoding="utf-8" ?>
      <configuration>
        <system.web>
          <authorization>
            <deny users="*" /> <!-- Denies all users -->
          </authorization>
        </system.web>
      </configuration>

    But I can still directly access a file inside the directory. What's wrong with it? How do I debug what's happening?
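
    One likely explanation, plus a sketch of the IIS7-native alternative: <system.web><authorization> is only evaluated for requests that reach ASP.NET, and static files are normally served by IIS directly, so that rule never fires for them. IIS7's own URL Authorization module reads a different section (this assumes the URL Authorization feature is installed and the section is not locked at server level):

      <?xml version="1.0" encoding="utf-8" ?>
      <configuration>
        <system.webServer>
          <security>
            <authorization>
              <!-- drop the inherited "allow everyone" rule, then deny everyone -->
              <remove users="*" roles="" verbs="" />
              <add accessType="Deny" users="*" />
            </authorization>
          </security>
        </system.webServer>
      </configuration>

    Request Filtering's hiddenSegments is another frequently used way to make a folder completely unreachable over HTTP.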

    Read the article

  • Installing Windows 7 over PXE, preferably with domain autojoin

    - by Ivan Vucica
    At an educational non-profit, I've inherited a previously set-up Windows domain that, after the first reinstall of the machines, we ended up not using by simply not joining machines back into the domain. Over last summer, before the annual reinstall for shipping machines to the summer school, I toyed with the idea of installing Windows 7 over network, instead of just imaging the machines. It took a bit longer than I expected to figure out the basics; honestly, I expected that Windows would be more friendly for PXE installation out of the box. What I'm interested in is best practices for installing Windows 7 over PXE with domain autojoin. I'd love it if the whole setup could optionally be hosted on a UNIX based system as well. I've had some success by preparing an ISO using Windows Deployment Kit, and loading the ISO into memory. This was needed since I wanted a menu, and I think I couldn't get PXELINUX to chainload into Windows' bootloader. Unfortunately, I couldn't figure out much about customization of the Windows setup in that timeframe nor could I get Samba to work properly; studying the stuff ended up being too lengthy, especially the portion where I edited a disk image on Windows and copied it outside. WDK didn't make things easier by mounting the disk image into RAM, and writing it in its entirety when done with it, making me a very sad boy. I've recently found a different approach, too, that appears to be closer to Microsoft's original idea for netboot deployment and does not involve ISOs. So my question boils down to the following. What exact approach do you use for netbooting Windows 7 setup? How can Windows 7 setup be best customized to be completely unattended, including installation on specific system partition and not destroying the data partition, creation of passworded admin and default user, choice of MAC-address-based hostname, and joining a domain? As much details as possible for everyone's future reference would be appreciated. WDS isn't a bad choice, but if a Linux-based install can be used, that'd be better.

    Read the article

  • Some free cloud solutions to enhance your business

    - by Saif Bechan
    I am co-owner of a small internet business. I am in charge of IT, and I try to get things done at as low a cost as possible. When investing in servers, resources and overall business costs, your project can soon turn into a financial disaster. Cloud solutions can help you in solving some financial problems, they can help you with scalability problems, and with overall performance problems of your server or web application. Recently I moved the whole internal/external communication (email, calendar, documents) of my business to the cloud. I did this by using the free version of Google Apps. This works great and is a big advantage on multiple levels. I do not have to fight spam anymore on my system, and there are fewer resources used on my system. Also, switching servers will go a lot easier. Questions: Can you name some cloud solutions that you have used, or some you just recommend? They can range from financial benefits to organizational benefits to performance benefits. It doesn't matter, as long as it helps you spread the load of your business.

    Read the article

  • qsub: How can I find out what DRM middleware exactly is installed on a cluster?

    - by gojira
    I have a user account on a very big cluster. I have previous experience with Grid Engine and want to use the cluster for array jobs. The documentation tells me to use "qsub" for load balancing / submission of many jobs. Therefore I assumed this means the cluster has Grid Engine. However, all my Grid Engine scripts failed to run. I checked the documentation and it is a bit weird. Now I slowly suspect that this cluster does not actually have Grid Engine; maybe it's running something called Torque (?!). The whole terminology in the man pages is a bit weird for me as a Grid Engine user; for example, they talk about "bulk jobs" instead of "array jobs". There is no reference to variables I rely on, like SGE_TASK_ID etc. Instead they refer to variables starting with PBS_. Still, there are qsub and qstat commands. Also qsub behaves differently; apparently it is not possible to specify the command line parameters with bash-script comments etc. There is documentation for the cluster system, but it does not say what the DRM middleware actually is - it refers to the entire DRM system simply as "qsub". I tried:
      qsub --version
      qsub: 1.2 2010/8/17
    I am not sure what I am actually running when I invoke qsub on that cluster! My question is, how can I find out if I am running Grid Engine or Torque (or whatever it is), and which version?
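
    A few generic ways to identify the batch system behind a qsub command - a sketch, assuming an RPM-based distribution for the package query (the PBS_ variables in the man pages already hint strongly at Torque/PBS):

      # is qsub a real binary or a site-local wrapper script?
      which qsub
      file "$(which qsub)"
      head -5 "$(which qsub)"        # a wrapper usually shows a shebang and comments

      # which package owns it (on RPM-based systems)?
      rpm -qf "$(which qsub)"

      # companion commands differ between the two systems:
      pbsnodes -a     # exists on Torque/PBS
      qconf -sconf    # exists on (Sun/Oracle) Grid Engine

    Inside a running job, env | grep -E '^(PBS|SGE)_' settles the question as well.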

    Read the article

  • How can you connect to a SQL Server not on your domain?

    - by scotty2012
    I have a test machine that's not allowed on our domain because we are testing corporately unsupported applications (SQL 2008 and Server 2008). I want to use Management Studio to connect to the SQL 2008 server but can't get it working. I have authentication set to mixed mode and I've checked 'allow remote connections to this server', but when I try to access it, I get the error:
      A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 53)
    Since it says the provider is Named Pipes, I enabled Named Pipes on the server, but still no dice. I've tried connecting to the system name, the IP, the system name\instance and IP\instance, all to no avail. Is what I'm trying to do not possible?
    Edit: Well, through some basic troubleshooting, I've found that I can't ping the server from my client computer, but I can ping the client computer from the server. They are both plugged into the same switch, and are sitting next to each other. The Windows firewall on the server is turned on; are there some specific settings I need to enable? DAH! So it was the firewall blocking me. How can I enable the firewall and still connect?
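
    Since the question ends at "keep the firewall on but still connect", a sketch of the inbound rules usually involved on Server 2008 (a named instance listens on a dynamic port unless it has been pinned, so the 1433 rule only covers the default instance):

      netsh advfirewall firewall add rule name="SQL Server default instance (TCP 1433)" dir=in action=allow protocol=TCP localport=1433
      netsh advfirewall firewall add rule name="SQL Browser (UDP 1434)" dir=in action=allow protocol=UDP localport=1434
      netsh advfirewall firewall add rule name="SMB for named pipes (TCP 445)" dir=in action=allow protocol=TCP localport=445
      netsh advfirewall firewall add rule name="ICMPv4 echo (ping)" dir=in action=allow protocol=icmpv4:8,any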

    Read the article

  • my X server doesn't load a module called "glx", but my video drivers seem to be installed

    - by rumtscho
    I just got a new, very wide monitor (2560x1440) and there is no sense maximizing my applications. So I installed Compiz Config Settings Manager and enabled Grid. Nothing happened; the shortcuts don't move application windows. Went to System - Preferences - Appearance, the Visual Effects tab. It was at "disabled". When I try to set them to "normal" or "extra", a message box appears telling me that it's searching for video drivers, then disappears, and I get an error message "Desktop effects could not be enabled". I opened Xorg.0.log, and had errors there:
      (EE) Failed to load /usr/lib/xorg/modules/extensions//libglx.so
      (II) UnloadModule: "glx"
      (EE) Failed to load module "glx" (loader failed, 7)
      (II) LoadModule: "extmod"
    Going to System - Administration - Hardware Drivers, it said that there are no available and/or installed hardware drivers. But apt-get said that it cannot install nvidia-glx-185, as it is already installed. Googling my error message suggested that I install and run something called EnvyNG. This let me install the nvidia drivers again, and now I can see in the Hardware Drivers window that they are installed and active. But the error message in Xorg.0.log remains, and I still cannot enable the Compiz effects or use Grid. Now, I don't have enough Linux experience to understand if this is a single cause-effect chain of problems, or three independent ones, but I'd appreciate help for any of them. I am running Ubuntu 9.10, the video card is a GeForce 7600GS.
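
    That pair of log lines usually comes down to which libglx.so the X loader is finding; a generic sketch for checking whether the NVIDIA driver's GLX library is really the one in place (paths are the ones from the log above):

      # what is sitting at the path Xorg complains about?
      ls -l /usr/lib/xorg/modules/extensions/libglx.so*

      # which package installed (or diverted) that file?
      dpkg -S /usr/lib/xorg/modules/extensions/libglx.so

      # is X actually configured to use the nvidia driver?
      grep -iA2 '"Device"' /etc/X11/xorg.conf

    A mismatch there (for example Mesa's libglx still in place, or xorg.conf pointing at nv/nouveau) is the usual reason the GLX module refuses to load after a driver reinstall.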

    Read the article

  • gcc built by crosstool-ng gives undefined reference

    - by netvope
    I've successfully built a toolchain using crosstool-ng with the default configuration named x86_64-unknown-linux-gnu. The documentation says:
      Using the toolchain is as simple as adding the toolchain's bin directory to your PATH, such as:
        export PATH="${PATH}:/your/toolchain/path/bin"
      and then using the target tuple to tell the build systems to use your toolchain:
        ./configure --target=your-target-tuple
      or
        make CC=your-target-tuple-gcc
      or
        make CROSS_COMPILE=your-target-tuple-
      and so on...
    I followed the instructions and attempted to build GNU tar (tar-1.25.tar.bz2) with the toolchain. The commands ./configure --target=x86_64-unknown-linux-gnu and make CROSS_COMPILE=x86_64-unknown-linux-gnu- do not work (the build will succeed, but it uses the host system's gcc). The command make CC=x86_64-unknown-linux-gnu-gcc works, but in the very last step, when it tries to link, it returns errors like this:
      compare.o: In function `openat':
      /dev/shm/x-tools/x86_64-unknown-linux-gnu/x86_64-unknown-linux-gnu/sys-root/usr/include/bits/fcntl2.h:134: undefined reference to `__openat_2'
    What could be the problem? Was the toolchain not properly set up? Perhaps x86_64-unknown-linux-gnu-gcc is using the header files from the host system but could not find the libraries in the target's sys-root?
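
    For autoconf-based packages such as GNU tar, --target only matters for packages that themselves emit code (compilers, binutils); the flag that switches the whole build over to the cross tools is --host. A sketch of the usual invocation, using the paths from the question:

      export PATH="${PATH}:/your/toolchain/path/bin"
      ./configure --host=x86_64-unknown-linux-gnu
      make

    With --host set, configure picks up x86_64-unknown-linux-gnu-gcc, -ld, -ar and friends as one consistent set, which avoids the half-host/half-cross mix that make CC=... alone tends to produce; whether that is the cause of the __openat_2 error here is not certain, but it is the first thing to rule out.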

    Read the article
