Search Results

Search found 15591 results on 624 pages for 'problems'.

  • Is a DHCP lease expiring years from now okay?

    - by sharptooth
    I'm reviewing Azure web role logs and there's this output from ipconfig /all:

        IPv4 Address. . . . . . . . . . . : 10.61.145.37(Preferred)
        Subnet Mask . . . . . . . . . . . : 255.255.254.0
        Lease Obtained. . . . . . . . . . : Monday, September 24, 2012 12:26:00 PM
        Lease Expires . . . . . . . . . . : Thursday, October 31, 2148 6:55:12 PM

    As you can see, the lease expires in the year 2148, but my VM will likely not run for more than a month: when I deploy a new version of my code, I first deploy it to new VMs, then switch traffic over, then release the old VMs. This usage pattern is normal on Azure, where VMs typically live anywhere from several dozen minutes to several weeks. I suspect a lease that long will sooner or later cause problems on the internal Azure network. Is such a long DHCP lease okay, or is it likely a misconfiguration?
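    For what it's worth, if a stale lease is ever suspected, a Windows VM can be forced to drop and re-acquire its DHCP lease from an elevated command prompt; this is standard Windows tooling, nothing Azure-specific:

        ipconfig /release
        ipconfig /renew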

    Read the article

  • Why can't I connect to remote Microsoft SQL Server through SSH tunnel?

    - by Alexander
    I have at home a D-Link DIR-615 C1 router with DD-WRT. I set up the SSH server on the router, and log on through an SSH2-RSA passphrase-protected key. That router is the gateway between the local network and the internet. One of the computers on that network has Microsoft SQL Server 2008 installed, with the TCP/IP protocol enabled on port 1433. I've set up port forwarding on the router, so remote connections are possible and are, in fact, working (some developers log on remotely without problems).

    I am part of another network that has internet access through a proxy server with only ports 80 and 443 open. I can't connect to that MSSQL server directly because port 1433 is closed on this network. So I connected (using PuTTY) through port 443 to my router's SSH server and set up two tunnels. One is for RDP (3389), and it's working. The other is for port 1433, to connect to the SQL Server. Through that tunnel I can't connect to the MS SQL Server at all, either with telnet or with GUI clients. Am I missing something?

    Additional details: on connect, I get this error from SQL Server Management Studio:

        TITLE: Connect to Server
        Cannot connect to localhost:14330.
        ADDITIONAL INFORMATION:
        A network-related or instance-specific error occurred while establishing a connection
        to SQL Server. The server was not found or was not accessible. Verify that the instance
        name is correct and that SQL Server is configured to allow remote connections.
        (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)
        (Microsoft SQL Server, Error: 3)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=3&LinkId=20476
        BUTTONS: OK

    The tunnel is configured like this:

        L14330 192.168.0.103:1433

    192.168.0.103 is the permanent address of the SQL Server machine on the LAN. I also successfully forwarded TCP traffic on port 3389 to that IP, so tunneling to that IP address works. When connecting without the tunnel, through Microsoft SQL Server Management Studio, using the same method, the connection is established. Too bad my proxy doesn't allow port 1433 traffic, or I wouldn't have this headache.
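    One detail worth checking (an assumption on my part, since the question doesn't show exactly what was typed into the connection dialog): SQL Server Management Studio separates host and port with a comma, not a colon, and the "Named Pipes Provider" in the error suggests it never attempted TCP at all. With the tunnel above, the server name would be:

        localhost,14330

    or, to force the TCP provider explicitly:

        tcp:localhost,14330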

    Read the article

  • Kernel-mode Authentication: 401 errors when accessing site from remote machines

    - by CJM
    I have several Classic ASP sites that use Integrated Windows Authentication and Kerberos delegation. They work OK on the live servers (recently moved to Server 2008/IIS7 machines), but do not work fully on my development PC or my development server. IIS on both machines was configured through an IIS web deployment tool package exported from an old machine; the deployment didn't work perfectly, and I had to tinker a bit to get the sites working. When accessing the apps locally on either machine, they work fine; when accessing from another machine, the user is prompted with a username/password dialog, and regardless of what you enter, it ultimately results in a 401 (Unauthorized) error.

    I've tried comparing the configuration of these machines against similar live servers (which all work fine), and they seem broadly comparable, though none of the live servers are yet on IIS 7.5 (Windows 7 / Server 2008 R2). These applications run in a common application pool which uses a special domain user as its identity; this user has similar permissions on the live and development machines. On IIS6 platforms, enabling Kerberos delegation required setting up some SPNs for this user, and those are still in place (even though I don't believe they are needed any longer for IIS7+ due to kernel-mode authentication). Furthermore, this account is enabled for Kerberos delegation in Active Directory, as is each machine I am dealing with.

    I'm considering the possibility that the deployment made changes, or failed to make changes, to the IIS configuration and thus caused this problem. Perhaps a complete rebuild (minus another web deployment attempt) would solve the problem, but I'd rather fix, and thus understand, the current problem. Any ideas so far?

    I've just had another attempt at fixing this issue, and I've made some progress, but I don't have a complete fix yet. I've discovered that if I access the sites via IP address (rather than via NetBIOS name), I get the same dialog, except that it accepts my credentials and the application works; not quite a fix, but a useful step. More interestingly, I discovered that if I disable kernel-mode authentication (in IIS Manager, under Website > Authentication > Advanced Settings), the applications work perfectly. My foggy understanding is that this is effectively working in the pre-IIS7 way. A reasonable short-term solution, but consider the following explicit advice from IIS on this setting:

        By default, IIS enables kernel-mode authentication, which may improve authentication
        performance and prevent authentication problems with application pools configured to
        use a custom identity. As a best practice, do not disable this setting if Kerberos
        authentication is used in your environment and the application pool is configured to
        use a custom identity.

    Clearly, this is not the way my applications should be working. So what is the issue?
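    A minimal sketch of the setting that usually reconciles kernel-mode authentication with a custom application pool identity (an assumption that it applies here, since the question doesn't say whether it has been tried): tell IIS to decrypt Kerberos tickets with the pool's credentials instead of the machine account, which is what the SPNs on the domain user imply.

        %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" ^
          -section:system.webServer/security/authentication/windowsAuthentication ^
          /useAppPoolCredentials:true /commit:apphost

    "Default Web Site" is a placeholder for the actual site name. With this in place, kernel-mode authentication can stay enabled, consistent with the quoted best-practice advice.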

    Read the article

  • DLINK WBR-1310B Wireless Router seems to hang...

    - by Ira Baxter
    I have a brand new D-Link WBR-1310B wireless router (box never before opened, although I bought it at the neighborhood computer junk store). I am using it at home (and in fact am using it at this instant from a wireless laptop). When it's working, I can ping it at 192.168.0.1, and I can log into it at //192.168.0.1 from the PC attached to it by LAN and from the wireless PC.

    In the day since I've installed it, it seems to have locked up 3 times. Each time the symptoms are that my web browser (or other IP service, e.g. POP3) stops with a "No internet connection" error. Attempts to contact the router via 192.168.0.1 get no reaction, from either the wireless laptop or the hardwired PC sitting next to it. It doesn't respond to pings to that address either. Power cycling the router fixes it.

    I've seen discussion in other questions about aging cheap electronics. It's too new to be aged. Anybody else seen this behavior with a DLINK-1310? Or do I just need to exchange it for another and try again? (I hate rolling dice; I bought the D-Link because a previous Linksys died of apparent heating problems. How many do I have to cycle through before I get something that works and is long-term stable?) Remarkably, nobody talks about how much software is in a router. Is the stuff just buggy?

    EDIT: Happened again, while I was working on the wireless Vista laptop. (Seems like once an hour?) I was a little more careful this time. The wireless laptop can ping it, but can't get the login screen. I visited the LAN-connected PC (takes me a minute to walk from the laptop to the PC at the other end of the house) and attempted to visit a random web page. Surprise, that worked! And now, after a minute walking back to the laptop, I can reconnect the wireless laptop and get to the login page from it. Strangely, the time/date had been reset back to 2002. (I'll swear I set it and saved the system configuration after updating the firmware; it made me redo every other bit of reconfiguration again.) Is there something funny about wireless leases expiring? The router says the leases it is handing out are good for 180 minutes, and the delay-to-inaccessible was only about an hour. The DSL connection seems to have a 10-minute lease.

    Read the article

  • Windows 7, weird/annoying Explorer problem

    - by Shiki
    First of all, I'm asking the moderators: if you could summarize my problem in the title and modify it accordingly, thanks in advance. Sorry, my English is not that good.

    The problem: it has happened on two of my machines already. Once it happens, there are many problems with Explorer. I'll try to describe it as precisely as I can. Basically, when you create a new folder, you have to hit the Enter key TWICE, because the first time it throws an error that "there is no such file...". Then if you try to delete something, this happens as well. Renaming is the same. Here is an example: I tried adding a "2" to the end of a folder name, hit Enter, and got the error dialog (screenshot missing from this copy). I can press Cancel and nothing happens; if I press Retry, the popup disappears and the folder name finally changes.

    I can't come up with any explanation for this. I'm using BitDefender (registered, full protection) and Windows 7 x86 Ultimate retail (boxed, English version). The other PC had the same anti-virus protection but runs Windows 7 x64. Both systems are activated and fully updated. I don't remember installing any new stuff on the PCs, and all the software on them is legal; no cracked or otherwise dubious software present. It just happened, so to say. I already reinstalled the other PC because I thought *I* had messed something up and it was just something FUBAR. However, I don't want to reinstall my laptop as well. Any ideas are welcome.

    (Today I had a strange bug: Windows Explorer had 1.0+ GB memory usage at 100% CPU. I killed it and launched a new explorer.exe, and that was all; nothing changed, but it may be worth mentioning. The other PC never had this problem.) Isn't there a registry fix for this or something?
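    Not a registry fix, but before hunting for one it may be worth ruling out system file corruption, which can produce exactly this kind of erratic Explorer behavior (an assumption; the question reports no integrity checks). From an elevated command prompt:

        sfc /scannow

    If it reports files it could not repair, the CBS log it names will show which ones.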

    Read the article

  • How do large blobs affect SQL delete performance, and how can I mitigate the impact?

    - by Max Pollack
    I'm currently experiencing a strange issue that doesn't mesh with my understanding of SQL Server. We use SQL Server as the file storage for our internal storage service, and our database has about half a million rows in it. Most of the files (86%) are 1 MB or under, but even on fresh copies of our database, where we simply populate the table with data for the purposes of a test, rows with large amounts of data stored in a BLOB frequently cause timeouts when our SQL Server is under load.

    My understanding of how SQL Server deletes rows is that it's a garbage collection process: the row is marked as a ghost, and the ghost cleanup process removes it later, after the changes are copied to the transaction log. This suggests to me that, regardless of the size of the data in the BLOB, row deletion should be close to instantaneous. Yet when deleting these rows we are definitely experiencing large numbers of timeouts and astoundingly low performance. In our test data set, it's the files over 30 MB that cause the issue. This is an edge case; we don't encounter these often, and even though we're looking into SQL FILESTREAM as a solution to some of our problems, we're trying to narrow down where these issues originate.

    We ARE performing our deletes inside of a transaction. We're also performing updates to metadata such as file size stats, but these live in a separate table away from the file data itself. Hierarchy data is stored in the table that contains the file information. Really, in the end it's not so much what we're doing around the deletes that matters; we just can't find any references to low delete performance on rows that contain a large amount of BLOB data. We are trying to determine whether this is even an avenue worth exploring, or whether it has to be one of our processes around the delete that's causing the issue.

    Are there any situations in which this could occur? Is it common for a database server to reach the point of complete timeouts when many of these deletes occur simultaneously? Is there a way to combat this issue if it exists? (Cross-posted from Stack Overflow.)
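    A common mitigation worth sketching here, with hypothetical table and column names since the schema isn't shown: delete in small batches in separate transactions, so that no single statement holds locks (or escalates to a table lock) for the full duration of the LOB deallocation.

        -- hypothetical schema: dbo.StoredFiles(FileId, FileData, ExpiryDate)
        WHILE 1 = 1
        BEGIN
            DELETE TOP (20) FROM dbo.StoredFiles
            WHERE ExpiryDate < GETUTCDATE();   -- hypothetical predicate
            IF @@ROWCOUNT = 0 BREAK;
        END

    Run outside an explicit transaction, each DELETE TOP (...) commits on its own, which keeps individual lock lifetimes short even when ghost cleanup lags behind.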

    Read the article

  • Operating systems on SD cards

    - by HisDudeness
    I've been getting some wild ideas in the last few days, like putting operating systems on SD cards rather than on my hard drive. I'll go into the details now and explain what led me to consider this probably abominable decision.

    I am on a laptop (which means I have a native SD card reader) currently running a cross-distro setup, with a bunch of Linux systems (placed in dedicated ext4 logical partitions inside one huge extended partition) managed by a single GRUB. As of today, my laptop hasn't seen a Windows system even through binoculars. I was thinking about placing the OS part of my setup on a Secure Digital card, saving my whole 500 GB hard drive for documents, music, videos and so on. I'd be able to just remove the SD card and boot my system on another computer, as well as boot other systems on mine by plugging in another SD card, without having to keep any of them permanently in my PC. Also, in the remote case that in the near future I wanted to boot Windows 8, I've read that it causes major boot incompatibility issues with other systems by requiring a digital signature in order for them to start. With Windows on a removable drive, I could use that card when I need it and swap in the Linux one otherwise, putting no obstacles in the way of either boot.

    Now, my questions. First: I know that, unlike traditional rotating disk drives, integrated-circuit drives have a limited lifespan in terms of cell rewrites. Is that an obstacle to this kind of usage? I mean, some ultrabooks use SSDs now; is it the same issue, or are there differences between solid state drives and Secure Digital cards in that sense? Does having them store system files that sit at fixed positions (defeating wear leveling), constantly re-read and updated, wear them out quickly?

    Second question: can all motherboards and BIOSes boot from SD cards just as they do from USB pen drives (provided the card reader is USB-connected)? Or can bootloaders like GRUB not work when installed on SD cards? If they can't, would it be a solution to install GRUB to the MBR and point its boot entry at the SD card? Would that work? Are there any other problems with installing OSes on a Secure Digital? (A sketch of the GRUB half follows below.)
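    On the bootloader half of the question, a minimal sketch of installing GRUB 2 onto an SD card from a running Linux system (device names are assumptions: the card often appears as /dev/mmcblk0 on native readers or /dev/sdb on USB readers; double-check with lsblk before writing anything):

        # assumes the card's first partition is mounted at /mnt/sdcard
        sudo grub-install --boot-directory=/mnt/sdcard/boot /dev/mmcblk0
        sudo grub-mkconfig -o /mnt/sdcard/boot/grub/grub.cfg

    Whether the BIOS will then boot it depends on whether the reader presents itself as a USB mass-storage device; many laptop readers sit on an internal USB bus and work, but some PCI-attached readers are invisible to the BIOS.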

    Read the article

  • Subversion/Hudson/Sonar/Artifactory - too much for my little server to handle! Help!

    - by Ricket
    I have a little dedicated server. It was cheap, and has a simple AMD 1800+ (1.5 GHz) and 256 MB of DDR RAM... need I continue? I think I'm overloading it already. It's running CentOS 5.4, and I have installed the following:

        Webmin
        Apache
        MySQL
        Subversion as an Apache module
        Hudson (standalone)
        Sonar (standalone, runs with a standalone Jetty install)
        Artifactory (standalone)

    That's pretty much it. But I'm having problems: pages are loading quite slowly. Network speed of the server is excellent, but I think I'm simply running out of CPU and/or memory. A side effect of the slow page loads is that Hudson sometimes times out, unable to start Maven or contact Sonar within a certain amount of time.

    I think the next step to speed things up might be to move to an application server and run the WAR versions of Hudson, Sonar and Artifactory together on that server. I don't know that it will help, but it just seems to make sense, especially with Sonar running on its own Jetty install and the other two probably running their own mini application servers as well. Am I correct in thinking this? Is this the right course of action? Any other tips on how to make the server run faster? I can post more data if you'd like; just let me know what else would help you answer my question.

    Also, to cure any suspicions: I don't have any sort of virus or spyware. I protect my SSH access with DenyHosts (which has blocked 300+ brute forcers in the past few months), and I have confirmed that the top four processes in terms of memory and CPU usage are Sonar, Artifactory, Hudson, and MySQL.

    Edit: I just thought of another thing I'd like you to comment on as well: Apache currently has 8 spawned slave processes, taking 42 MB of RAM apiece, and this is not my web server. Is everything else able to function if I shut down Apache? Can you point me towards a tutorial on migrating Subversion from Apache into something that might work alongside the other three applications, maybe even making Subversion a WAR file or something?
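    A quick way to confirm where the 256 MB is going before rearchitecting anything (standard CentOS tooling, nothing specific to this setup):

        free -m
        ps aux --sort=-%mem | head -n 10

    If the swap line of free -m shows heavy usage, consolidating the three standalone Java services into one container, or simply giving each a smaller -Xmx, is likely to help more than any other tuning.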

    Read the article

  • Cygwin's RSYNC for large data transfer

    - by Tim Brigham
    I'm using rsync from Cygwin to do a large-scale data transfer from an aging HP MSA 1000 to a new DAS attached to a different server. I have a daemon running on the remote server in read-only mode and a local copy writing the files to disk. One of my servers is an image repository with over a million files spread across about 300 directories. Each file averages only a couple hundred kilobytes. More so than any other box, this one is proving problematic.

    The rsync process will work for a while, sometimes 20 minutes, sometimes an hour, and then it simply quits and sits idle at a given file name. I have verified that the file isn't corrupt on the remote server and that the file is successfully created on the local drive. I ran the rsync client in -vv mode, which returns nothing. I checked the logs created by the daemon. I looked at the network utilization on the interface, which is sitting idle. I looked at the AV settings to see if anything could pose a problem there. I even updated to the latest release of Cygwin. What do I need to do in order to keep this connection up?

    EDIT: The client system is using the command:

        rsync.exe server::Drives/f/Repo/ /cygdrive/T/Repo --archive -P -vv

    The server is using the command:

        rsync.exe --daemon --no-detach --config "rsyncd.conf"

    The contents of rsyncd.conf:

        use chroot = false
        strict modes = false
        hosts allow = 192.168.100.9
        log file = c:/rsyncd.log
        uid=0
        gid=0

        [Drives]
        path = /cygdrive
        read only = yes

    EDIT: The file server is 2003, the disk type on the array is GPT, and the size of the array is about 4 TB.

    EDIT: Stranger... It looks like the process reliably errors out at about 175,000 files. rsync runs fine when I pick the directories it has problems with one at a time.

    EDIT: rsync version 3.0.9, protocol version 30.

        Copyright (C) 1996-2011 by Andrew Tridgell, Wayne Davison, and others.
        Web site: http://rsync.samba.org/
        Capabilities:
            64-bit files, 64-bit inums, 32-bit timestamps, 64-bit long ints,
            no socketpairs, hardlinks, symlinks, IPv6, batchfiles, inplace,
            append, ACLs, xattrs, iconv, symtimes

    A similar failure occurred when transferring the same set of files from Cygwin to a Linux install; it just didn't happen until several hours later than usual.
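    One rsync option that targets exactly this "sits idle forever" symptom; it's a long-standing rsync flag, though whether it cures this particular hang is an open question:

        rsync.exe server::Drives/f/Repo/ /cygdrive/T/Repo --archive -P --timeout=120

    --timeout makes rsync abort if no data moves for the given number of seconds, so a wrapper loop can restart the transfer instead of waiting on a dead connection indefinitely; combined with --archive, a restarted run skips the files already copied.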

    Read the article

  • OS X server large scale storage and backup

    - by user135217
    I really hope this question doesn't come across as trolling or asking for buying advice. It's not intended that way. I've just started working for a small ad agency (40 employees). I actually quit being a system administrator a few years ago (too stressful!), but the company we're currently outsourcing our IT to is doing such a bad job that I've felt compelled to get involved and do what I can to improve things.

    At the moment, all the company's data is stored on an 8 TB external FireWire drive attached to a Mac Mini running OS X Server 10.6, which provides file sharing (using AFP) for the whole company. There is a single backup drive: a caddy containing two 3 TB hard drives arranged in RAID 0 (arrggghhhh!), which someone brings in as and when, copying over all the data using Carbon Copy Cloner. That's the entirety of the infrastructure, and the whole backup and restore strategy. I've been having sleepless nights.

    I've just started augmenting the backup process with FreeBSD, ZFS, sparse bundles and snapshot sends to get everything offsite. I think this is a workable behind-the-scenes solution, but for people's day-to-day use I'm struggling. Given the quantity and importance of the data, I think we should really be looking at enterprise-level storage, high availability and so on, but the whole company is all Mac, all the time, and I cannot find equipment that will do what we need. No more Xserve; no rack storage; no large-scale storage at all apart from the Pegasus R6, which doesn't seem all that great; the Mac Pro has Fibre Channel, but it's not a real server and it's ludicrously expensive; Xsan looks like it's on the way out; things like heartbeatd and failoverd have apparently been removed from Lion Server; the new Mac Mini only has Thunderbolt, which severely limits our choices; the list goes on and on.

    I'm really, really not trying to troll here. I love Macs, but I just genuinely don't know where I'm supposed to look for server equipment. I have considered Linux or FreeBSD with netatalk for serving files, with all the server-y goodness those OSes bring, but some of the things I've read make me wonder if it's really the way to go, and in my own (admittedly quite cursory) experiments with it, I've struggled to get decent transfer speeds. I guess there's also the possibility of switching everyone off AFP and making them use SMB or NFS, but I understand this can cause big problems with resource forks and file locks.

    I figure there must be plenty of all-Mac companies out there. If you're the sysadmin at one, what do you use? Any suggestions very gratefully received.

    Read the article

  • Windows 2008 R2 TS printer security - can't take ownership

    - by Ian
    I have a Windows 2008 R2 server with the Terminal Server role installed. I'm seeing a problem with an ordinary user who is a member of the local Printer Operators group on the server. If the user opens a cmd window using 'run as administrator', they can run printmanager.msc without needing to enter their password again, and in the print manager they can change the ownership of redirected (Easy Print) printers without problems. If, from the same cmd window, they use subinacl to try to change the ownership of the queue to themselves, they get access denied:

        >subinacl.exe /printer "_#MyPrinter (2 redirected)" /setowner="MyDom\MyUsr"
        Elapsed Time: 00 00:00:00
        Done: 1, Modified 0, Failed 1, Syntax errors 0
        Last Done   : _#MyPrinter (2 redirected)
        Last Failed : _#MyPrinter (2 redirected) - OpenPrinter Error : 5 Access denied

    So: same context, same action, but one works and one doesn't. Any ideas about this odd behaviour? I'm using subinacl x86 on an x64 server, as I can't find anything more up to date. I've tried icacls and others but couldn't get them to do anything with printers.

    EDIT (added after Greg's comments regarding SetACL below): If I log into the TS server as testusr, open Admin Tools > Printer Admin (as administrator), and then type mydomain\testusr and testusr's password, I can change the ownership of the printer queue and set testusr as the owner. However, if I open cmd as administrator and, again, type mydomain\testusr and the user's password, then when I try to change the ownership of my redirected printer I get the following:

        C:\>setacl -on "Bullzip PDF Printer (12 redireccionado)" -ot prn -actn setowner -ownr n:mydom\testusr
        WARNING: Privilege 'Back up files and directories' could not be enabled. SetACL's powers are restricted.
        WARNING: Privilege 'Restore files and directories' could not be enabled. SetACL's powers are restricted.
        INFORMATION: Processing ACL of: <Bullzip PDF Printer (12 redireccionado)>
        ERROR: Enabling the privilege SeTakeOwnershipPrivilege failed with:
        Not all referenced privileges or groups are assigned to the caller.
        SetACL finished with error(s): SetACL error message: A privilege could not be enabled

    (The error line was originally in Spanish: "No todos los privilegios o grupos a los que se hace referencia son asignados al llamador.") Maybe I'm getting something wrong, but if the built-in Windows tool can do this with just membership of the Printer Operators group, then SetACL should be able to as well, no? However, SetACL seems to depend on other privileges which in reality are not required to do this.

    Read the article

  • How to get an inactive RAID device working again?

    - by Jonik
    After booting, my RAID1 device (/dev/md_d0 *) sometimes ends up in a funny state and I cannot mount it.

    (*) Originally I created /dev/md0, but it has somehow changed itself into /dev/md_d0.

        # mount /opt
        mount: wrong fs type, bad option, bad superblock on /dev/md_d0,
               missing codepage or helper program, or other error
               (could this be the IDE device where you in fact use
               ide-scsi so that sr0 or sda or so is needed?)
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    The RAID device appears to be inactive somehow:

        # cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md_d0 : inactive sda4[0](S)
              241095104 blocks

        # mdadm --detail /dev/md_d0
        mdadm: md device /dev/md_d0 does not appear to be active.

    The question is: how do I make the device active again (using mdadm, I presume)?

    (Other times it's fine (active) after boot, and I can mount it manually without problems. But it still won't mount automatically, even though I have it in /etc/fstab:

        /dev/md_d0   /opt   ext4   defaults   0   0

    So a bonus question: what should I do to make the RAID device automatically mount at /opt at boot time?)

    This is an Ubuntu 9.10 workstation. Background info about my RAID setup is in this question.

    Edit: My /etc/mdadm/mdadm.conf looks like this. I've never touched this file, at least by hand.

        # by default, scan all partitions (/proc/partitions) for MD superblocks.
        # alternatively, specify devices to scan, using wildcards if desired.
        DEVICE partitions

        # auto-create devices with Debian standard permissions
        CREATE owner=root group=disk mode=0660 auto=yes

        # automatically tag new arrays as belonging to the local system
        HOMEHOST <system>

        # instruct the monitoring daemon where to send mail alerts
        MAILADDR <my mail address>

        # definitions of existing MD arrays

        # This file was auto-generated on Wed, 27 Jan 2010 17:14:36 +0200

    In /proc/partitions the last entry is md_d0, at least now, after a reboot, when the device happens to be active again. (I'm not sure whether it would be the same when it's inactive.)

    Resolution: as Jimmy Hedman suggested, I took the output of mdadm --examine --scan:

        ARRAY /dev/md0 level=raid1 num-devices=2 UUID=de8fbd92[...]

    and added it to /etc/mdadm/mdadm.conf, which seems to have fixed the main problem. After changing /etc/fstab to use /dev/md0 again (instead of /dev/md_d0), the RAID device also gets automatically mounted!
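    For future readers stuck before the mdadm.conf fix above: an inactive array can usually be kicked back into life by stopping the half-assembled device and reassembling it, all standard mdadm usage (device name as in the question):

        mdadm --stop /dev/md_d0
        mdadm --assemble --scan

    The resolution in the question (pinning the array's UUID in mdadm.conf) remains the durable fix, since it stops the boot-time scan from inventing the md_d0 name in the first place.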

    Read the article

  • How to get wireless working (properly) with Sitecom Wireless USB micro adapter 300N on Windows 7?

    - by Timo
    The question says it all, but more detail follows ;) I've got a new computer running Windows 7 64-bit (Home Edition), and I'd like to connect it to my wireless home network (a Sitecom Wireless Gigabit Router 300N WL-352 v1 002) with a Sitecom Wireless USB Micro Adapter 300N WL-352 v2 001.

    After installing the router (i.e. connecting it to the modem and power) and ensuring that wireless is indeed enabled, I installed the USB adapter's driver (drivers and utility from the CD) on the new computer. After the installation completed successfully, I rebooted the computer and inserted the USB adapter. After discovering the right network and connecting to it using the network key, a connection was successfully made (using the Sitecom 300N USB Wireless LAN Utility). In the LAN utility I can see that the signal strength is approximately 50% and the connection quality approximately 80%.

    Judging from these numbers, I assumed all was fine and started using the connection (reading news on nu.nl, a Dutch news site), but noticed that the connection was dropped and resumed several times in a very short time span, each time returning to the 50/80 percent figures described above. However, the website would not load completely, and often a timeout was reported. Inspecting the drivers through Device Management (Windows' Apparaatbeheer in Dutch) showed no errors or warnings; everything seemed to be in order. In an attempt to solve this, I downloaded the latest drivers for the USB adapter, but the problems remained.

    Finally, I tried connecting the computer with a Siemens Gigaset USB Adapter 108. This process was troublesome in itself, since I had to download a driver and tell Windows 7 to use the Windows Vista driver when installing the new hardware, as there is (was) no Windows 7 driver available. This resulted in a usable connection, although not a very stable one while reconfiguring the router; that reconfiguration took the form of selecting a different wireless channel on the router, after using the Sitecom utility mentioned above to check that no other networks were communicating on that channel. Changing back to the Sitecom USB adapter: again no result. Note that this means (I think) that I could use the internet connection with the Siemens adapter, so the problem is not in the router.

    So: how do I get wireless working (properly) with the Sitecom Wireless USB Micro Adapter 300N on Windows 7?

    PS Sorry, I had links in place for the USB adapter, the router and the Siemens adapter as well, but I'm not (yet) allowed to post them.

    Read the article

  • How to keep group-writeable shares on Samba with OSX clients?

    - by Oliver Salzburg
    I have a FreeNAS server on a network with OS X and Windows clients. When the OS X clients interact with SMB/CIFS shares on the server, they cause permission problems for all other clients.

    Update: I can no longer verify any answers because we abandoned the project, but feel free to post any help for future visitors.

    The details of this behavior seem to depend on the version of OS X the client is running. For this question, let's assume a client running 10.8.2. When I mount the CIFS share on an OS X client and create a new directory on it, the directory is created with drwxr-xr-x permissions. This is undesirable because it will not allow anyone but me to write to the directory; there are other users in my group who should have write permissions as well. This happens even though the following settings are present in smb.conf on the server:

        [global]
        create mask = 0666
        directory mask = 0777

        [share]
        force directory mode = 0775
        force create mode = 0660

    I was under the impression that these settings should make sure directories are created with at least rwxrwxr-x permissions. But, I guess, that doesn't stop the client from changing the permissions after creating the directory. When I create a folder on the same share from a Windows client, the new folder gets the desired permissions (rwxrwxrwx), so I'm currently assuming the problem lies with the OS X client.

    I guess this wouldn't be such an issue if you could easily change the permissions of the directories you've created, but you can't. When opening the directory info in Finder, I get the old "You have custom access" notice with no ability to make any changes. I'm assuming this is because we're using Windows ACLs on the share, but that's just a wild guess. Changing the write permissions for the group through the terminal works fine, but that is impractical for this deployment and unreasonable to expect from anyone.

    This is the complete smb.conf:

        [global]
        encrypt passwords = yes
        dns proxy = no
        strict locking = no
        read raw = yes
        write raw = yes
        oplocks = yes
        max xmit = 65535
        deadtime = 15
        display charset = LOCALE
        max log size = 10
        syslog only = yes
        syslog = 1
        load printers = no
        printing = bsd
        printcap name = /dev/null
        disable spoolss = yes
        smb passwd file = /var/etc/private/smbpasswd
        private dir = /var/etc/private
        getwd cache = yes
        guest account = nobody
        map to guest = Bad Password
        obey pam restrictions = Yes
        # NOTE: read smb.conf.
        directory name cache size = 0
        max protocol = SMB2
        netbios name = freenas
        workgroup = COMPANY
        server string = FreeNAS Server
        store dos attributes = yes
        hostname lookups = yes
        security = user
        passdb backend = ldapsam:ldap://ldap.company.local
        ldap admin dn = cn=admin,dc=company,dc=local
        ldap suffix = dc=company,dc=local
        ldap user suffix = ou=Users
        ldap group suffix = ou=Groups
        ldap machine suffix = ou=Computers
        ldap ssl = off
        ldap replication sleep = 1000
        ldap passwd sync = yes
        #ldap debug level = 1
        #ldap debug threshold = 1
        ldapsam:trusted = yes
        idmap uid = 10000-39999
        idmap gid = 10000-39999
        create mask = 0666
        directory mask = 0777
        client ntlmv2 auth = yes
        dos charset = CP437
        unix charset = UTF-8
        log level = 1

        [share]
        path = /mnt/zfs0
        printable = no
        veto files = /.snap/.windows/.zfs/
        writeable = yes
        browseable = yes
        inherit owner = no
        inherit permissions = no
        vfs objects = zfsacl
        guest ok = no
        inherit acls = Yes
        map archive = No
        map readonly = no
        nfs4:mode = special
        nfs4:acedup = merge
        nfs4:chown = yes
        hide dot files
        force directory mode = 0775
        force create mode = 0660
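    One avenue worth testing (an assumption; it's not something the question reports trying): OS X uses the CIFS UNIX extensions to chmod a directory right after creating it, which silently overrides the server's force/create masks. Disabling the extensions in the [global] section makes the client fall back to the server-side modes:

        [global]
        unix extensions = no

    The trade-off is losing UNIX-extension features (symlinks, exact POSIX modes) for OS X clients on this share.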

    Read the article

  • Hang while starting several daemons

    - by Adrian Lang
    I'm running a Debian Squeeze AMD64 server. The target runlevel after boot is runlevel 2, which includes rsyslogd, cron, sshd and some other stuff, but not dovecot, postfix, apache2, etc. The system fails to reach runlevel 2, with several symptoms:

        - The system hangs at trying to start rsyslogd.
        - Booting into runlevel 1 works; login from the console then works.
        - Starting rsyslogd from runlevel 1 via /etc/init.d/rsyslog hangs.
        - Starting runlevel 2 with rsyslogd disabled works, but then logging in on the console fails: I get the motd, and then nothing.
        - Starting sshd from runlevel 1 succeeds, but then I cannot log in via ssh. Sometimes password login gives me the motd and then nothing, sometimes not even that. Offering a public key seems to annoy sshd enough that it won't talk to me any further.
        - When rebooting from runlevel 1, the server hangs at trying to stop apache2 (which is not running, so this really should be trivial). Trying to stop apache2 when logged in at runlevel 1 hangs as well.

    And that's just the stuff that fails every time. RAM has been tested, and dmesg shows no problems. I have no clue.

    Update: (shortened) output from rsyslogd -c4 -d called in runlevel 1:

        rsyslogd 4.6.4 startup, compatibility mode 4, module path ''
        caller requested object 'net', not found (iRet -3003)
        Requested to load module 'lmnet'
        loading module '/usr/lib/rsyslog/lmnet.so'
        module of type 2 being loaded
        conf.c requested ref for 'lmnet', refcount 1
        rsyslog runtime initialized, version 4.6.4, current users 1
        syslogd.c requested ref for 'lmnet', refcount now 2

    I can kill rsyslogd with Ctrl+C then. /var/log shows none of the configured log files, though.

    Update 2: Thanks to @DerfK I still have no clue, but at least I've narrowed down the problem. I'm now testing with /etc/init.d/apache2 stop (without an apache2 running, of course), which hangs as well and looks like an even more obvious failure. After some testing I found that a script containing the single line

        /usr/sbin/apache2ctl configtest > /dev/null 2>&1

    hangs, while the same line executed in an interactive shell works. I was not able to reduce this line further: every single part, the stream redirections and the command itself, is necessary to reproduce the hang. @DerfK also pointed me to strace, which gave a shallow hint about what kind of hang we have here:

        wait4(-1,                                            (for the init scripts)
        futex(0xsomepointer, FUTEX_WAIT_PRIVATE, 2, NULL     (for the rsyslogd / apache2 binaries called by the init scripts)

    The system was installed as a Debian Lenny by my hoster in autumn 2011; I upgraded it to Squeeze immediately and have kept it up to date with Squeeze, which back then was testing. There were no big changes, though. I guess I had never tried to reboot the system before.
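    For anyone reproducing this kind of diagnosis, the strace invocation that yields output like the above is standard (the -tt timestamps and -f child-following are the useful parts when an init script spawns the real daemon):

        strace -f -tt -o /tmp/hang.trace /etc/init.d/rsyslog start
        tail -n 50 /tmp/hang.trace

    A process parked in futex(..., FUTEX_WAIT_PRIVATE, ...) right after startup usually means it is waiting on a lock another thread never releases; which library takes that lock is exactly what the full trace should reveal.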

    Read the article

  • Can't Move Windows to 2nd Monitor without Left Mouse and Ctrl Key

    - by John C
    I have 2 very frustrating problems that maybe someone can help me with. I have 2 monitors (different sizes and resolutions) set up with the "Extended" desktop arrangement in Windows 7.

    My first problem is this: I cannot move a window from my primary monitor (larger, higher resolution, on the right, in front of me) to my secondary monitor (smaller, lower resolution) just by grabbing the title bar with the left mouse button and dragging it to the left. Windows 7 "snaps" the window back to the primary monitor even when it is physically over the second screen while I'm holding the left mouse button. I can prevent this by holding down the Ctrl key along with the left mouse button, but this is extremely annoying to me.

    Second, I typically "lose" focus when typing input on the 2nd monitor. Typing is erratic with regard to keystroke accuracy from my keyboard translated into input on the 2nd screen; there is no such problem with typing input on the primary monitor.

    I find this extremely annoying in Windows 7, and turning off the "snap" feature via the Control Panel does NOT work for me. Win7 stubbornly refuses to move my selected window to my 2nd monitor without me "forcing" it with the Ctrl key. Please tell me this is not a Win7 feature. Also, on my system, Windows Key + Shift + Left Arrow (pressed together), and the same combo with the Right Arrow, do nothing whatsoever. Windows Key with "+" does, however, maximize the current window across both monitors, and I can "restore" it with Windows Key and "-" back to its original monitor and size.

    I have tried various solutions, including changing the resolutions of one or both monitors, which sometimes "temporarily helps" but then the problem comes back. Also, if I swap the logical (not physical) layout, telling Win7 the monitors are set up in reverse (large monitor on the left, small on the right), this also sometimes helps for a while, though it is very strange and awkward to work with "backwards". But all of these workarounds eventually stop working. The only thing that consistently works for moving windows is holding the Ctrl key down while moving the window with the left mouse button on the title bar. Even that, however, doesn't prevent the loss of typing focus on the 2nd monitor, while typing on the 1st monitor stays fine.

    Any help on moving windows from one monitor to the other without having to press the Ctrl key while holding the left mouse button would be appreciated, as would any help on keeping typing "focus" on my 2nd screen. Thanks - John
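    If the Control Panel checkbox isn't sticking, it may be worth checking the registry value it writes; this is the documented location for the Aero Snap setting (a log-off/log-on is needed afterwards, and whether it also cures the multi-monitor drag behavior described here is an assumption):

        reg query "HKCU\Control Panel\Desktop" /v WindowArrangementActive
        reg add "HKCU\Control Panel\Desktop" /v WindowArrangementActive /t REG_SZ /d 0 /f

    A value of 0 disables window arrangement (snap); querying first shows what the checkbox actually wrote.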

    Read the article

  • Wireless Network Performance Issues

    - by colithium
    My brand new Dell XPS system has been running flawlessly except for its abysmal download speeds. I have tried isolating every variable I could possibly think of, but I can't figure out the problem. I've talked to Dell and Belkin without making progress (thought I'd try). The speed-test screenshots are missing from this copy; note that most of the time, upload speeds are actually much faster than download speeds (around 4.0 Mb/s, which is better than most other devices on the network).

    It's not the ISP: the slowdown happens even when transferring files inside the network, and every other wireless device on it gets normal speeds.
    It's not the wireless router: it's a Linksys WRT160N v1 with the latest firmware (1.02.2), and everything else connected to it behaves normally.
    It's not the browser: speeds are the same in IE, FF, and when transferring files with Windows between computers.
    It's not the wireless adapter: I've tried a Belkin N Wireless USB Adapter (which works fine on another computer) and a Dell Wireless Draft 802.11n WLAN Mini-Card, and both show the same slow speeds when connected to the problem computer.
    It's not the adapter connection: one adapter used USB and the other is a Mini-Card.
    It's not antenna placement: with the same antenna position and the same device, I get different speeds on the problem computer vs. a good computer, and everything reports the connection speed as at least 11 Mbps with good signal strength.

    I've tried disabling IPv6, since it sometimes causes weird problems. I've tried disabling Windows Firewall and the anti-virus. I've ensured the computer has updated drivers for both adapters, and that Windows and the BIOS are up to date. For the USB adapter, I verified that the USB port functions at normal speeds with other USB devices. What else could it possibly be?

    I finally received my copy of Windows 7 and will be trying that. I'd rather not install Windows 7 because a particular program will stop working, so a solution besides that is welcome.

    Specs: Vista x64, Core i7 920, 6 GB RAM, 500 GB HD, GTX 260.

    Read the article

  • Mac + Parallels and HTTPS site test = router restarts

    - by Erik
    OK, I have an interesting and very frustrating problem happening. I'm going to explain it the best I can.

    I work as a graphic designer and web designer on a Mac, with a Comcast internet connection that comes through a Comcast-branded router (SMC8014), which then ties into an AirPort Extreme Base Station that runs my office network. I run OS X 10.5.7 and also run Parallels 4.0.3 (running Windows XP) for testing websites in Internet Explorer and so on. OK, so that's the basic background.

    Here's my issue. I've been collaborating on an ecommerce website with another designer/developer, and when testing the site on the PC side we have started to run into some sort of network problem. The site is https, if that matters at all; I suspect it may. Basically, when I run Parallels for testing, I am constantly having to restart the router in order to connect to the (hosted) test site. The funny thing is that I can access the rest of the internet fine, just not the site I'm working on, until I restart the router (it's sort of like the site is timing out). This never happens when just running the Mac side of things. It only becomes an issue when Parallels is open and I am doing page refreshes while making CSS or HTML edits via something like Coda or CSSEdit (connected to the hosting server via FTP). The real problem is that once it starts, I only get about 2 or 3 page loads before I have to restart the router again. It's absolutely crippling: I cannot get any work done when I have to restart the router every couple of minutes.

    And if you think this problem is isolated to me, the answer is no. The designer/developer I'm collaborating with has an office a couple of miles away and experiences very similar problems under a slightly different setup. He also has Comcast as his internet provider, connects his router to an AirPort, and primarily works on a Mac. The main difference is that rather than using a virtualizer like Parallels to test the website on the PC, he uses a real live PC on his network. Once he fires up the PC to do testing, he runs into the same issue described above: after a couple of page refreshes in Internet Explorer or another browser on the PC, the site becomes unresponsive and the router has to be restarted.

    Any thoughts on what is going on here would be greatly appreciated. Thanks in advance.

    Read the article

  • Windows 7 cannot boot - bootrec reports FS not found or corrupt

    - by purecharger
    For 3 days now I've been unable to boot into my Windows 7 partition, and all my research has been to no avail. I'm hoping someone here has more ideas on how to fix this.

    When I boot up now, I get a black screen with a BCD error saying there's no valid file system or it may be corrupt (pardon my lack of detail; no copy/paste is available there). When I boot with the Windows 7 disc and go into the repair tools, no operating system is found, and attempting to automatically repair the problem fails with Unknown Operating System (Unknown Disk) or something similar. When I drop into the command prompt, I am able to see and navigate my C:\ drive without issue.

    I attempt to use bootrec:

        C:\> bootrec /ScanOS

    finds C:\Windows as a system partition, but

        C:\> bootrec /RebuildBCD

    fails with "volume does not contain a recognized file system. please make sure that all required file system drivers are loaded and that the volume is not corrupted." So then I attempt to fix the boot sector:

        C:\> bootsect /nt60 C: /force

    which completes successfully (sorry, no output saved). Upon rebooting, I have the same problem. I've also tried all of the above after making my Windows partition active:

        C:\> diskpart
        DISKPART> select disk 1
        DISKPART> select partition 1
        DISKPART> active
        DISKPART> exit

    then bootrec as above, both with and without a reboot after the DISKPART commands. I've also tried rebuilding the BCD store by hand:

        set systemdrive=C:
        set tempbcd=C:\boot\bcd.temp
        set tempfile=C:\boot\temp.txt
        bcdedit -createstore %tempbcd%
        bcdedit.exe -store %tempbcd% -create {bootmgr} -d "Windows Boot Manager"
        bcdedit -store %tempbcd% -create -d "Windows Vista" -application osloader>%tempfile%
        set /p winvistaguid= <%tempfile%
        set winvistaguid=%winvistaguid:~10,38%
        bcdedit -store %tempbcd% -set %winvistaguid% osdevice partition=%systemdrive%
        bcdedit -store %tempbcd% -set %winvistaguid% device partition=%systemdrive%
        bcdedit -store %tempbcd% -set %winvistaguid% path \Windows\system32\winload.exe
        bcdedit -store %tempbcd% -set %winvistaguid% systemroot \Windows
        bcdedit -import %tempbcd%

    However, on the import I get my familiar friendly message: "volume does not contain a recognized file system. please make sure that all required file system drivers are loaded and that the volume is not corrupted."

    I'm at my wits' end here, and I cannot understand why Windows refuses to see this as a valid install. When I list the disk/partition in DISKPART, it shows up as NTFS and "Healthy", and I can navigate the directory structure from the command prompt with no problems. I really, really do not want to reformat and reinstall. I know this problem can be solved!
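    One check not mentioned in the question, and worth a shot given that everything works except bcdedit's view of the volume: let chkdsk repair the NTFS metadata from the same recovery command prompt. Navigating a volume exercises far less of the file system than the boot tools do, so this is an assumption that subtle corruption is the blocker, but it's quick and safe:

        chkdsk C: /f

    After it finishes, re-running bootrec /RebuildBCD is the natural next step.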

    Read the article

  • Proxy Error 502 "Reason: Error reading from remote server" with Apache 2.2.3 (Debian) mod_proxy and Jetty 6.1.18

    - by Martin
    Apache is receiving requests at port :80 and proxying them to Jetty at port :8080. The error page reads:

        The proxy server received an invalid response from an upstream server.
        The proxy server could not handle the request GET /.

    My dilemma: everything works fine normally (fast requests, and requests lasting a few seconds or a few tens of seconds are processed OK). Problems occur when request processing takes long (a few minutes?). If I instead issue the request directly to Jetty at port :8080, it is processed OK. So the problem likely sits somewhere between Apache and Jetty, where I am using mod_proxy. How do I solve this? I have already tried some "tricks" related to KeepAlive settings, without luck. Here is my current configuration; any suggestions?

        #keepalive Off                      ## I have tried this, does not help
        #SetEnv force-proxy-request-1.0 1   ## I have tried this, does not help
        #SetEnv proxy-nokeepalive 1         ## I have tried this, does not help
        #SetEnv proxy-initial-not-pooled 1  ## I have tried this, does not help
        KeepAlive 20                        ## I have tried this, does not help
        KeepAliveTimeout 600                ## I have tried this, does not help
        ProxyTimeout 600                    ## I have tried this, does not help

        NameVirtualHost *:80
        <VirtualHost _default_:80>
          ServerAdmin [email protected]
          ServerName www.mydomain.fi
          ServerAlias mydomain.fi mydomain.com mydomain www.mydomain.com

          ProxyRequests On
          ProxyVia On
          <Proxy *>
            Order deny,allow
            Allow from all
          </Proxy>
          ProxyRequests Off
          ProxyPass / http://www.mydomain.fi:8080/ retry=1 acquire=3000 timeout=600
          ProxyPassReverse / http://www.mydomain.fi:8080/

          RewriteEngine On
          RewriteCond %{SERVER_NAME} !^www\.mydomain\.fi
          RewriteRule /(.*) http://www.mydomain.fi/$1 [redirect=301L]

          ErrorLog /var/log/apache2/error.log
          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn
          CustomLog /var/log/apache2/access.log combined
          ServerSignature On
        </VirtualHost>

    Here is also the debug log from a failing request:

        74.125.43.99 - - [29/Sep/2010:20:15:40 +0300] "GET /?wicket:bookmarkablePage=newWindow:com.mydomain.view.application.reports.SaveReportPage HTTP/1.1" 502 355 "https://www.mydomain.fi/?wicket:interface=:0:2:::" "Mozilla/5.0 (Windows; U; Windows NT 6.1; fi; rv:1.9.2.10) Gecko/20100914 Firefox/3.6.10"
        [Wed Sep 29 20:20:40 2010] [error] [client 74.125.43.99] proxy: error reading status line from remote server www.mydomain.fi, referer: https://www.mydomain.fi/?wicket:interface=:0:2:::
        [Wed Sep 29 20:20:40 2010] [error] [client 74.125.43.99] proxy: Error reading from remote server returned by /, referer: https://www.mydomain.fi/?wicket:interface=:0:2:::
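    Two ProxyPass worker options that specifically target "error reading status line from remote server" on Apache 2.2; both are documented mod_proxy parameters, though whether they fix this particular Jetty pairing is untested here. Either disable connection reuse so a stale pooled connection is never handed a new request, or cap the pooled connection's lifetime below the backend's idle timeout:

        ProxyPass / http://www.mydomain.fi:8080/ retry=1 acquire=3000 timeout=600 disablereuse=On
        # or, keeping the pool but recycling connections every 60 s:
        # ProxyPass / http://www.mydomain.fi:8080/ retry=1 acquire=3000 timeout=600 ttl=60

    disablereuse trades a little connection-setup overhead for immunity to the backend silently closing idle keep-alive connections.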

    Read the article

  • How to Set Up an Ubuntu Mail Server with Google Apps?

    - by Apreche
    I have a domain; let's call it foobar.com. All of the MX records for foobar.com point to Google's mail servers, because I am using Google Apps for your domain to manage it. It's great, because everyone gets all the advantages of Gmail but our e-mail addresses aren't @gmail.com.

    I also have a server. Primarily it's a web server, but it serves other things too. One of the things it serves is the web site for foobar.com, along with sites for various virtual hosts such as shop.foobar.com and forum.foobar.com. The server is running Ubuntu 8.04, because I like using LTS releases in production.

    The thing is, various applications running on the server need the ability to send out e-mail. Cron jobs send me e-mails in case of errors. Some of the web applications need to send e-mail to users when they forget their passwords, to confirm newly registered users, etc. Lastly, it's nice to be able to send e-mail from the command line using the mail command, or mutt.

    How can I set up the mail on the web server to go through the Google Apps mail servers? I don't need the web server to receive mail, though that would be cool. I do need it to be able to send mail as any legitimate address @foobar.com. That way the forum application can send mail with [email protected] in the From field, and the ecommerce application can have [email protected] in the From field. Also, by sending the mail through Google's servers, we can avoid many of the problems with e-mail being blocked by the various spam filters on the web; Google's SMTP servers are trusted a lot more than mine would be.

    I'm pretty good at administering Linux systems, but I am absolutely brain dead when it comes to e-mail. I need step-by-step directions from beginning to end on how to set this up: everything to install, and every single change to the configuration files that is necessary. I have tried following various howtos and guides in the past, but none of them was quite right; either they didn't work at all, or they offered a configuration that wasn't what I wanted. Please help. Thanks.
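    A minimal sketch of the usual Postfix answer (assumptions: Postfix as the MTA, a dedicated Google Apps account for relaying, and package names as on Ubuntu 8.04; treat the exact parameter set as a starting point, not a definitive recipe):

        # /etc/postfix/main.cf additions
        relayhost = [smtp.gmail.com]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_tls_security_level = encrypt

        # /etc/postfix/sasl_passwd (credentials for the relay account)
        [smtp.gmail.com]:587 [email protected]:the-account-password

        # apply
        sudo apt-get install postfix libsasl2-modules
        sudo postmap /etc/postfix/sasl_passwd
        sudo /etc/init.d/postfix restart

    One caveat worth knowing up front: Google rewrites the sender to the authenticated account unless the other @foobar.com addresses are registered as "Send mail as" aliases for it.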

    Read the article

  • Exchange 2010 internal Autodiscover: migrating away from the current .local DNS name

    - by Bryan
    We have an Exchange 2010 server, running within our Active Directory domain, with an internal hostname of server.example.local. The server is configured for Outlook Anywhere, but currently has a self-signed certificate with a name of server.example.local installed. Internally, clients connect and work fine; externally, we are getting certificate errors, as you would expect.

    I'm about to purchase a UCC SSL certificate, with all the relevant SANs on it, to install on the server and correct this. But because of the obvious problem of obtaining a trusted certificate with .local as a subject alternative name, I'm looking to configure clients on the internal network so that they don't use any reference to the .local hostname.

    I've configured our external DNS name for the server as exchange.example.com, and have created a CNAME for autodiscover.example.com which also (correctly) points to exchange.example.com. I've also created internal DNS records for these two hostnames pointing to the internal interface of the same server. I don't anticipate any problems there.

    I'm now trying to reconfigure Autodiscover internally, so that Outlook attempts to connect to exchange.example.com. I've followed the steps in KB940726 to prepare for this, and that appeared to work fine: no errors were generated, and I was able to verify the CAS name in AD using ADSI Edit. I've just tested this with a newly created test user account, complete with a new Exchange mailbox, and Outlook 2007 connects fine on the internal network; but looking deeper into the Exchange profile, Outlook is still resolving the server name as server.example.local. Could it be the self-signed certificate that is causing Outlook to display the server name as server.example.local, or is there still something wrong with my internal Autodiscover configuration?

    Edit: I've proven it isn't the certificate that is responsible for Outlook returning server.example.local, by installing another self-signed certificate with a name of test.example.com. When creating a new Outlook profile, I get the name-mismatch error I'm expecting, but after accepting the certificate and finishing the configuration of the Outlook profile, it still shows server.example.local as the server name. This means that if I were to purchase the UCC certificate now, external clients would work fine, but internal clients would show a certificate name mismatch. Any ideas where to start diagnosing this?
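    For reference, the internal Autodiscover endpoint from KB940726 and the "server name" box in the Outlook profile are controlled by different settings. A sketch of both, in the Exchange Management Shell, with this question's host names substituted in (verify against your own CAS before running anything):

        Set-ClientAccessServer -Identity SERVER `
            -AutoDiscoverServiceInternalUri "https://autodiscover.example.com/Autodiscover/Autodiscover.xml"

        # the profile's "server" field comes from the RPC endpoint on the mailbox database,
        # not from Autodiscover; it can be repointed at a name the certificate covers:
        Get-MailboxDatabase | Set-MailboxDatabase -RpcClientAccessServer "exchange.example.com"

    One caveat: internal MAPI/RPC connections don't validate certificate names the way HTTPS does, so the .local name in the profile is largely cosmetic for direct RPC; the names that must appear on the UCC certificate are the HTTPS ones (Autodiscover, EWS, OAB, Outlook Anywhere).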

    Read the article

  • How to conform to update-rc.d with LSB standard?

    - by user34881
    This is a question migrated from Stack Overflow, as I was told this is the place for it: http://stackoverflow.com/questions/2263567/how-to-conform-to-update-rc-d-with-lsb-standard

    I have set up a simple script to back up some directories. While I haven't had any problems with the backup functionality itself, I'm stuck adding the script to the rcX.d directories using update-rc.d. My script:

        #! /bin/sh
        ### BEGIN INIT INFO
        # Provides:          backup
        # Required-Start:    backup
        # Required-Stop:
        # Should-Stop:
        # Default-Start:     0 6
        # Default-Stop:
        # Description:       Backs up some dirs
        ### END INIT INFO

        check_mounted() {
            # Check if HD is mounted
            :   # placeholder so the stub is valid sh
        }

        do_backup() {
            if check_mounted; then
                # Some rsync statements.
                :   # placeholder so the stub is valid sh
            fi
        }

        case "$1" in
          start)
            do_backup
            ;;
          restart|reload|force-reload)
            echo "Error: argument '$1' not supported" >&2
            exit 3
            ;;
          stop|"")
            # No-op
            ;;
          *)
            echo "Usage: backup [start]" >&2
            exit 3
            ;;
        esac
        :

    Using

        update-rc.d backup start 10 0 6 .

    I get the following warnings and errors:

        update-rc.d: warning: backup start runlevel arguments (none) do not match LSB Default-Start values (0 6)
        update-rc.d: warning: backup stop runlevel arguments (0 6.) do not match LSB Default-Stop values (none)
        update-rc.d: error: start|stop arguments not terminated by "."

    The syntax I am trying to use is the following:

        update-rc.d [-n] <basename> start|stop NN runlvl [runlvl] [...] .

    Google wasn't that helpful at finding a solution. How can I correctly set up a script and add it via update-rc.d? I'm using Ubuntu 9.10.

    UPDATE: Using

        update-rc.d backup start 10 0 6 . stop 10 0 .

    the error disappears, but the warnings about default values persist:

        update-rc.d: warning: backup start runlevel arguments (none) do not match LSB Default-Start values (0 6)
        update-rc.d: warning: backup stop runlevel arguments (0 6 0 6) do not match LSB Default-Stop values (none)

    It even gets added to the appropriate rcX.d directories, but it still does not get executed...
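    A self-consistent arrangement that avoids both warnings (a sketch, not verified on 9.10 specifically): on Debian-derived systems, the links in rc0.d and rc6.d are invoked with the "stop" argument at shutdown, which would also explain why the links created above never run the backup (the script's stop case is a no-op). So the LSB header should declare 0 and 6 in Default-Stop, the script should do its work on "stop", and update-rc.d should register only stop links:

        ### BEGIN INIT INFO
        # Provides:          backup
        # Required-Start:
        # Required-Stop:
        # Default-Start:
        # Default-Stop:      0 6
        # Description:       Backs up some dirs before halt/reboot
        ### END INIT INFO

        # in the case statement, move do_backup under "stop)"

        update-rc.d backup stop 10 0 6 .

    The warnings quoted above are exactly this kind of mismatch: the header promised start links in 0 and 6 while the command line asked for something else. Whether the backup then runs early enough (before filesystems are unmounted) depends on the chosen sequence number, 10 here.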

    Read the article
