Search Results

Search found 20755 results on 831 pages for 'custom map'.


  • Numeric UIDs/GIDs in ACLs on OS X server (10.6)

    - by Oliver Humpage
    Hi,

    On one (old OS X 10.4) server I'm tarring up some files which have ACLs. I'm then using "tar -xp" to untar the archive onto a new 10.6 server, which doesn't have any users/groups configured on it yet except the default admin (UID 501) (there's a reason for that, don't ask!). Obviously this means an "ls -lne" will list files and ACLs with numeric UIDs and GIDs.

    Now for the normal file permissions it makes sense: you get UIDs like "1037". And for some ACLs it also makes sense: you get things like "AAAABBBB-CCCC-DDDD-EEEE-FFFF00000402" for groups (0x402 = GID 1026) and "FFFFEEEE-DDDD-CCCC-BBBB-AAAA000001F5" for users (0x1F5 = UID 501).

    However, some ACLs have UIDs like "E51DA674-AE70-41BC-8340-9B06C243A262" or GIDs like "0A3FCD24-0012-46FA-B085-88519E55EF29", and I have absolutely no idea how to translate these IDs back into something that could be matched to the original IDs (UID 1072 and GID 1047 respectively in this example). Can anyone help me translate these weird long hex strings?

    (Basically we're moving from local users to an Active Directory setup, so I want to move all files to the new server with permissions intact, then chmod, chgrp and set ACLs such that we translate old IDs to the new AD IDs. Hence needing some way to map between the sets. I don't believe there's an easier way to do this?)

    Many thanks, Oliver.
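    On OS X these opaque GUIDs can usually be mapped with the Directory Service membership tools. As a sketch (not tested against this exact setup): dsmemberutil exists on 10.5 and later, so the 10.6 target can translate GUIDs it knows about, while on the 10.4 source each local user's stored GUID can be read with dscl:

        # On the 10.6 box: map a GUID back to a numeric ID (GUID taken from the question)
        dsmemberutil getid -X E51DA674-AE70-41BC-8340-9B06C243A262

        # On the 10.4 source: read a local user's generated GUID ("someuser" is a placeholder)
        dscl . -read /Users/someuser GeneratedUID

    Walking all users and groups on the old server this way should produce the GUID-to-ID table needed for the remapping.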


  • How do I fix a corrupt calendar cache?

    - by Blacklight Shining
    I was tailing /var/log/system.log and noticed a sudden wall of text. Looking closer, I saw it was an error CalendarAgent got while trying to save something:

        Nov 18 11:42:45 rainbow-dash.local CalendarAgent[12321]: CoreData: error: (11) Fatal error. The database at /Users/blackl/Library/Calendars/Calendar Cache is corrupted. SQLite error code:11, 'database disk image is malformed'
        Nov 18 11:42:45 rainbow-dash.local CalendarAgent[12321]: Core Data: annotation: -executeRequest: encountered exception = Fatal error. The database at /Users/blackl/Library/Calendars/Calendar Cache is corrupted. SQLite error code:11, 'database disk image is malformed' with userInfo = {
            NSFilePath = "/Users/blackl/Library/Calendars/Calendar Cache";
            NSSQLiteErrorDomain = 11;
        }
        2 messages repeated several times
        Nov 18 11:42:49 rainbow-dash.local CalendarAgent[12321]: [com.apple.calendar.store.log.subscription] [WARNING: CalSubscriptionSession :: persistError :: save failed]

    This entire sequence is repeated many times throughout the log. file said the file in question was a SQLite 3.x database, so I did a bit of searching and came up with a way to check those:

        blackl% cp -i ~/Library/Calendars/Calendar\ Cache /tmp
        blackl% sqlite3 /tmp/Calendar\ Cache
        SQLite version 3.7.12 2012-04-03 19:43:07
        Enter ".help" for instructions
        Enter SQL statements terminated with a ";"
        sqlite> pragma integrity_check ;
        *** in database main ***
        Main freelist: Bad ptr map entry key=863 expected=(2,0) got=(5,21)
        On page 21 at right child: 2nd reference to page 863

    This is followed by a few dozen lines like these:

        rowid <number> missing from index <name>

    and then:

        wrong # of entries in index <name>

    I'm at a bit of a loss as to what to do now—I couldn't find anything on how to fix the errors that I found. Also, it would probably be a good idea to disable CalendarAgent so it doesn't try to use the database while it's being fixed (that's why I copied it to /tmp before running sqlite3 on it.)

    How do I disable CalendarAgent and fix its cache?
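    For the shape of a possible fix: CalendarAgent is a launchd agent, and a corrupt SQLite database can sometimes be rebuilt by dumping whatever is still readable into a fresh file. A rough sketch, with the launchd plist path assumed (check /System/Library/LaunchAgents):

        # Stop the agent
        launchctl unload /System/Library/LaunchAgents/com.apple.CalendarAgent.plist

        # Rebuild the database from a dump of the readable pages
        cd ~/Library/Calendars
        sqlite3 "Calendar Cache" .dump | sqlite3 "Calendar Cache.new"
        mv "Calendar Cache" "Calendar Cache.bad"
        mv "Calendar Cache.new" "Calendar Cache"

        # Restart the agent
        launchctl load /System/Library/LaunchAgents/com.apple.CalendarAgent.plist

    Since the file is a cache, simply moving it aside and letting CalendarAgent regenerate it may also be enough.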


  • Intermittent PHP error: Undefined function <core function>

    - by Daniel
    In the last week I've been coming across an incredibly annoying error on one of my Slicehost slices. It appears that every now and then PHP will fail with a fatal error, saying a certain function is undefined. The function changes, but is always a core PHP function, e.g. defined(), version_compare(), etc. This problem has occurred while using several different PHP applications (PHPMyAdmin, my own custom built apps, etc.), leading me to believe that the problem is not specific to the running code.

    Here are some details:

    - Debian Lenny
    - Apache 2.2.9
    - PHP 5.2.6-1+lenny4 with Suhosin-Patch (running eAccelerator 0.9.6)

    Apache and PHP are installed from Debian packages. Error logs show nothing out of the ordinary. I thought memory might be an issue, but free -m reports upwards of 100MB free almost all the time. Another thing I'm trying to investigate is whether the problem might be related to eAccelerator, but testing this theory out is incredibly hard because the issue doesn't appear very often, and I've been using eAccelerator for months on this install without any problems up until now.

    Has anyone ever come across anything like this? Why would PHP report undefined core functions?
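    Opcode-cache corruption is a classic source of "impossible" PHP errors, so one low-risk experiment is running without eAccelerator for a while and clearing its on-disk cache. A sketch, with the Debian config and cache paths assumed:

        # Disable eAccelerator (adjust the path for your install)
        sed -i 's/^eaccelerator.enable.*/eaccelerator.enable = 0/' /etc/php5/conf.d/eaccelerator.ini

        # Clear any cached bytecode (default cache_dir assumed)
        rm -rf /var/cache/eaccelerator/*

        /etc/init.d/apache2 restart

    If the undefined-function errors stop over a comparable period, the cache is the prime suspect.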


  • Replacing stock Core 2 Duo heatsink fan (just the fan really) with a Dell CPU fan

    - by user647345
    My old heatsink fan broke and I'm trying to reconnect its plug to a new fan. My Dell CPU fan has some custom Dell plug, so I snipped the old fan's wire in half and kept the plug on the end of it. I want to splice the Dell fan's wires onto that plug. The motherboard is an Asus P5Q-E; the stock Core 2 Duo fan was 0.20A and the Dell is 0.70A. Is that going to matter?

    The wire from the fan has four wires, and the wire with the plug has four wires. They share three of the four colors: red, black, and blue. Dell's fourth wire is white, while the plug's fourth wire is yellow. Is it safe to assume that I just connect the yellow and the white wires together and match the rest up? I don't want to take any risk of damaging anything.

    It runs fine passively without a fan, but I have SpeedStep on, so I would like to use this fan and just fasten it to the heatsink with some twist ties and paperclips and call it a day.


  • wrt54gl reboots; troubleshooting steps?

    - by Bill
    I am using about 10 WRT54GLs in a small school, with a combination of stock firmware and Tomato 1.25, slowly moving towards all Tomato. We have had these devices installed for several years without problems. Recently, more and more of the units have started to spontaneously reboot, usually during high-traffic times (but not always). For the most part, the rebooting is not critical for us, but the WRT54GLs temporarily revert to 192.168.1.1 on the LAN ethernet ports and conflict with a critical server that's already installed with that IP. (Yes, we plan to move the server off that address, but it is an involved process.) Both Tomato and the stock firmware (several versions, from recent to several years old) exhibit the same problem: random reboots, reverting to 192.168.1.1, and conflicting temporarily with our server until the firmware boot process finishes.

    Here are my questions:

    1. Is there any way to prevent the WRT54GLs from reverting to 192.168.1.1 during the boot process? I was thinking of doing a custom firmware mod, although I hate to go that direction.
    2. What steps can I take to troubleshoot the reboots? Only some of the WRT54GLs reboot, which is odd. Others stay online for weeks and months without issues.

    Thanks.
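    For the troubleshooting half, a cheap first step is shipping the routers' logs off-box so whatever happens just before a reboot is preserved. Tomato can send to a remote syslog host from its admin UI; a quick way to capture that traffic on a Linux box, as a sketch (interface assumed):

        # Watch for syslog packets from the routers (assumes nothing else owns UDP 514)
        tcpdump -i eth0 -A -n udp port 514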


  • Network Traffic Log

    - by Chris Becke
    Background: on my "home" network I have a Linksys WRT54GL router providing my internet access as well as a wireless AP. Connected I have:

    * 2 Windows PCs (wired)
    * At least one laptop (wired)
    * Some 802.11 enabled handheld consoles (PSPs)
    * A Nintendo Wii
    * Some Windows XP PCs used by the people in the granny flat

    Where I live, South Africa, 1GB worth of monthly cap is, while not expensive, costly enough that I'd like to be sure that all the bandwidth used by devices on my network is... well... legitimate and not the result of neighbors leeching my wireless, malware, or just the result of "liberal" download policies in my software.

    I got the WRT54GL on the understanding that there were custom firmwares (DD-WRT and Tomato) that allowed bandwidth tracking, but there doesn't seem to be any facility to get a log of traffic that can be examined to see (a) which local devices were the biggest consumers of bandwidth and (b) what they were connected to.

    What tools are there for logging traffic such that, when it gets to that OMG moment in the month when all my bandwidth is gone, I have a chance to find out what the hell used it all up (and hopefully attempt some corrective action)?
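    For question (a), a lightweight trick on DD-WRT/Tomato is per-IP accounting with iptables: a rule with no target does nothing except count the packets that match it. A sketch, assuming shell access to the router and a 192.168.1.x LAN:

        # Create a counting chain and pass all forwarded traffic through it
        iptables -N TRAFFIC
        iptables -I FORWARD -j TRAFFIC

        # One counting rule per direction per host (addresses assumed)
        for i in 100 101 102 103; do
            iptables -A TRAFFIC -s 192.168.1.$i
            iptables -A TRAFFIC -d 192.168.1.$i
        done

        # Read the per-host byte counters
        iptables -L TRAFFIC -vnx

    Question (b) needs flow export rather than counters; DD-WRT's RFlow (NetFlow) feature pointed at a collector PC is the usual suggestion there.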


  • How to disable auto insert notification in Windows 7?

    - by White Phoenix
    Alright, here's the problem. The hard drive activity light on my custom built PC is blinking exactly once every second. Microsoft has this to say on the issue: http://support.microsoft.com/kb/138598

    There was discussion of this issue several months ago: Why does my hard drive LED light blink every second? The problem seems to stem primarily from Windows 7 polling the CD-ROM/DVD drive every second to see if something is inserted. The Windows 7 users in the thread linked in the superuser question, https://social.technet.microsoft.com/Forums/fi-FI/w7itprohardware/thread/4f6f63b3-4b58-4154-9298-1566100f9d00, have confirmed that this IS a known issue with Windows 7. Some people point at the motherboard circuitry causing the CD-ROM and SATA activity to both be linked to that hard drive activity light, but whatever the case, the temporary solution seems to be to disable the CD/DVD-ROM drive in Device Manager. In fact, disabling the CD/DVD-ROM does stop the blinking, but of course this solution is counterproductive, because I shouldn't have to entirely disable a device to fix this problem.

    I've tried the following suggestions from that thread:

    - Change the autorun registry entry to 0
    - Completely disable autoplay in the autoplay control panel
    - Disable autoplay in the Local Group Policy Editor

    None of these stop the blinking. Apparently these solutions work for both XP and Vista, but it seems to be different in Windows 7. So I'm wondering if anyone has found out how to completely disable the polling in Windows 7, or if this is just an issue we will have to live with. There's no option to disable the auto insert notification when you go to the device within Device Manager (there was in XP), so I have no idea where this option is hidden, or if there's a registry key I could change to stop the polling. Anyone have any idea?


  • Recommendation for Document Management Solution

    - by BillN
    We've just been informed by our software vendor that the custom document management system they'd written is no longer in development and will not be supported in the future. So we are looking at new document management systems. Requirements:

    - Multiple input vectors: we receive documents via e-mail, fax, scanning, and from the originating application.
    - Ability to redact or obscure data. Customers may fax an order with CC data; we want to attach the image of the order form to the order record, but the CC data needs to be protected. Same with Tax IDs. Certain users should be able to see the redacted data, but access should be logged.
    - Version control on documents. We'd like Product Development and Marketing to be able to track various versions of documents like packaging designs, but ensure that users have the latest approved version.
    - AD integration; my users don't need another password.
    - Ability to integrate with other apps. Our current system offers function keys in the order-entry system that will spawn the viewer application and open the correct document.
    - Mass import facility: we have half a terabyte of existing documents in the old system that we would like to import.
    - Retention policy. I'd like a way to have the system comply with the corporate retention policy, so that when a document of a certain type reaches a certain age, it gets deleted, or at least marked for manual deletion.

    We are a Windows Server and HP-UX shop. Does anybody have any experience with document management systems that they would like to share? Thanks.


  • Are relative-path symlinks reliable on Rackspace Cloud Sites?

    - by Jakobud
    Rackspace's Cloud Sites have a lot of stupid limitations. For example, no SSH (in or out), no shell, no RSYNC, etc. (even through cron). Recently I learned that you can't reliably use symlinks in Cloud Sites. Apparently this is because the absolute path of your sites could change at any moment, since it's a shared host environment split up between many disks/servers. I guess different accounts' sites get moved from disk to disk whenever Rackspace decides to, supposedly to increase efficiency across the board.

    So after talking with a Rackspace tech, he said they cannot guarantee that symlinks will always work. Obviously this is because if you have a symlink that uses an absolute path like this:

        /mnt/disk-34566/home/user34566/files/sites/www.mysite.com/mydir

    and your files get moved to a different disk (or whatever they do), then the absolute path would be different and the link would now be broken. That makes sense. So next, I asked the Rackspace tech if relative-path symlinks were reliable. So if I have the following link:

        files/sites/www.mysite.com/mylink --> ../www.myothersite.com/anotherdir

    you can see that the symlink simply points to a nearby directory's sub-directory. He said they cannot guarantee that even those would always work. Since it uses a relative path to another nearby directory, I'm not sure how it could ever break from something Rackspace would do. Do relative symlinks somehow rely on absolute paths underneath? Or is Rackspace using some weird custom filesystem where they will break from absolute path changes?

    It seems like a relative-path symlink would be fine and would only break if the user did something to mess up the directories involved. But when the techs say they "don't officially support symlinks of any kind", that makes me hesitant to use them for large commercial websites in Cloud Sites. Can anyone with Rackspace experience give input on this topic?
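    On an ordinary POSIX filesystem a relative symlink stores only the literal target string and resolves it against the link's own directory at access time, so moving the whole tree leaves it working. A quick demonstration with throwaway paths:

        mkdir -p /tmp/demo/a /tmp/demo/b
        ln -s ../b /tmp/demo/a/link      # the link stores the text "../b", nothing more
        readlink /tmp/demo/a/link        # prints: ../b

        mv /tmp/demo /tmp/demo-moved     # relocate the entire tree
        ls -l /tmp/demo-moved/a/link     # still resolves, now to /tmp/demo-moved/b

    So unless Rackspace's storage layer is doing something non-POSIX, the failure mode should be limited to someone rearranging the directories themselves.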


  • Remote server security: handling compiler tools

    - by Gonzolas
    Hello! I was wondering whether to remove compiler tools (gcc, make, ...) from a remote production server, mainly for security purposes.

    Background: the server runs a web application on Linux. Consider Apache jailed. Otherwise, only OpenSSHd faces the public network. Of course there is no compiler stuff within the jail, so this is about the actual OS outside of any jails.

    Here's my personal PRO/CON list (regarding removal) so far:

    PRO: I had been reading some suggestions to remove compiler tools in order to inhibit custom building of trojans etc. from within the host if an attacker attains unprivileged user permissions.

    CON: I can't live without Perl/Python, and a trojan (or whatever) could be written in a scripting language like that anyway, so why bother about removing gcc et al. at all? Also, there is a need to build new Linux kernels as well as some security tools from source directly on the server, because the server runs in 64-bit mode and (to my understanding) I can't (cross-)compile locally/elsewhere due to lack of another 64-bit hardware system.

    OK, so here are my questions for you:

    (a) Is my PRO/CON assessment correct?
    (b) Do you know of other PROs / CONs to removing all compiler tools? Do they weigh in more?
    (c) Which binaries should I consider dangerous if the given PRO statement holds? Only gcc, or also make, or what else? Should I remove the entire software packages they come with?
    (d) Is it OK to just move those binaries to a root-only accessible directory when they are not needed? Or is there a gain in security if I "scp them in" every time?

    Thank you!
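    Regarding (d), a middle ground sometimes suggested instead of deleting or shuffling binaries is making the toolchain executable by root (or a trusted group) only, which keeps it available for kernel builds. A sketch, with paths assumed:

        # Restrict compiler execution to a trusted group
        groupadd compilers
        chgrp compilers /usr/bin/gcc /usr/bin/cc /usr/bin/make
        chmod 0750 /usr/bin/gcc /usr/bin/cc /usr/bin/make

    The usual caveat applies either way: an attacker with an unprivileged shell can upload prebuilt binaries, so this raises the bar rather than eliminating the risk.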


  • Is it ever good to share a userid?

    - by Ladlestein
    On Un*x, is it ever a good idea to have one userid that many different people log into when they do stuff?

    Often I'm installing software or something on a Linux or BSD system. I've developed software for 24 years now, so I know how to make the machine do what I want, but I've never had responsibility for maintaining a multi-user installation where anyone really cared about security. So my opinions feel untested.

    Now I'm at a company where there's a server that many people log into with a single userid and do stuff. I'm installing some software on it. It's not really a public-facing server, and is only accessible via VPN, but it's used by many people nonetheless, to run tests on custom software, things like that. It's a staging server.

    I'm thinking that, at the very least, using a single user obscures the audit trail, and that's bad. And it's just inelegant, because people don't have their own spaces on the server. But then again, with more userids, maybe there's a greater chance that one can be compromised, allowing attackers to gain access?
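    One pattern that preserves the audit trail while keeping a single working identity is personal logins plus sudo into a shared role account, since sudo records who became whom. A sketch, with the group and account names invented:

        # /etc/sudoers (edit with visudo): let the staging team become the app user
        %staging ALL = (appuser) ALL

        # Each person logs in as themselves, then:
        sudo -u appuser -i    # start a shell as appuser; the switch is logged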


  • IIS 7 URL Rewrite to GeoServer running on Apache

    - by Maxim Zaslavsky
    I'm building a mapping application based on OpenLayers that uses GeoServer to serve up mapping data. The problem I'm having is that besides the map images I'm requesting through WMS, I'm using jQuery AJAX to get information from GeoServer. As GeoServer is running on a different port, my requests are being blocked by the browser's same-origin policy. As a Java application, GeoServer runs on Apache on port 8080, while my IIS instance is running on port 80. Instead of building a proxy, I've decided to use URL Rewriting in IIS7 to fix this problem. I'm following this guide, but it's still not working.

    Here are my URL Rewrite rule settings:

    - Matches URL: (.*)
    - Condition: {HTTP_URL} matching /geoserver
    - Action: rewrite to http://localhost:8080/{R:1}, appending query string

    When I request http://localhost/geoserver/wms?QUERY_LAYERS=SanDiego:FWSA_sandiego&LAYERS=SanDiego:FWSA_sandiego&SERVICE=WMS&VERSION=1.1.1&FEATURE_COUNT=20&REQUEST=GetFeatureInfo&EXCEPTIONS=application/vnd.ogc.se_xml&BBOX=-13009123.590156,3862057.2905992,-13006066.109025,3865114.7717302&INFO_FORMAT=text/html&x=20&y=20&width=40&height=40&srs=EPSG:900913, all I get is a 404, although the same request on port 8080 returns the proper result. What am I doing wrong? Thanks in advance.
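    One thing worth checking: rewriting to an absolute http://host:port/ URL in IIS 7 only works as a reverse proxy when Application Request Routing (ARR) is installed and its proxy mode is enabled; without it, such rules commonly return 404. For reference, a hedged sketch of an equivalent web.config rule (rule name arbitrary):

        <!-- inside <system.webServer>; requires URL Rewrite plus ARR with proxying enabled -->
        <rewrite>
          <rules>
            <rule name="GeoServerProxy" stopProcessing="true">
              <match url="^geoserver/(.*)" />
              <action type="Rewrite" url="http://localhost:8080/geoserver/{R:1}" />
            </rule>
          </rules>
        </rewrite>

    Matching ^geoserver/(.*) directly also avoids the separate {HTTP_URL} condition, so {R:1} carries only the part of the path after /geoserver/.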


  • Port translation in router causing some email to fail

    - by user22037
    We are in the process of setting up a spam filter (SAVASM). One change we are making is to push incoming email on port 25 through our spam filter/server but have users actually send their email on a different port. I am attempting to make this happen by using port address translation to send port 25 traffic to the SAVASM server IP.

    As a step in making this change, I set up port translation without actually changing the IP addresses. The NAT rules for the email server went from one static NAT rule with no port specified to multiple static NAT rules, each with a port or group matching the access rules for that server (SMTP, POP3, HTTP, HTTPS, and some other custom ports).

    The problem we are running into is confusing. Some outgoing mail through this server fails when the router has the multiple NAT rules with port translation settings. Email goes through fine FROM our email addresses TO our internal accounts and to Gmail. However, email fails when FROM our client's email address TO our client's email or their personal Comcast. The only arrangement that worked for them was changing FROM to Comcast, and then messages went through fine to both Comcast and the client's accounts. After switching back to the regular static NAT rule, everything worked for them again.

    Does anyone have a clue as to what might be going on? We are on a Cisco ASA 5500 box.
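    For reference, per-port statics in pre-8.3 ASA syntax look roughly like this (addresses invented):

        ! Forward inbound SMTP to the spam filter, other services to the mail server
        static (inside,outside) tcp 203.0.113.10 smtp 192.168.9.25 smtp netmask 255.255.255.255
        static (inside,outside) tcp 203.0.113.10 pop3 192.168.9.20 pop3 netmask 255.255.255.255

    One side effect worth knowing: outbound connections on ports that no longer match a static fall back to the dynamic PAT pool, so the mail server can present a different outside source address than before, and strict receivers (reverse DNS/SPF checks) may then refuse its mail. That would fit the pattern of only some destinations failing.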


  • ipv6 : why ndp resolves to global scope address?

    - by Julien
    I'm facing a strange IPv6 behavior and I don't know how to solve it, because I'm not familiar with IPv6. Maybe this behavior is normal. I hope that you will help me. (I'm running under Debian 6.0.9 with a custom kernel 3.2.58.)

    Machine A, which is "2a00:7d30:edf6:100::1", wants to ping machine B, which is "2a00:7d30:edf6:100::10". Both are on the same segment. Machine A asks for the address of machine B, and I don't understand why machine B gives its global scope address instead of the link-local one:

        10:59:02.082785 IP6 2a00:7d30:edf6:100::1 > ff02::1:ff00:10: ICMP6, neighbor solicitation, who has 2a00:7d30:edf6:100::10, length 32
        10:59:02.082821 IP6 2a00:7d30:edf6:100::10 > 2a00:7d30:edf6:100::1: ICMP6, neighbor advertisement, tgt is 2a00:7d30:edf6:100::10, length 32

    After that, machine A pings the global scope address of machine B and it works fine:

        10:59:02.082927 IP6 2a00:7d30:edf6:100::1 > 2a00:7d30:edf6:100::10: ICMP6, echo request, seq 1, length 64
        10:59:02.082960 IP6 2a00:7d30:edf6:100::10 > 2a00:7d30:edf6:100::1: ICMP6, echo reply, seq 1, length 64

    Thank you for your help.

    Best regards,
    Julien
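    For inspecting what each box actually resolves and caches, iproute2 gives a quick view; a couple of commands, for what it's worth:

        ip -6 neigh show              # dump the IPv6 neighbor cache
        ip -6 addr show dev eth0      # list global and link-local (fe80::/10) addresses

    As far as I know, this trace is expected behavior: a neighbor solicitation for a global target is answered from that global address, and link-local addresses are only mandatory for things like router discovery.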


  • How does IPv6 subnetting work and how does it differ from IPv4 subnetting?

    - by Michael Hampton
    This is a Canonical Question about IPv6 Subnetting. Related: How does IPv4 Subnetting Work?

    I know a lot about IPv4 subnetting, and as I prepare to (deploy|work on) an IPv6 network I need to know how much of this knowledge is transferable and what I still need to learn. IPv6 seems at first glance to be much more complex than IPv4. So I would like to know:

    - IPv6 is 128 bits, so why is /64 the smallest recommended subnet for hosts?
    - Related to this: Why is it recommended to use /127 for point-to-point links between routers, and why was it recommended against in the past? Should I change existing router links to use /127?
    - Why would virtual machines be provisioned with subnets smaller than /64?
    - Are there other situations in which I would use a subnet smaller than /64?
    - Can I map directly from IPv4 subnets to IPv6 subnets?
    - My interfaces have several IPv6 addresses. Must the subnet be the same for all of them?
    - Why do I sometimes see a % rather than a / in an IPv6 address, and what does it mean?
    - Am I wasting too many subnets? Aren't we just going to run out again?
    - In what other major ways is IPv6 subnetting different from IPv4 subnetting?


  • Microsoft Office 2003 applications crash on 'Save As' to a network mapped drive

    - by Archit Baweja
    Hey guys, I'm not sure if this belongs on the ServerFault forums, so I figured I'd ask here first because it's a workstation/client-side issue.

    I have a client where we have Windows Server 2003 set up, with Windows XP Professional on all the workstations. We've set up a domain, and all workstations log on to the domain (authenticated by the Windows Domain Controller); in the logon script we map drives onto each workstation. Everything is working peachy except for one workstation, where when I open a file in Excel from a mapped drive, it opens fine, but when I go to hit Save As, the Save As dialog pops up and hangs. I cannot perform any other action in Excel. When I try to cancel the Save As dialog, Excel crashes.

    The mapped drive opens up fine in Windows Explorer. To investigate further, I created a new blank text document on the network drive in Windows Explorer, opened it, hit Save As, and the Save As dialog opened up fine and let me save the document. I repeated the above steps for a Word document. However, this time the Save As dialog hung/froze again.

    So I'd imagine it's a Microsoft Office issue. Any ideas?


  • Can I disable chrome's auto translate function for visitors to a given page on my site?

    - by Drew Noakes
    I know how to disable this feature for pages that I visit, but what I'm looking for is a way to tell other users' Chrome browsers not to offer translation of a particular page on my site. Is there some kind of meta tag I can use? Alternatively, can I indicate that a particular element on the page should not be translated?

    Reason: the controls which slide down from the top of the page cause my page to resize, which changes the content, which makes the control slide up, which resizes the page, which changes the content, which causes the controls to slide back in. Rinse and repeat. The page dances.

    The page itself is a map, and the content it wants to translate is all proper names that shouldn't be translated anyway. If alternate names exist in other languages, I provide them myself.

    Generally I'm against taking away features from the browser that users might like, but in this case it really makes sense. So please don't answer saying that I shouldn't be doing this. I've weighed up the options.
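    Google documents markup for exactly this: a page-level meta tag, plus a class that excludes individual elements from translation. A minimal sketch:

        <!-- whole page: ask Google Translate not to offer translation -->
        <meta name="google" content="notranslate">

        <!-- single element: leave proper names untranslated -->
        <span class="notranslate">Rio de la Plata</span>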


  • Server Bash Line Wrapping Over Text & In Wrong Place

    - by Pez Cuckow
    This is quite a hard problem to explain. When connecting to one of my servers using the bash shell, under any user, the line wrapping is broken and has all sorts of problems, one of which I detail in the screenshots below. Other problems I experience include nano getting very confused about which line and/or letter I am on, as shown by typing the same message into nano.

    [screenshots omitted]

    These problems only occur when connecting, as I previously mentioned, to one of my servers, which runs CentOS. Do you know why this is occurring and what I can do to fix it? On other servers the message works fine! Thanks for your time.

    [output of the requested commands omitted: screenshots of the broken server and of a working server]

    Could it perhaps be the custom prompt on the non-working server? In .bashrc:

        PS1='\e[1;32m\u@\h\e[m:\e[1;34m\w\e[m$ '

    Commenting this out appeared to resolve the problem. Google says line wrapping errors can occur if you don't conform to these rules: "use the \[ escape to begin a sequence of non-printing characters, and the \] escape to signal the end of such a sequence". I am not sure where this would fit in on my prompt?
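    Applying that rule to the prompt from the question means wrapping every color escape in \[ ... \] so that readline knows those bytes occupy no screen columns. The corrected prompt should presumably read:

        PS1='\[\e[1;32m\]\u@\h\[\e[m\]:\[\e[1;34m\]\w\[\e[m\]$ '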


  • Refresh file access time under Linux / Discard disk read cache

    - by calandoa
    I am making use of the access time to analyse some build process, but it is not working the way I want: the access time is updated the first time I read the file, then it stays the same for a long while, or until the next reboot. For instance:

        $ ll -u some_file
        -rw-r--r-- 1 root root 1.3M 2010-04-07 10:03 some_file
        $ grep abcdef some_file
        $ ll -u some_file
        -rw-r--r-- 1 root root 1.3M 2010-04-07 11:24 some_file
        # The access time is updated

        # waiting a few minutes...

        $ grep abcdef some_file
        $ ll -u some_file
        -rw-r--r-- 1 root root 1.3M 2010-04-07 11:24 some_file
        # The access time has not been updated :(

    I suppose that the file is buffered by Linux in the free memory, and that only this copy is accessed the subsequent times, for speed reasons. A solution would be to discard the buffers in memory. After searching some forums, I found:

        sync
        echo 1 > /proc/sys/vm/drop_caches
        echo 2 > /proc/sys/vm/drop_caches
        echo 3 > /proc/sys/vm/drop_caches

    But it is not working; it seems that it only syncs up the write buffers, not the read ones. Maybe it is due to some custom kernel configuration on my distro (Fedora 9)? Or am I missing something here?

    Is there a way to achieve this access time refresh? Note also that I do not want to simulate some writes on my entire file tree, because I am using a makefile-based build system and this would cause the entire project to be rebuilt.
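    One thing worth ruling out before blaming the page cache: kernels of that era frequently mount filesystems with relatime, which skips atime updates unless the stored atime is older than the mtime/ctime (or, in later kernels, more than a day old). That produces exactly this "updates once, then sticks" pattern even when the file really is read from disk. A quick check, as a sketch:

        # Is the filesystem mounted with relatime/noatime?
        mount | grep ' / '

        # If so, strict atime semantics can be restored (kernel support permitting)
        mount -o remount,strictatime /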


  • Slow Citrix connection related to mapped network drives

    - by George
    I have this weird issue with Citrix being slow, and maybe users are just being a little dramatic, but I am curious as to why it happens. Let me give you a little background.

    Citrix is running off of a Windows 2003 server; TS profiles and the file server were located on the same server until recently, when we moved our file server over to a new server with tons of space. So now we have Citrix on one server, TS profiles on another, and the file server on a third. We are using logon scripts to map home drives, the shared drive, etc.

    Up until we made the file server move, the logon process took several seconds, and most users couldn't even notice the logon script being executed as they logged on. Now it takes upwards of several minutes, and users can see the logon script being executed at a slow pace, one line at a time. The only new variable in this whole scenario is the new file server. All the servers are physically located in the same location and on the same subnet.

    So, I guess my question is: can anyone explain the sudden sluggishness? And what tools can I use to troubleshoot the issue? Thanks!


  • Access Denied / Server 2008 / Home Directories

    - by Shaun Murphy
    Domain Controller: BDC01 (192.168.9.2)
    Storage Server: BrightonSAN1 (192.168.9.3)
    Domain: brighton.local

    Last night I moved our users' home directories off of our domain controller onto a storage server using the MS FSMT (File Server Migration Toolkit). I'm getting a mixed bag of errors.

    The first is that some users cannot log on properly: they can't access the logon.vbs in the sysvol folder on the DC and consequently cannot map their drives. I've narrowed that down to a DNS issue, as there was a remnant of our previous DNS server in the DHCP server options and scope options. I'm able to get their drives remapped by browsing to the sysvol folder by IP address as opposed to computer name and manually running the logon.vbs script.

    The other error I'm getting is Access Denied on a few of the users' home directories. The top level folder (Home) is shared as normal, and I've removed and re-added the NTFS security a number of times now, including making the user the owner with full control. I've checked each and every individual file and folder in said user's home directory and they are indeed the owner, but I'm unable to write, though I can read the contents. I'm stumped.

    This isn't happening to all clients. I'm considering removing their AD accounts, backing up their folders and re-adding them as a last resort, but obviously I'd like to know why the above errors are happening.


  • Kerberos & localhost

    - by Alex Leach
    I've got a Kerberos v5 server set up on a Linux machine, and it's working very well when connecting to other hosts (using samba, ldap or ssh) for which there are principals in my kerberos database. Can I use kerberos to authenticate against localhost too? And if I can, are there reasons why I shouldn't?

    I haven't made a kerberos principal for localhost. I don't think I should; instead I think the principal should resolve to the machine's full hostname. Is that possible? I'd ideally like a way to configure this on just one server (whether kerberos, DNS, or ssh), but if each machine needs some custom configuration, that'd work too. e.g.

        $ ssh -v localhost
        ...
        debug1: Unspecified GSS failure.  Minor code may provide more information
        Server host/localhost@<REALM> not found in Kerberos database
        ...

    EDIT: So I had a bad /etc/hosts file. If I remember correctly, the original version I got with Ubuntu had two 127.0.x.x IP addresses, something like:

        127.0.0.1 localhost
        127.0.1.1 hostname

    For no good reason, I'd changed mine a long time ago to:

        127.0.0.1 localhost
        127.0.0.1 hostname.example.com hostname

    This seemed to work fine with everything until I tried out ssh with kerberos (a recent endeavour). Somehow this configuration led to sshd resolving the machine's kerberos principal to "host/localhost@<REALM>", which I suppose makes sense if it uses /etc/hosts for forward and reverse DNS lookups in preference to external DNS. So I commented out the latter line, and sshd magically started authenticating with gssapi-with-mic. Awesome.

    (Then I investigated localhost and asked the question.)
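    On the "authenticate against localhost" part, the machine's existing canonical host principal is normally all that's needed, provided name resolution maps localhost back to the full hostname (as the EDIT shows). A sketch for checking and creating host principals with MIT kadmin (realm and hostname invented):

        # What host principals does the KDC already have?
        kadmin -q "listprincs host/*"

        # Create the canonical host principal and export its key to /etc/krb5.keytab
        kadmin -q "addprinc -randkey host/myhost.example.com"
        kadmin -q "ktadd host/myhost.example.com"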


  • nginx rewrite or internal redirection cycle

    - by gyre
    I'm banging my head against a table trying to figure out what is causing a redirection cycle in my nginx configuration when trying to access a URL which does not exist.

    The configuration goes as follows:

        server {
            listen 127.0.0.1:8080;
            server_name .somedomain.com;

            root /var/www/somedomain.com;

            access_log /var/log/nginx/somedomain.com-access.nginx.log;
            error_log /var/log/nginx/somedomain.com-error.nginx.log debug;

            location ~* \.php.$ {
                # Proxy all requests with an URI ending with .php*
                # (includes PHP, PHP3, PHP4, PHP5...)
                include /etc/nginx/fastcgi.conf;
            }

            # all other files
            location / {
                root /var/www/somedomain.com;
                try_files $uri $uri/ ;
            }

            error_page 404 /errors/404.html;
            location /errors/ {
                alias /var/www/errors/;
            }

            # this loads custom logging configuration which disables favicon error logging
            include /etc/nginx/drop.conf;
        }

    This domain is a simple STATIC HTML site, just for some testing purposes. I'd expect the error_page directive to kick in in response to PHP-FPM not being able to find the given files, as I have fastcgi_intercept_errors on; in the http block and have error_page set up, but I'm guessing the request fails even before that, somewhere on internal redirects.

    Any help would be much appreciated.
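    nginx emits its "rewrite or internal redirection cycle" error when the error_page target itself fails, i.e. the internal redirect to /errors/404.html also returns 404, which triggers error_page again. So the first thing worth verifying, as a sketch:

        # Does the aliased error page actually exist and is it readable?
        ls -l /var/www/errors/404.html

        # Sanity-check and reload the configuration afterwards
        nginx -t && nginx -s reload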


  • ServerName not working in Apache2 and Ubuntu

    - by CreativeNotice
    Setting up a dev LAMP server, and I wish to allow dynamic subdomains, aka ted.servername.com, bob.servername.com. Here's my sites-available file:

        <VirtualHost *:80>
            # Admin Email, Server Name, Aliases
            ServerAdmin [email protected]
            ServerName happyslice.net
            ServerAlias *.happyslice.net

            # Index file and Document Root
            DirectoryIndex index.html
            DocumentRoot /home/sysadmin/public_html/happyslice.net/public

            # Custom Log file locations
            LogLevel warn
            ErrorLog /home/sysadmin/public_html/happyslice.net/log/error.log
            CustomLog /home/sysadmin/public_html/happyslice.net/log/access.log combined
        </VirtualHost>

    And here's the output from sudo apache2ctl -S:

        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        *:80  is a NameVirtualHost
            default server happyslice.net (/etc/apache2/sites-enabled/000-default:1)
            port 80 namevhost happyslice.net (/etc/apache2/sites-enabled/000-default:1)
            port 80 namevhost happyslice.net (/etc/apache2/sites-enabled/happyslice.net:5)
        Syntax OK

    The server hostname is srv.happyslice.net. As you can see from apache2ctl, when I use happyslice.net I get the default virtual host; when I use a subdomain, I get the happyslice.net host. So the latter is working how I want, but the main URL does not.

    I've tried all kinds of variations here, but it appears that ServerName just isn't being tied to the correct location. Thoughts? I'm stumped. FYI, I'm running Apache 2.1 and Ubuntu 10.04 LTS.
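    Given that apache2ctl -S shows the 000-default vhost also claiming happyslice.net (and, sorting first, acting as the default server), one plausible culprit is the packaged default site winning for the bare domain. A quick test on Ubuntu, as a sketch:

        # Temporarily disable the packaged default vhost and reload
        sudo a2dissite 000-default
        sudo /etc/init.d/apache2 reload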


  • Getting Grub2 to recognize a Raid 10 boot/root

    - by xenoterracide
    I've been trying to get my RAID to boot from GRUB 2 for about 2 days now and I don't seem to be getting closer. The problem appears to be that it doesn't recognize my RAID at all: it doesn't see (md0) etc. I'm not sure why, or how to change this.

    I'm using mdadm with a 2-device raid10,f2 array (essentially a RAID 1), which is currently degraded. I have tried adding the raid and mdraid modules to grub-install, along with others. I've tried several variations of grub-install, such as:

        grub-install --debug --no-floppy --modules="biosdisk part_msdos chain raid mdraid ext2 linux search ata normal" /dev/md0

    I've been searching the net for an answer to what I haven't done, but no luck. On my other drive, which I plan on removing, the RAID is initialized and mounted fine on boot, but it's not the boot/root for that setup. My grub.cfg isn't recognized by grub since it can't read the RAID partition, so I'm not posting that. md0 is not listed in my /boot/grub/device.map.
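    One detail that often decides whether GRUB 2 can see an md array at all is the superblock format: early GRUB 2 mdraid code understood only 0.90 metadata, and later versions split support into separate mdraid09 and mdraid1x modules. Checking which format the array uses is cheap, as a sketch:

        # Report the superblock version (look for "Version : 0.90" vs 1.0/1.1/1.2)
        mdadm --detail /dev/md0 | grep -i version

        # Confirm what the kernel assembled
        cat /proc/mdstat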

