Search Results

Search found 21920 results on 877 pages for 'custom properties'.

Page 683/877

  • Server Names Inside Private Network

    - by thyandrecardoso
    Our office has a private network, where any requests on a (pre-determined) public IP are forwarded to a private IP inside said network. On that private IP, we've got a server running several services, including HTTP servers and SCM systems. We only control our private network, having no control over the public IP configuration. We bought a domain name and pointed it at that public IP, so people can access our services from the outside. But when inside the office, people can't use that DNS name, because the server and every other host inside the network share the same public IP!

    For desktops inside the office network, dealing with names is really easy: one entry in the hosts file and we're done. However, for laptops that keep going in and out and need to access services inside the office, the naming is really annoying. I don't know the "standard" process for dealing with this kind of situation. I've considered installing BIND in the office and making people configure their wireless and wired connections to use that DNS server.

    What is the correct approach in this situation? If using BIND (or any other DNS server) is the answer, how should I configure it so that people inside the office can use it to get our custom names, and get forwarded to the ISP DNS when trying to reach the internet?
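    A minimal split-DNS sketch of the BIND approach described above (the zone name example.com and the resolver addresses are placeholders, not taken from the question):

        // named.conf.local -- serve the office names authoritatively,
        // with A records pointing at the private IPs
        zone "example.com" {
            type master;
            file "/etc/bind/db.example.com";
        };

        // named.conf.options -- everything else goes to the ISP resolvers
        options {
            forwarders { 203.0.113.53; 203.0.113.54; };  // placeholder ISP DNS
            forward only;
        };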

    Read the article

  • nVidia performance with newer X and newer driver abysmal with Compiz

    - by Nakedible
    I recently upgraded Debian to Xorg 2.9.4 and installed nvidia-glx from experimental, version 260.19.21. This was somewhat of an uphill battle, as the dependencies for the experimental nvidia-glx package are still somewhat broken. I got it to work without forcing the installation of any packages and without modifying the packages.

    However, since the upgrade, Compiz performance has been abysmal. I am using the desktop wall plugin, and switching viewports is really slow - it takes a few seconds for each switch. In addition, every effect that Compiz does, such as zoom animations for icons when launching applications, takes seconds. The viewport switching speed changes relative to the number of windows on that virtual screen - empty screens switch almost at normal speed, single browser windows work almost decently, but just 4 rxvt terminals slow the switches down to a crawl.

    My Compiz configuration should be pretty basic. Xorg is likewise configured without anything special - the only "custom" configuration is forcing the driver name to be "nvidia". I've fiddled around with nvidia-settings and compizconfig trying different VSync settings, but none of those helped. My graphics card is: NVIDIA GPU NVS 3100M (GT218) at PCI:1:0:0 (GPU-0). This is a laptop GPU from the GeForce 200 series. Graphics card performance should naturally be no problem.
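    For reference, the "forcing the driver name" configuration mentioned above presumably amounts to an xorg.conf Device section along these lines (a sketch; the Identifier string is arbitrary):

        Section "Device"
            Identifier "nvidia-card"
            Driver     "nvidia"
        EndSection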

    Read the article

  • Everyone can access my Windows 7 Homegroup file shares - Even Windows XP computers

    - by Adrian Grigore
    I have 3 computers in my network, two running Windows 7 and one running Windows XP. I've set up a homegroup on both Windows 7 computers. Also, all computers are in the same Workgroup.

    The problem is that one of the Windows 7 computers makes all shares accessible to the entire Workgroup instead of just sharing to the Homegroup as it should. I created the file share in Windows 7 via right-click in Explorer, then clicked "Share For" - "Homegroup (Read/Write)" (translated from German, so the actual wording may be different). Also, when I look at the file sharing properties of that folder, Windows Explorer informs me that users must have a valid account and password for this computer to access drive shares. Unfortunately this is not true: being in the same Workgroup is enough to get access.

    Homegroup restrictions work as expected on my other Windows 7 computer. When trying to browse those shares from the XP computer, I get a dialog asking for a login and password. What might cause homegroup restrictions to fail, and how can I fix this?

    Read the article

  • Windows 7 autostarting apps as administrator

    - by Fujishiro
    Hello everyone. The question is easy to answer, I guess. (Tried to search here, didn't find an answer.) So, the deal: there is a bug in OpenOffice (haha... just one? :)) which prevents the spellcheck from working. You have to start it with right-click, Run as administrator to make it work. I also tried setting this in the shortcut's properties, but it didn't work. But running 'quickstart.exe' (OpenOffice's quickstarter) as admin would also be enough to make this work. So I'd like to run it at boot, as an admin, like I'd do with right-click. How do I do THAT?

    (Actually there is a different bug for spellcheck on Win7. One has to run oowriter.exe as an admin, and then install the extension BY HAND from the *.oxt under Program Files...\share... And THEN it'll work IF you run the app as an admin. I'll buy SoftMaker's office as soon as the hu spellcheck arrives, but until then I have to make this work. Thanks for the answers in advance.)
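    One common way to do this (a sketch, not necessarily the fix the asker settled on): a scheduled task that runs at logon with highest privileges, which starts the program elevated without a UAC prompt. The quickstart.exe path and task name here are assumptions:

        schtasks /Create /TN "OOoQuickstart" /SC ONLOGON /RL HIGHEST ^
            /TR "\"C:\Program Files\OpenOffice.org 3\program\quickstart.exe\""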

    Read the article

  • Workstations cannot see new MS Server 2008 domain, but can access DHCP.

    - by Radix
    The XP Pro workstations do not see the new replacement domain upon boot; they only see their cached entry for the old (Server 2003) domain controller. The old_server is not connected to the network. I have DHCP working with the same scope as the old_server. In my "before-asking" search for a solution I came across the following two articles, and I recall doing things as suggested by them: http://www.windowsreference.com/windows-server-2008/how-to-setup-dhcp-server-in-windows-server-2008-step-by-step-guide/ http://www.windowsreference.com/windows-server-2008/step-by-step-guide-for-windows-server-2008-domain-controller-and-dns-server-setup/

    The only possible issue is this: I was under the impression that the domain NetBIOS name needed to match the DC's NetBIOS name. The DC NetBIOS name is city01, while the domain's FQDN is city.domain.org (I think this is mistaken and should have been just domain.org). But the second link led me to a post which I believe answers my question. I did as they instructed by opening Local Area Connection Properties, then selecting TCP/IPv4 and setting the sole preferred DNS server to the local host's static IP (10.10.1.1). Search for "Your problems should clear up" for the post I'm referencing: http://forums.techarena.in/active-directory/1032797.htm

    Have I misunderstood their instructions? I am hoping to reach the point where I can define users and user groups. Also, does TechNet have a single theoretical overview document I could read? I really don't like treating computers as magic. I will be watching this closely and will quickly answer any questions. If I've left anything out, it is because I did not know it was needed.

    PS: I am loath to ask obviously basic questions, but I am tired and wish to fix this before tomorrow. Also, this is my first server installation; thank you for your help.
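    For reference, the client-side DNS change described above can also be made from a command prompt (a sketch; the connection name varies per machine, and 10.10.1.1 is the DC's static IP from the question):

        netsh interface ip set dns name="Local Area Connection" static 10.10.1.1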

    Read the article

  • Windows 7 deployment thru WDS

    - by vn
    Hello, I am deploying new systems on my network, and I built my reference computer by installing the OS the manufacturers (Dell and a custom-built system from a local business) shipped with all drivers, then installed all the desired applications. As for the settings part, I'm doing most of it through GPOs. I want to image my reference computer and deploy it with WDS. I found several links on how to sysprep, but they all do it with some differences, without explaining them. My questions:

    1. How do I manage (in sysprep) the domain join/computer naming part, since (from what I understand) WDS manages that?
    2. How do I know/determine what I need to set up in my sysprep.xml?
    3. Can you sysprep a first time, try, and if it fails, make some modifications and try again? I am thinking of doing a basic sysprep, checking what info can be automated, and correcting that in the answer file.
    4. What do I miss if I skip "audit" mode? I don't plan on re-doing the reference computer...
    5. I read that sysprepping resets settings on the reference computer, like the computer name, activation/key and such... what settings does sysprep reset by default that I should be aware of?

    I must admit I am quite lost about Win7, sysprep, RIS, the MDT toolkit, WDS... I understand the way of doing this with XP, but it changed so much with Windows 7! The links I am reading are: http://far2paranoid.wordpress.com/2007/12/05/prep-for-sysprep/ http://blog.brianleejackson.com/sysprep-a-windows-7-machine-%E2%80%93-start-to-finish-v2 http://www.ehow.com/print/how_5392616_sysprep-machine-start-finish-v2.html

    Thank you VERY much for any answers; they are much appreciated.
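    For reference, the generalize step these guides build up to is a single command; a sketch (the answer-file path is whatever you created for it):

        C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown /unattend:C:\sysprep.xml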

    Read the article

  • Intermittent PHP error: Undefined function <core function>

    - by Daniel
    In the last week I've been coming across an incredibly annoying error on one of my Slicehost slices. It appears that every now and then PHP will fail with a fatal error, saying a certain function is undefined. The function changes, but it is always a core PHP function, e.g. defined(), version_compare(), etc. This problem has occurred while using several different PHP applications - phpMyAdmin, my own custom-built apps, etc. - leading me to believe that the problem is not specific to the running code. Here are some details:

    - Debian Lenny
    - Apache 2.2.9
    - PHP 5.2.6-1+lenny4 with Suhosin-Patch (running eAccelerator 0.9.6)

    Apache and PHP are installed from Debian packages. Error logs show nothing out of the ordinary. I thought memory might be an issue, but free -m reports upwards of 100MB free almost all the time. Another thing I'm trying to investigate is whether the problem might be related to eAccelerator, but testing this theory is incredibly hard because the issue doesn't appear very often, and I've been using eAccelerator for months on this install without any problems up until now. Has anyone ever come across anything like this? Why would PHP report undefined core functions?
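    One way to test the eAccelerator theory without uninstalling anything (a sketch; the conf.d path is the usual Debian location, but may differ on this install):

        ; /etc/php5/conf.d/eaccelerator.ini
        eaccelerator.enable = "0"
        eaccelerator.optimizer = "0"

    Then restart Apache and see whether the fatal errors stop.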

    Read the article

  • MacOSX: remove write-protect flag from file in Terminal

    - by Albert
    Hi, I have a file on a FAT32 volume which is shown as write-protected in Finder (so I cannot move it). Removing that write-protected flag in the information dialog works just fine. However, I have many more such files, and I thus want to do it via Terminal. I already tried 'chmod +w', but that didn't work. 'ls -la' showed me that the permissions are already fine ("-rwxrwxrwx 1 az az", where az is my user account). Then I thought this might be stored in some xattr properties, but 'xattr -l' didn't give me any entry. Then I thought this might be some ACL setting (whereby I thought they would be stored as xattrs, but let's try it anyway) - and some Google search returned me something with 'chmod -a' or 'chmod -i' or so. All these tries only give me "chmod: No ACL currently associated with file" or "chmod: Failed to set ACL on file...: Operation not permitted". But I definitely have no write access to the file, because I cannot move it or make any other change to it (in Terminal). Removing the write-access flag in Finder solves that.
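    A sketch of what typically clears Finder's lock from the shell - assuming (not confirmed by the question) that the checkbox maps to the BSD user-immutable flag, as it usually does:

        ls -lO /Volumes/MYDISK/somefile          # an 'uchg' in the flags column confirms it
        chflags nouchg /Volumes/MYDISK/somefile  # clear the flag on one file
        # or for a whole tree:
        find /Volumes/MYDISK -flags +uchg -exec chflags nouchg {} +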

    Read the article

  • New monitor connected to HDMI adaptor doesn't show output after booting

    - by Paul
    Hello out there in the multiple-monitors world. I am a very old newbie in your world and need help. I just purchased a new Asus VH236H monitor and hooked it up to the HDMI port of an ATI Radeon HD 4300/4500 Series display adapter. I left the old Princeton LCD19 (TMDS) hooked up to the DVI port of the same display adapter. Both monitors displayed the boot sequence after I fired up good old Sarastro2 (Asus P5Q Pro Turbo - Dual Core E5300 - 2.60 GHz). The Asus lagged half a second behind the Princeton until the Windows 7 Ultimate SP1 boot-up was complete. Then the Asus displayed "HDMI NO SIGNAL" and went into hibernation. The Princeton stayed lit up as before.

    Both monitors are shown in the Screen Resolution setup display, and I played around with them for a while. The only thing I accomplished was to shove the desktop icons from the Princeton to the still-hibernating Asus. "Multiple displays:" is set to "Extend these displays", the orientation is "Landscape", and the resolutions are both set to the "recommended" one. Both monitors show that they are working properly in the advanced properties display.

    What am I doing wrong; what am I missing? Never mind the opinions about the different resolutions of the two monitors. I can always unhook the Princeton and give it to a Goodwill store if I do not like the setup. I just would like to make it work. Any constructive help is very much appreciated. Thank you.

    Read the article

  • wrt54gl reboots; troubleshooting steps?

    - by Bill
    I am using about 10 WRT54GLs in a small school, with a combination of stock firmware and Tomato 1.25, slowly moving towards all Tomato. We have had these devices installed for several years without problems. Recently, more and more of the units have started to spontaneously reboot, usually during high-traffic times (but not always). For the most part, the rebooting is not critical for us, but the WRT54GLs temporarily revert to 192.168.1.1 on the LAN ethernet ports and conflict with a critical server that's already installed with that IP. (Yes - we plan to move the server off that address, but it is an involved process.) Both Tomato and the stock firmware (several versions, from recent to several years old) exhibit the same problem: random reboots, reverting to 192.168.1.1, and conflicting temporarily with our server until the firmware boot process finishes. Here are my questions:

    1. Any way to prevent the WRT54GLs from reverting to 192.168.1.1 during the boot process? I was thinking of doing a custom firmware mod, although I hate to go that direction.
    2. Any steps to take in troubleshooting the reboots? Only some of the WRT54GLs reboot, which is odd. Others stay online for weeks and months without issues.

    Thanks.

    Read the article

  • Replacing stock Core 2 Duo heatsink fan (just the fan really) with a Dell CPU fan

    - by user647345
    My old heatsink fan broke, and I'm trying to connect its plug to a new fan. My Dell CPU fan has some custom Dell plug, so I snipped the old fan's wire in half and kept the plug on the end of it. I want to splice the Dell fan's wire onto that plug. The motherboard is a P5Q-E; the stock Core 2 Duo fan was 0.20A and the Dell is 0.70A. Is that going to matter? The wire from the fan has four wires, and the wire with the plug has four wires. They share three of the four colors: red, black, and blue. Dell's fourth wire is white, while the plug's fourth wire is yellow. Is it safe to assume that I just connect the white and yellow wires together and match up the rest? I don't want to take any risk of damaging anything. It runs fine passively without a fan, but I have SpeedStep on, so I would like to use this fan and just fasten it to the heatsink with some twist ties and paper clips and call it a day.

    Read the article

  • Recommendation for Document Management Solution

    - by BillN
    We've just been informed by our software vendor that the custom document management system they'd written is no longer in development and will not be supported in the future. So we are looking at new document management systems. Requirements:

    - Multiple input vectors: we receive documents via e-mail, fax, scanning, and from the originating application.
    - Ability to redact or obscure data. Customers may fax an order with CC data; we want to attach the image of the order form to the order record, but the CC data needs to be protected. Same with Tax IDs. Certain users should be able to see the redacted data, but access should be logged.
    - Version control on documents. We'd like Product Development and Marketing to be able to track various versions of documents like packaging designs, but ensure that users have the latest approved version.
    - AD integration; my users don't need another password.
    - Ability to integrate with other apps. Our current system offers function keys in the order-entry system that spawn the viewer application and open the correct document.
    - Mass import facility: we have half a terabyte of existing documents in the old system that we would like to import.
    - Retention policy. I'd like a way to have the system comply with the corporate retention policy, so that when a document of a certain type reaches a certain age, it gets deleted, or at least marked for manual deletion.

    We are a Windows Server and HP-UX shop. Does anybody have any experience with document management systems that they would like to share? Thanks.

    Read the article

  • Network Traffic Log

    - by Chris Becke
    Background - on my "home" network I have a Linksys WRT54GL router providing my internet access as well as a wireless AP. Connected, I have:

    * 2 Windows PCs (wired)
    * At least one laptop (wired)
    * Some 802.11-enabled handheld consoles (PSPs)
    * A Nintendo Wii
    * Some Windows XP PCs used by the people in the granny flat

    Where I live, in South Africa, 1GB of monthly cap is, while not expensive, costly enough that I'd like to be sure that all the bandwidth used by devices on my network is... well... legitimate, and not the result of neighbors parasiting my wireless, malware, or just "liberal" download policies in my software. I got the WRT54GL on the understanding that there were custom firmwares (DD-WRT and Tomato) that allowed bandwidth tracking, but there doesn't seem to be any facility to get a log of traffic that can be examined to see (a) which local devices were the biggest consumers of bandwidth and (b) what they were connected to. What tools are there for logging traffic such that, when it gets to that OMG moment in the month when all my bandwidth is gone, I have a chance to find out what the hell used it all up (and hopefully attempt some corrective action)?

    Read the article

  • Are relative-path symlinks reliable on Rackspace Cloud Sites?

    - by Jakobud
    Rackspace's Cloud Sites have a lot of stupid limitations. For example, no SSH (in or out), no shell, no rsync, etc... (even through cron). Recently I learned that you can't reliably use symlinks in Cloud Sites. Apparently this is because the absolute path of your sites could change at any moment, since it's a shared host environment split up between many disks/servers. I guess different accounts' sites get moved from disk to disk whenever Rackspace decides to, supposedly to increase efficiency across the board. So after talking with a Rackspace tech, he said they cannot guarantee that symlinks will always work. Obviously this is because if you have a symlink that uses an absolute path like this:

    /mnt/disk-34566/home/user34566/files/sites/www.mysite.com/mydir

    and your files get moved to a different disk (or whatever they do), then the absolute path would be different and the link would now be broken. That makes sense. So next, I asked the Rackspace tech if relative-path symlinks were reliable. So if I have the following link:

    files/sites/www.mysite.com/mylink --> ../www.myothersite.com/anotherdir

    You can see that the symlink simply points to a nearby directory's sub-directory. He said they cannot guarantee that even those would always work either. Since it uses a relative path to another nearby directory, I'm not sure how it could ever break from something Rackspace would do. Do relative symlinks somehow rely on absolute paths underneath? Or is Rackspace using some weird custom filesystem where they will break from absolute path changes? It seems like a relative-path symlink would be fine and would only break if the user did something to mess up the directories involved. But when the techs say that they "don't officially support symlinks of any kind", that makes me hesitant to use them for large commercial websites in Cloud Sites. Can anyone with Rackspace experience give input on this topic?
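    For what it's worth, an ordinary POSIX symlink stores only the literal target string, resolved against the link's own directory at access time, which is easy to demonstrate (a sketch):

        cd files/sites/www.mysite.com
        ln -s ../www.myothersite.com/anotherdir mylink
        readlink mylink    # prints the stored string: ../www.myothersite.com/anotherdir

    So on a standard filesystem, moving the whole tree leaves a relative link intact; whether Cloud Sites' storage behaves like a standard filesystem is exactly what the tech would not guarantee.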

    Read the article

  • running red5 on port 80

    - by ArneLovius
    I have a red5 application (http://code.google.com/p/openmeetings) that runs under red5 and is accessible on ports 5080 and 8443. I've installed it on Ubuntu 10.04. The eventual aim is to have it accessible via https on 443 instead of 8443, but I thought I would initially try port 80, so that any issues were just down to the port configuration and not SSL certificates. I've tried changing the port from 5080 to 80 in the red5.properties file, but it fails to start. In red5.log I have seen:

    ERROR o.a.coyote.http11.Http11Protocol - Error initializing endpoint java.net.BindException: Permission denied /0.0.0.0:80

    In error.log I have seen the same error, and also:

    ERROR org.red5.server.tomcat.TomcatLoader - Error loading tomcat, unable to bind connector. You may not have permission to use the selected port org.apache.catalina.LifecycleException: Protocol handler initialization failed: java.net.BindException: Permission denied /0.0.0.0:80

    There is nothing else installed or running on port 80, so I presume that this is a "needs to be root" situation. I would rather not run an Internet-accessible web service as root. I know that Tomcat can run on port 80 by changing "#AUTHBIND=no" to "AUTHBIND=yes" in /etc/default/tomcat6, but I have not been able to find anything similar for red5. Am I on a hiding to nothing, or is there a better way than running as root? Thanks!
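    One common alternative (a sketch, assuming iptables is available): keep red5 on its unprivileged port and redirect at the firewall instead.

        # one-time rule, run as root; red5 itself never needs root
        iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 5080

    External clients then reach the service on port 80 while the daemon keeps listening on 5080.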

    Read the article

  • How to disable auto insert notification in Windows 7?

    - by White Phoenix
    Alright, here's the problem. The hard drive activity light on my custom-built PC is blinking exactly once every second. Microsoft has this to say on the issue: http://support.microsoft.com/kb/138598

    There was discussion of this issue several months ago: "Why does my hard drive LED light blink every second?" The problem seems to stem primarily from Windows 7 polling the CD-ROM/DVD drive every second to see if something is inserted. The Windows 7 users in the thread linked in that superuser question, https://social.technet.microsoft.com/Forums/fi-FI/w7itprohardware/thread/4f6f63b3-4b58-4154-9298-1566100f9d00, have confirmed that this IS a known issue with Windows 7. Some people point at the motherboard circuitry causing the CD-ROM and SATA activity to both be linked to that hard drive activity light, but whatever the case, the temporary solution seems to be to disable the CD/DVD-ROM drive in Device Manager. In fact, disabling the CD/DVD-ROM does stop the blinking, but of course this solution is counterproductive, because I shouldn't have to entirely disable a device to fix this problem. I've tried the following suggestions from that thread:

    - Change the autorun registry entry to 0
    - Completely disable autoplay in the AutoPlay control panel
    - Disable autoplay in the Local Group Policy Editor

    None of these stop the blinking - apparently these solutions work for both XP and Vista, but it seems to be different in Windows 7. So I'm wondering if anyone has found out how to completely disable the polling in Windows 7, or if this is just an issue we will have to deal with. There's no option to disable the auto insert notification when you go to the device within Device Manager (there was in XP), so I've got no idea where this option is hidden, or if there's a registry key I could change to stop the polling. Anyone have any idea?
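    For reference, the "autorun registry entry" in the first step presumably refers to the classic auto-insert-notification switch; as a .reg sketch:

        Windows Registry Editor Version 5.00

        ; 0 disables media-change polling for CD-ROM drives (the XP/Vista-era advice)
        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\cdrom]
        "AutoRun"=dword:00000000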

    Read the article

  • How to edit a read-only document in LibreOffice?

    - by TestUser16418
    I need to fill in a form (which I received in .doc format and saved as .odt). The file is read-only except for the fields where I can enter the information. Unfortunately, with the fields filled, it doesn't fit on one page, and I need to edit it so I can print and submit it. With the LibreOffice 3 beta, I could edit anything outside of the fields, and the fonts were slightly smaller, so it fit on the page even with the fields filled. Today I upgraded LibreOffice, and when I opened the document to correct a mistake in a field, it no longer fits on the page, and I can't edit it. The properties dialog says that the document is NOT read-only, but it is: when I try to delete text, it tells me that I can't edit the read-only content. Can anyone give me some advice? I've been trying to print my form for 2 hours already. I tried AbiWord and KWord, but both are missing elements from the page (though the forms fit). I can also edit the margins (Format - Page is dimmed, but when I begin to edit a field it's no longer dimmed).

    Read the article

  • Remote server security: handling compiler tools

    - by Gonzolas
    Hello! I was wondering whether to remove compiler tools (gcc, make, ...) from a remote production server, mainly for security purposes. Background: the server runs a web application on Linux. Consider Apache jailed. Otherwise, only OpenSSHd faces the public network. Of course there is no compiler stuff within the jail, so this is about the actual OS outside of any jails. Here's my personal PRO/CON list (regarding removal) so far:

    PRO: I had been reading some suggestions to remove compiler tools in order to inhibit custom building of trojans etc. from within the host, if an attacker attains unprivileged user permissions.

    CON: I can't live without Perl/Python, and a trojan/whatever could be written in a scripting language like that anyway, so why bother removing gcc et al. at all? There is also a need to build new Linux kernels as well as some security tools from source directly on the server, because the server runs in 64-bit mode and (to my understanding) I can't (cross-)compile locally/elsewhere due to lack of another 64-bit hardware system.

    OK, so here are my questions for you:

    (a) Is my PRO/CON assessment correct?
    (b) Do you know of other PROs/CONs to removing all compiler tools? Do they weigh in more?
    (c) Which binaries should I consider dangerous if the given PRO statement holds? Only gcc, or also make, or what else? Should I remove the entire software packages they come with?
    (d) Is it OK to just move those binaries to a root-only-accessible directory when they are not needed? Or is there a gain in security if I "scp them in" every time?

    Thank you!
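    Regarding (d), a minimal sketch of the restrict-rather-than-remove middle ground (paths are illustrative; on Debian, gcc and cc are usually symlinks to a versioned binary):

        # make the toolchain unusable for unprivileged accounts
        chown root:root /usr/bin/gcc-4.3 /usr/bin/make
        chmod 700 /usr/bin/gcc-4.3 /usr/bin/make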

    Read the article

  • "The zone can be scavenged after" keeps incrementing

    - by kce
    What are you trying to do? I'm trying to enable DNS scavenging on a DNS zone that has about a hundred stale DNS records.

    What have you tried in order to make it happen? I set up DNS scavenging per everyone's favorite TechNet blog post: "Don't be afraid of DNS Scavenging. Just be patient." I first disabled scavenging on all of our domain controllers:

    DNSCmd . /ZoneResetScavengeServers contoso.com 192.168.1.1 192.168.1.2

    I then enabled automatic scavenging on the DNS zone, and enabled DNS scavenging on one of the domain controllers. I then found a few records that I expected to get deleted, with timestamps from a few years ago, and ensured that "Delete this record when it becomes stale" was checked and that the time stamp was actually set. Finally, I reloaded the zone and waited 14 days (the sum of the Refresh + No-Refresh periods).

    What results did you expect? I expected to see a 2501 event in the DNS server logs noting the deletion of a bunch of DNS records.

    What actually happened? Nothing. The Zone Aging/Scavenging Properties showed that the zone could be scavenged after 6/12/2014 10:00:00 AM last week. No 2501/2502 events were recorded. All of the records with "aged" timestamps are still present. The date after which the zone can be scavenged incremented another seven days, to 6/18/2014 10:00:00 AM. As I understand it, until that date stays at least 14 days in the past, nothing will ever even be eligible for scavenging, let alone actually be scavenged. The only 2501 events recorded in the event logs are ones that I have triggered by right-clicking and selecting "Scavenge Stale Resource Records". They note that scavenging will try to run again in 168 hours, which was this morning. I have had DNS scavenging enabled for a few months and have waited patiently for something to happen. I have reloaded the zone multiple times (which resets this timestamp). What am I missing here?
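    Two dnscmd checks that are often useful here (a sketch; contoso.com stands in for the real zone, as in the question):

        :: show the zone's aging flag, refresh intervals, and scavenge servers
        dnscmd /ZoneInfo contoso.com
        :: ask this DNS server to attempt an immediate scavenging pass
        dnscmd /StartScavenging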

    Read the article

  • Fixing corrupt files or corrupt file table on a USB drive?

    - by Kelsey
    I was doing a virus scan on an external USB drive while copying data over to it. While AVG was scanning, my system locked up - I think due to the USB drive running out of space - and it required a reboot. Since that time, none of the data on the external drive is accessible. I can see all the files in the root and the directories, but I cannot browse into any of them; Windows 7 gives an error stating they are corrupt. I think the file table, or whatever it uses to store the index of what exists on the drive, has been corrupted, since it still shows the drive as being almost full, but everything I do a properties check on says it is 0 bytes. Does anyone know how to 'unlock' or recover this data? Is there a way to rebuild the file table somehow? Luckily I can recover this data from other sources as a last resort, but I would like to fix this if possible. Any help would be appreciated. Thanks.
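    The usual first attempt at repairing a damaged file table (a sketch; X: stands in for the USB drive's letter, and it's worth imaging the drive first if the data matters):

        chkdsk X: /f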

    Read the article

  • Desktop.ini Issues/Confusion

    - by EpicDavi
    BACKSTORY: I was out of town for a while, and I forgot to turn my computer off. When I came back, I saw that a desktop.ini file was on my desktop (using Windows 7). I thought that was odd, because I knew it was a system file and it usually didn't show up, due to the fact that I had disabled the option to show system files. Also, it wasn't translucent like the other system files. I went to my Control Panel and saw that "Hide protected operating system files" was indeed enabled. This puzzled me, so I disabled the setting, and another desktop.ini appeared on my desktop, hidden like it usually is. So now I have two desktop.ini files on my desktop: one hidden and one not hidden. I am doing an antivirus check to see if anything is going on, and I will give an update soon. I am pretty sure these files are harmless and could be deleted, but I would rather get another person's opinion on the subject. Thanks!

    UPDATE: I did an antivirus scan, and it seems I have no problems. It is odd, because the file seems to maintain system file properties, such as not being able to be edited, among other things. Also, I have tried restarting my computer and it is still not hidden. So the question remains: what should I do with the file, and what caused it?
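    If the file turns out to be legitimate, re-marking it as a hidden system file is a one-liner (a sketch, assuming the stray copy lives in the user's Desktop folder):

        attrib +s +h "%USERPROFILE%\Desktop\desktop.ini"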

    Read the article

  • ISA 2006 SP1 - SSL Client Certificate Authentication in Workgroup Environment

    - by JoshODBrown
    We have an IIS 6 website that was previously published using an ISA 2006 SP1 standard server publishing rule. In IIS we had required that a client certificate be provided before the website could be accessed... this all worked fine and dandy. Now we wish to use a web publishing rule on ISA 2006 SP1 for this same website. However, it seems the client certificate doesn't get processed now, so of course the user can't access the website. I've read a few articles stating that the CA for the certificate needs to be installed in the trusted root certification authorities store on the ISA Server (I have done this), as well as installing the client certificate on the ISA Server (done as well). I have also verified that the ISA Server is able to access the CRL for our CA, no problem...

    In the listener properties for the web publishing rule, under Authentication and Client Authentication Method, there is an option for SSL Client Certificate Authentication... I select this, but it appears the only selectable Authentication Validation Method is Windows (Active Directory)... and there is no Active Directory in this environment. When I configure the rule with the defaults and then try to hit my website, it prompts for my certificate; I choose it and hit OK... then I'm given the following error:

    Error Code: 500 Internal Server Error. The server denied the specified Uniform Resource Locator (URL). Contact the server administrator. (12202)

    I check the event logs on the ISA Server, and in the Security log I see Event ID 536, Failure Audit. The reason: "The NetLogon component is not active." I think this is pretty obvious, since there is no Active Directory available. Is there a way to make this web publishing rule work using client certificates in this workgroup environment? Any suggestions or links to helpful documents would be greatly appreciated!

    Read the article

  • Port translation in router causing some email to fail

    - by user22037
    We are in the process of setting up a spam filter (SAVASM). One change we are making is to push incoming email on port 25 through our spam filter/server, but have users actually send their email on a different port. I am attempting to make this happen by using port address translation to send port 25 traffic to the SAVASM server's IP. As a step in making this change, I set up port translation without actually changing the IP addresses. The NAT rules for the email server went from one static NAT rule with no port specified to multiple static NAT rules, each with a port or group matching the access rules for that server (SMTP, POP3, HTTP, HTTPS, and some other custom ports).

    The problem we are running into is confusing. Some outgoing mail through this server fails when the router has the multiple NAT rules with port translation settings. Email goes through fine FROM our addresses TO our internal accounts and to Gmail. However, email fails when FROM our client's email address TO our client's email or their personal Comcast address. The only configuration that worked for them was changing FROM to Comcast, and then messages went through fine to both Comcast and the client's accounts. After switching back to the regular static NAT rule, everything worked for them again. Does anyone have a clue as to what might be going on? We are on a Cisco ASA 5500 box.
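    For reference, a per-port static PAT rule of the kind described would look something like this in pre-8.3 ASA syntax (a sketch; 192.168.1.25 is a placeholder for the mail server's inside address, not taken from the question):

        static (inside,outside) tcp interface smtp 192.168.1.25 smtp netmask 255.255.255.255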

    Read the article

  • Simple network between XP & 7 over a crossover cable: name resolution problem

    - by LostLord
    Hi, my dear friends. I have a simple home network between an XP machine and a Windows 7 machine, connected with a crossover cable (2 PCs).

    The one with Windows 7 is the "mother" and has 2 LAN devices (onboard + PCI).

    A. The onboard adapter (used for the ADSL internet connection) has TCP/IPv4 properties like this: obtain an IP automatically; preferred DNS server: 81.91.129.67; alternate DNS server: 4.2.2.4. The connection is shared, with no permission for others to change it, so everything is OK for internet on the Windows 7 machine.

    B. The other LAN card (PCI), which is connected to the XP PC, is configured like this: IP 192.168.2.11, subnet mask 255.255.255.0, gateway 0.0.0.0, DNS empty. Computer name: cougar. Workgroup: nethome. HomeGroup is disabled (I think that is for 2 PCs with Windows 7, not XP). Everything is off in the sharing options except file & printer sharing in the public area.

    The PC with XP is configured like this: IP 192.168.2.12, subnet mask 255.255.255.0, gateway 192.168.2.11, DNS 4.2.2.4 and 8.8.8.8. Computer name: tiger. Workgroup: nethome.

    At last, my little network is mostly OK: both machines have internet, and both can see each other by IP (\\192.168.2.11 or \\192.168.2.12). My problem is that when I type \\cougar on the XP machine, it shows an error about the network path, while on the Windows 7 machine \\tiger works perfectly. What is the problem on the XP system? A few days ago this network was fine (browsing by computer name) when both machines ran XP, so there is no problem with my cable or devices. Another problem: I cannot find tiger in my network list on the Windows 7 PC - why? Is something wrong with my network? Thanks in advance. Best regards.
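    If the problem is purely name resolution (which the working by-IP access suggests), one quick workaround is a hosts-file entry on the XP machine - a sketch, not a diagnosis:

        # C:\WINDOWS\system32\drivers\etc\hosts on tiger
        192.168.2.11    cougar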

    Read the article

  • Is it ever good to share a userid?

    - by Ladlestein
    On Un*x, is it ever a good idea to have one userid that many different people log into when they do stuff? Often I'm installing software or something on a Linux or BSD system. I've developed software for 24 years now, so I know how to make the machine do what I want, but I've never had responsibility for maintaining a multi-user installation where anyone really cared about security. So my opinions feel untested.

    Now I'm at a company where there's a server that many people log into with a single userid and do stuff. I'm installing some software on it. It's not really a public-facing server, and it is only accessible via VPN, but it's used by many people nonetheless, to run tests on custom software, things like that. It's a staging server.

    I'm thinking that, at the very least, using a single user obscures the audit trail, and that's bad. And it's just inelegant, because people don't have their own spaces on the server. But then again, with more userids, maybe there's a greater chance that one can be compromised, allowing attackers to gain access.

    Read the article
