Search Results



  • On Windows 2008 R2, how do I back up DHCP if the DHCP .mdb database is always busy?

    - by johnny
    I get this from my backup software:

        C:\WINDOWS\system32\dhcp\dhcp.mdb : The process cannot access the file because it is being used by another process.
        C:\WINDOWS\system32\dhcp\j50.log : The process cannot access the file because it is being used by another process.
        C:\WINDOWS\system32\dhcp\j50tmp.log : The process cannot access the file because it is being used by another process.
        C:\WINDOWS\system32\dhcp\tmp.edb : The process cannot access the file because it is being used by another process.

    My questions: Should I do a manual backup of DHCP via command-line tools, or via the MMC (Action > Backup), before I run my backup? Is the %SystemRoot%\System32\DHCP\Backup directory (which does get picked up by the backup software) always kept up to date? Partly answering my own question: the automatic-backup interval registry key is set to 0x3c, i.e. 60 minutes, I believe: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DHCPServer\Parameters\BackupInterval. This is not the backup software included with Windows; it is another product, but I have seen this with every backup software I've ever used.
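
    A minimal pre-backup step, sketched as an assumption about the setup (the target folder is illustrative): netsh can dump a consistent copy of the live DHCP database to a path the file-level backup can then read, instead of fighting over the locked .mdb:

        rem dump the live DHCP database to a folder the backup job can read
        netsh dhcp server backup C:\DhcpBackup

        rem confirm the automatic-backup interval (0x3c = 60 minutes)
        reg query HKLM\SYSTEM\CurrentControlSet\Services\DHCPServer\Parameters /v BackupInterval

    Scheduling the netsh line shortly before the backup window keeps the copied files no more than a few minutes stale.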

    Read the article

  • How do I get the IP address of my VPN server?

    - by kashif
    I connect to the internet using a PPTP connection from my computer, with the following settings: internet address: blue.connect.net.pk; user id: myusername; password: mypassword. My problem: my DWR-112 router doesn't accept a host name as the server address; it only accepts the VPN server's IP address, so I am not able to enter blue.connect.net.pk. My question: how can I find the IP address of the VPN server, so that I can configure my DWR-112 router to connect to the internet over PPTP?
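
    A quick way to resolve the name from any machine with working DNS (the host name is taken from the question; the provider may rotate addresses, so the result is not guaranteed to be stable):

        nslookup blue.connect.net.pk

    If the lookup returns several addresses, or the address changes over time, a host name is really what the router needs, and firmware that supports one may be the more durable fix.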

    Read the article

  • Assign fixed IP address via DHCP by DNS lookup

    - by Janoszen
    Preface: I'm building a virtualization environment with Ubuntu 14.04 and LXC. I don't want to write my own template, since the upgrade from 12.04 to 14.04 has shown that backwards compatibility is not guaranteed. Therefore I'm deploying my virtual machines via lxc-create, using the default Ubuntu template. The DNS for the servers is provided by Amazon Route 53, so no local DNS server is needed. I also use Puppet to configure my servers, so I want to keep the manual effort in the deployment minimal. Now, the default Ubuntu template assigns IP addresses via DHCP, so I need a local DHCP server to assign IP addresses to the nodes so I can SSH into them and get Puppet running. Since Puppet requires a proper DNS setup, assigning temporary IP addresses is not an option; the client needs to get the right hostname and IP address from the start. Question: which DHCP server do I use, and how do I get it to assign the IP address based only on the host-name DHCP option, by performing a DNS lookup on that very host name? What I've tried: I tried to make it work using the ISC DHCP server; however, the manual clearly states:

        Please be aware that only the dhcp-client-identifier option and the hardware
        address can be used to match a host declaration, or the host-identifier option
        parameter for DHCPv6 servers. For example, it is not possible to match a host
        declaration to a host-name option. This is because the host-name option cannot
        be guaranteed to be unique for any given client, whereas both the hardware
        address and dhcp-client-identifier option are at least theoretically guaranteed
        to be unique to a given client.

    I also tried to create a class that matches the hostname, like this:

        class "my-client-name" {
            match if option host-name = "my-client-name";
            fixed-address my-client-name.my-domain.com;
        }

    Unfortunately the fixed-address option is not allowed in class statements. I can replace it with a 1-size pool, which works as expected:

        subnet 10.103.0.0 netmask 255.255.0.0 {
            option routers 10.103.1.1;
            class "my-client-name" {
                match if option host-name = "my-client-name";
            }
            pool {
                allow members of "my-client-name";
                range 10.103.1.2 10.103.1.2;
            }
        }

    However, this would require me to administer the IP addresses in two places (Amazon Route 53 and the DHCP server), which I would prefer not to do. About security: since this is only used in the bootstrapping phase on an internal network, and is then replaced by a static network configuration by Puppet, this shouldn't be an issue from a security standpoint. I am, however, aware that the virtual machine bootstraps with "ubuntu:ubuntu" credentials, which I intend to fix once this is running.
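
    One way to keep Route 53 as the single source of truth, sketched as an assumption about the setup (host list, domain and paths are illustrative): generate the one-address class/pool pairs from DNS lookups, so the addresses are never maintained by hand on the DHCP side:

        #!/bin/bash
        # Regenerate one-address dhcpd pools from Route 53 A records.
        DOMAIN=my-domain.com
        OUT=/etc/dhcp/generated-pools.conf
        : > "$OUT"
        for host in node1 node2 node3; do
            ip=$(dig +short "${host}.${DOMAIN}" A | head -n1)
            [ -z "$ip" ] && continue    # skip names with no A record
            printf 'class "%s" { match if option host-name = "%s"; }\n' "$host" "$host" >> "$OUT"
            printf 'pool { allow members of "%s"; range %s %s; }\n' "$host" "$ip" "$ip" >> "$OUT"
        done

    The generated file would be pulled in with an include statement inside the subnet declaration (pools must live inside one), and the script re-run from cron or a Puppet exec before reloading isc-dhcp-server.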

    Read the article

  • Windows 2003/IIS6 - Unexpected Error: C0000005

    - by Chirans
    Our event logs are full of these errors: "Unexpected error. A trappable error (C0000005) occurred in an external object. The script cannot continue running." We have tried various things but could not stop this error, and after the errors repeat for a while, IIS hangs. We are also getting another error: "Warning: IIS log failed to write entry." Please help me find a solution.
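
    C0000005 is an access violation, so a crash dump of the worker process usually points at the faulting COM object. A sketch, assuming the Debugging Tools for Windows are installed and IIS6 is in worker process isolation mode (the dump folder is illustrative):

        rem write a dump of the IIS worker process on the next crash
        cscript adplus.vbs -crash -pn w3wp.exe -o C:\dumps

    Opening the resulting dump in WinDbg and looking at the faulting module on the call stack typically identifies which external object is misbehaving.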

    Read the article

  • Virtual Directory or Application in IIS 7?

    - by user25164
    I am new to Windows 2008 and IIS 7. With the default installation, IIS 7 has a Default Web Site. For my application, do I create a new website outside of the Default Web Site, or do I create a virtual directory or an application within the Default Web Site? Can someone explain the differences? Thank you.
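
    For reference, all three can be created with appcmd; a rough sketch (names and paths are illustrative): a site gets its own bindings (host name/port/IP), an application runs under a site at its own URL path (optionally in its own application pool), and a virtual directory merely maps a URL path onto a folder within an application:

        %windir%\system32\inetsrv\appcmd add site /name:"MyApp Site" /bindings:http/*:8080: /physicalPath:C:\inetpub\myapp
        %windir%\system32\inetsrv\appcmd add app /site.name:"Default Web Site" /path:/myapp /physicalPath:C:\inetpub\myapp
        %windir%\system32\inetsrv\appcmd add vdir /app.name:"Default Web Site/" /path:/images /physicalPath:C:\static\images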

    Read the article

  • Newly configured MSSQL2008, TIME_WAIT but no ESTABLISHED?

    - by 3molo
    Windows 2008 R2 Standard; no local firewall on it. Newly set up because an old SQL 2000 box had two disks die at the same time (or could it have been the RAID controller?). Luckily, I had fresh backups. The databases have been restored and SP2 for SQL 2008 applied. I can see various hosts trying to establish a session, but the (customer) sites do not work, and I don't see the expected established sessions. A Wireshark capture shows a full three-way handshake. Since it's customer machines connecting, I cannot log on to them and restart application pools. What on earth could be causing this?

        No. Time     Source     Destination Protocol Info
        1   0.000000 1.2.5.127  1.2.6.133   TCP      desktop-dna > ms-sql-s [SYN] Seq=0 Win=65535 Len=0 MSS=1380 SACK_PERM=1
            Frame 1: 62 bytes on wire (496 bits), 62 bytes captured (496 bits)
            Ethernet II, Src: Cisco_31:5e:09 (00:26:0b:31:5e:09), Dst: Vmware_b7:00:05 (00:50:56:b7:00:05)
            Internet Protocol, Src: 1.2.5.127 (1.2.5.127), Dst: 1.2.6.133 (1.2.6.133)
            Transmission Control Protocol, Src Port: desktop-dna (2763), Dst Port: ms-sql-s (1433), Seq: 0, Len: 0

        2   0.000123 1.2.6.133  1.2.5.127   TCP      ms-sql-s > desktop-dna [SYN, ACK] Seq=0 Ack=1 Win=8192 Len=0 MSS=1460 SACK_PERM=1
            Frame 2: 62 bytes on wire (496 bits), 62 bytes captured (496 bits)
            Ethernet II, Src: Vmware_b7:00:05 (00:50:56:b7:00:05), Dst: Cisco_31:5e:09 (00:26:0b:31:5e:09)
            Internet Protocol, Src: 1.2.6.133 (1.2.6.133), Dst: 1.2.5.127 (1.2.5.127)
            Transmission Control Protocol, Src Port: ms-sql-s (1433), Dst Port: desktop-dna (2763), Seq: 0, Ack: 1, Len: 0

        3   0.000884 1.2.5.127  1.2.6.133   TCP      desktop-dna > ms-sql-s [ACK] Seq=1 Ack=1 Win=65535 Len=0

    And netstat:

        TCP    1.2.6.133:1433    1.2.2.98:26895    TIME_WAIT    0
        TCP    1.2.6.133:1433    1.2.2.98:26912    TIME_WAIT    0
        TCP    1.2.6.133:1433    1.2.2.98:26918    TIME_WAIT    0
        TCP    1.2.6.133:1433    1.2.2.98:26931    TIME_WAIT    0
        TCP    1.2.6.133:1433    1.2.5.127:2736    TIME_WAIT    0
        TCP    1.2.6.133:1433    1.2.5.127:2737    TIME_WAIT    0
        TCP    1.2.6.133:1433    1.2.5.127:2738    TIME_WAIT    0
        TCP    1.2.6.133:1433    1.2.5.127:2739    TIME_WAIT    0

    Read the article

  • TFS2010 Hangs “Waiting for Build Agent”

    - by Qpirate
    I have asked this question over on SO (the link to the question is here), but I am hoping this is a better place for it. I have three VMs, each running the TFS Build Host service: one has one controller and one agent; the other two have two build agents each. Most of the time (7 out of 10 builds) it comes back with the following error message, and nothing is logged when it happens:

        TF215097: An error occurred while initializing a build for build definition
        BUILD_DEFINITION: There was no endpoint listening at
        http://MACHINE1:9191/Build/v3.0/Services/Controller/14 that could accept the
        message. This is often caused by an incorrect address or SOAP action. See
        InnerException, if present, for more details.

    The following is the config file I have created:

        <configuration>
          <appSettings>
            <add key="traceWriter" value="true"/>
          </appSettings>
          <system.diagnostics>
            <switches>
              <add name="BuildServiceTraceLevel" value="4"/>
              <add name="API" value="4"/>
              <add name="Authentication" value="4"/>
              <add name="Authorization" value="4"/>
              <add name="Database" value="4"/>
              <add name="General" value="4"/>
              <add name="traceLevel" value="4"/>
            </switches>
            <trace autoflush="true" indentsize="4">
              <listeners>
                <add name="myListener"
                     type="Microsoft.TeamFoundation.TeamFoundationTextWriterTraceListener,Microsoft.TeamFoundation.Common, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
                     initializeData="c:\logs\TFSBuildServiceHost.exe.log" />
                <remove name="Default" />
              </listeners>
            </trace>
          </system.diagnostics>
        </configuration>

    I do have my own custom activities in my build process, but that does not seem to be the problem, as the build sometimes does run. I have tried refreshing the template, as some sites suggest. Has anyone come across a solution for this problem? Or can anyone tell me how to catch these errors when they happen?

    Read the article

  • Replicating EFS encrypted files

    - by floyd
    Recently I attempted to configure Microsoft's DFSR on Windows 2008 R2 to replicate a folder that was encrypted with EFS. The setup gave no errors or warnings, but later I read that DFSR does not support EFS in any way. There were also events in the DFSR event log indicating that an encrypted file was found and won't be replicated. http://technet.microsoft.com/en-us/library/cc773238(v=ws.10).aspx#BKMK_052 My question is: are there any tools that would allow this to happen? Preferably software-based. This would be replicating over the LAN from one node to another, both servers in the same domain.
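
    One possible stopgap, assuming a scheduled one-way copy is acceptable in place of true multi-master replication (share names and paths are illustrative): robocopy can copy EFS-encrypted files in raw encrypted form with /EFSRAW, so they are never decrypted in transit:

        rem mirror the encrypted folder to the second node, keeping files encrypted
        robocopy D:\SecureShare \\node2\SecureShare$ /MIR /EFSRAW /R:1 /W:1 /LOG:C:\logs\efs-mirror.log

    Run it as a scheduled task; note that /MIR deletes destination files that no longer exist on the source, so test carefully first.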

    Read the article

  • Run Wave Trusted Drive Manager from a bootable CD, recover crashed encrypted SSD?

    - by TigerInCanada
    Is there a way to run Wave Trusted Drive Manager from a live CD to access a non-bootable, full-disk-encrypted SSD? http://www.wave.com/products/tdm.asp The crashed disk is a Samsung SSD PB22-JS3, 128 GB. It has bad blocks at 128-block intervals. If the SSD password could be unset, would sending the unit out for disaster recovery be possible? What might cause a nearly new SSD to crash in this way, and what is the probability of it happening again? We have other units in service, and I could do without every laptop disk in the company crashing...

    Read the article

  • What's the advantage of OpenVPN over SSTP?

    - by Jose
    If considering a Windows-only environment, what's the advantage of introducing OpenVPN as the company VPN service instead of the Windows built-in protocols? Especially since the new SSTP protocol already overcomes one of the weaknesses of PPTP, which may not get through firewalls/NAT. I'm wondering whether there is any reason not to use the Windows-integrated solution. The strength of the security could be an issue, but I'm not sure how different they are (I know the MS VPN was vulnerable, but is it still?). Thanks.

    Read the article

  • How do I set "MaxPermSize" for Atlassian Fisheye/Crucible running as service on Win2k3?

    - by Jeremy
    I have been trying to set up Atlassian Fisheye/Crucible as a service on Windows 2003 R2 for two weeks. I keep getting various "java.lang.OutOfMemoryError: PermGen space" errors, which crash Fisheye and force me to restart the service. I've followed the example on the Atlassian support site for configuring MaxPermSize within the service wrapper. However, when I check SysInfo inside the Fisheye admin pages and in the debug log, I don't see any confirmation. The Java heap info appears in both places, so I'd expect the MaxPermSize setting to show up in both places as well. The error persists, and Atlassian support has been little help. I appreciate any help.
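
    If the service uses the Java Service Wrapper convention (an assumption, since Atlassian's service example for that era did), the flag must be a numbered wrapper.java.additional property in wrapper.conf, and the index must not collide with an existing one (the 3 below is illustrative):

        # pass -XX:MaxPermSize through the service wrapper to the Fisheye JVM
        wrapper.java.additional.3=-XX:MaxPermSize=256m

    After editing, restart the Windows service; if the value is picked up, -XX:MaxPermSize should appear among the JVM input arguments shown on the SysInfo page.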

    Read the article

  • What can cause a kernel hang on redhat 4?

    - by Ivan Buttinoni
    I have to solve a nasty problem on a ten-machine "cluster": randomly, one of these machines hangs during a heavy computation; sometimes it still answers ping, sometimes not. The problem was described to me over the phone; I have not yet touched or seen these machines myself, so I can't be more precise. It seems there is no (real) keyboard or monitor attached to them, so I have nothing from keyboard LEDs or on-screen messages. Don't worry, what I really need are suggestions on where to look for the problem, and on what can cause a kernel hang on a working machine. I also saw this post, but it seems to be the same need in a different situation. My ideas so far:

    - HW problem (RAM, CPU, fan, etc.)
    - bad autofs configuration
    - bad NFS(?) configuration
    - presence of a trojan/hacker/etc.
    - /dev/"swap" linked to /dev/zero
    - kernel out of memory (??)
    - kernel bug

    In other words, I am trying to imagine what kind of event could crash the kernel instead of the application that generated the event. What hangs have YOU experienced before? Write to me! TIA
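
    To gather evidence remotely, one option, sketched here with placeholder IPs/MACs (netconsole module availability should be verified on the RHEL 4 kernel in question), is to enable the magic SysRq key and stream kernel messages to another box over UDP:

        # allow SysRq triggers (e.g. echo t > /proc/sysrq-trigger for a task dump)
        echo 1 > /proc/sys/kernel/sysrq
        # ship kernel messages to a log host; receive there with e.g.: nc -u -l -p 6666
        modprobe netconsole netconsole=6665@10.0.0.5/eth0,6666@10.0.0.9/00:11:22:33:44:55

    If a machine still pings while hung, a soft lockup or OOM trace may show up in the streamed messages even though the console is dead.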

    Read the article

  • Roaming Profiles & Redirected Folders - storage consumption? offline files and caching?

    - by Ben Swinburne
    I understand the concepts of both roaming profiles and folder redirection, and have used each separately before. I am about to set up a network from scratch and would ideally like to use both, primarily for the following reasons:

    - Roaming profiles allow users to log on to any machine and have their profile follow them.
    - Redirected folders allow users' My Documents, Desktop, etc. to be backed up without users having to log off at the end of the day; the servers can run their backups overnight with no files missed because someone stayed logged on.
    - Redirected folders also largely alleviate the slow logon times caused by large profiles.

    My question is: if some of the folders are redirected, and therefore not part of the roaming profile, what happens on machines that truly roam (i.e. laptops)? If Offline Files or a cache is involved, does the problem of users having to log off come back? And with both enabled, is there any duplication - i.e. with a users$ share and a profiles$ share, would Desktop exist twice, for example?

    Read the article

  • Simplest way to shrink transaction log files on a mirrored production database

    - by MGOwen
    What's the simplest way to shrink the transaction log file on a mirrored production database? I have to, as my disk space is running out. I will take a full database backup before I do this, so I don't need to keep anything from the transaction log (right? I take a daily full database backup and will probably never need a point-in-time restore, though I'll keep the option open if I can - that's all the .ldf is really for, correct?). (I hope this isn't an exact duplicate; I read a lot of questions but couldn't find this exact scenario elsewhere.)
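
    For reference, a minimal T-SQL sketch (database and logical file names are placeholders). One caveat: a mirrored database must stay in FULL recovery, and the log is truncated by log backups, not by full backups, so back up the log first:

        -- find the logical name of the log file
        SELECT name FROM sys.database_files WHERE type_desc = 'LOG';
        -- back up the log so the inactive portion can be reused
        BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase_log.trn';
        -- then shrink the file to, e.g., 1 GB
        DBCC SHRINKFILE (MyDatabase_log, 1024);

    If the shrink achieves little, the active portion of the log may sit at the end of the file; taking another log backup and shrinking a second time often helps.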

    Read the article

  • Firewall GPO not applying despite being enumerated by gpresult

    - by jshin47
    I need to open up the admin$ share on all of my domain's client PCs, and I am trying to do so using Group Policy. I defined computer policy for Windows Firewall with Advanced Security in a policy object linked to the appropriate container and added the appropriate rules. However, they are not being applied! I feel like I have tried all of the obvious steps: I've checked gpresult, and the resulting set of policy is the way I would expect it to look. I've run gpupdate /force and gpupdate /sync on a few client computers, but no matter what I do they don't seem to respond to my changes. I know that other computer policies in the GPO are being applied, so it is strange that these are not. I have also disabled exceptions on clients in the firewall GPO, but that doesn't seem to be applying either. Here is a screenshot of firewall.cpl from a client. Basically, although other options in the same GPO ARE applied as computer policy, the firewall settings seem to be ignored.
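
    Two quick checks from an affected client, sketched as assumptions about the environment (note that Windows Firewall with Advanced Security policy is ignored by pre-Vista machines, so XP clients would never pick these rules up):

        rem full computer-side RSoP, to see where the firewall settings come from
        gpresult /scope computer /v > C:\gp.txt
        rem effective firewall state per profile, to compare against what the GPO sets
        netsh advfirewall show allprofiles

    If netsh shows the profiles in a state that contradicts the GPO even though gpresult lists the policy, the settings are being enumerated but overridden or filtered somewhere between RSoP and the firewall service.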

    Read the article

  • Setting up Kerberos SSO in Windows 2008 network

    - by Arturs Licis
    We recently introduced Kerberos (SPNEGO) single sign-on in our web portal and tested it on a Windows network with a Windows 2003 domain controller. Now, trying to test it on a network controlled by Windows 2008 R2, SSO just doesn't work, due to defective tokens. Up to that moment I was pretty sure that something was wrong with the environment and that those were NTLM tokens. We double-checked the IE settings etc., but nothing helped. Then we checked the following settings for both users (the one logged on at a client test machine, and the one used as the service principal):

    - This account supports Kerberos AES 128 bit encryption.
    - This account supports Kerberos AES 256 bit encryption.

    ...and the error message changed to "GSSException: Failure unspecified at GSS-API level (Mechanism level: Encryption type AES256CTS mode with HMAC SHA1-96 is not supported/enabled)". It makes me think that Internet Explorer was sending Kerberos tokens all along, and that there is just some configuration missing, or that ktpass.exe was executed incorrectly. Here is how ktpass.exe was invoked:

        C:\> ktpass /out portal1.keytab /mapuser USER /princ HTTP/[email protected] /pass *
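
    Two things worth checking, offered as assumptions since the exact environment isn't known: the keytab must actually contain AES keys, and a Sun/Oracle JVM needs the Unlimited Strength JCE policy files (local_policy.jar and US_export_policy.jar under jre/lib/security) before it will accept AES-256 Kerberos tickets. A sketch of a regenerated keytab, with a placeholder principal:

        ktpass /out portal1.keytab /mapuser USER /princ HTTP/portal.example.com@EXAMPLE.COM /ptype KRB5_NT_PRINCIPAL /crypto AES256-SHA1 /pass *

    Without the JCE policy jars, "Encryption type AES256CTS ... is not supported/enabled" is exactly the error the Java GSS stack throws.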

    Read the article

  • Sane patch schedule for Windows 2003 cluster

    - by sixlettervariables
    We've got a cluster of 75 Win2k3 nodes at work in a coarse-grained compute cluster. The cluster is behind a mountain of firewalls and resides in its own VLAN. Jobs of all sizes and types run on the cluster, and all of the executables are custom-made. (ed: additional notes on our executables) The jobs range from 30 seconds to 7 days in duration, and may contain one executable or 2000 sub-jobs (of short duration). Obviously we are trying to avoid the situation where IT schedules a reboot during a 7-day production job. We have scheduling software which accommodates all of the normal tasks for a coarse-grained cluster, and we can control which machines are active for submission, etc. If WSUS were in some way scriptable (or the client could state its availability for shutdown), we could coordinate the two systems and help out; see the sketch below. Currently, the patch schedule is the Sunday after Patch Tuesday, regardless of what is running on the cluster. We have to ask for an exemption every time we want to delay patching a machine for a long-running production job. Basically, while our group is responsible for the machines, we have little control over IT's patch schedule. Is patching monthly on Microsoft's schedule sane for a production Windows cluster? Are there software hooks in WSUS where we could say, "please don't reboot just yet"?
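
    WSUS itself is scriptable through the Microsoft.UpdateServices.Administration API, but a simpler hook, sketched here as an assumption about the job scheduler, is to have the scheduler hold the Windows Update client on nodes running long jobs and release it afterwards:

        rem hold patching while a long job runs (run on the node)
        sc config wuauserv start= disabled
        net stop wuauserv

        rem release the hold when the job completes
        sc config wuauserv start= auto
        net start wuauserv

    WSUS deadlines are not honored while the service is down, so pair this with a reporting check so that no node stays unpatched indefinitely.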

    Read the article

  • DFS "clobering" files

    - by Badger
    We have DFS set up using the DFS Management administrative tool. I turned on replication in the Distributed File System administrative tool as well, and this morning we lost tons of files from that share. Please explain to me why this was wrong, and whether there is anything that can be done to repair it. (No, we don't have backups. We had some shadow copies, but those were deleted as well. We have been using DFS as its own backup.)
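
    One place worth checking before giving up, assuming the share uses DFSR (2003 R2 and later) rather than legacy FRS: DFSR moves files it overwrites or deletes during replication into a hidden per-folder staging area, up to a quota, so some of the lost files may still be there (the path is illustrative; substitute the real replicated folder):

        dir /a "D:\YourReplicatedFolder\DfsrPrivate\ConflictAndDeleted"

    File names in that folder are mangled; the mapping back to the original names lives in ConflictAndDeletedManifest.xml alongside it.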

    Read the article

  • Why cache static files with Varnish, why not pass?

    - by Saif Bechan
    I have a system running nginx / php-fpm / varnish / WordPress and Amazon S3. Now, I have looked at a lot of configuration files while setting up the system, and in all of them I found something like this:

        /* If the request is for pictures, javascript, css, etc */
        if (req.url ~ "\.(jpg|jpeg|png|gif|css|js)$") {
            /* Remove the cookie and make the request static */
            unset req.http.cookie;
            return (lookup);
        }

    I do not understand why this is done. Most of the examples also run nginx as the web server. Now the question is: why would you use the Varnish cache to cache these static files? It makes much more sense to me to cache only the dynamic files, so that php-fpm / MySQL don't get hit as much. Am I correct, or am I missing something here?

    UPDATE: I want to add some info to the question based on the answer given. If you have a dynamic website where the content actually changes a lot, caching does not make sense. But if you use WordPress for a largely static website, for example, it can be cached for long periods of time. That said, what matters more to me is static content. I found a link with some tests and benchmarks of different cache applications and web servers: http://nbonvin.wordpress.com/2011/03/14/apache-vs-nginx-vs-varnish-vs-gwan/ nginx is actually faster at serving static content, so it makes more sense to just let those requests pass; nginx works great with static files. Apart from that, most of the time static content is not even on the web server itself. Most of the time that content is stored on a CDN somewhere, maybe AWS S3, something like that. I think the Varnish cache is the last place where you want your static content stored.
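
    For comparison, the pass variant the question argues for would look like this in vcl_recv (same Varnish 2.x-era syntax as the snippet above; adjust the extension list to taste):

        if (req.url ~ "\.(jpg|jpeg|png|gif|css|js)$") {
            /* let nginx serve static files directly, bypassing the cache */
            return (pass);
        }

    The trade-off: lookup serves repeat hits from Varnish's memory without touching nginx at all, while pass keeps Varnish's cache free for dynamic pages at the cost of one extra proxy hop per static request.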

    Read the article

  • Windows 2008 additional disk going offline with reboots on Amazon EC2

    - by Ernest Mueller
    OK, so I took the stock Windows 2008 64-bit Amazon AMI and wanted to add a D: drive for page-file space and crash dumps. I launched the instance with a second EBS volume attached as xvdf, went into Disk Management, set the disk online, and added the page file and crash dump settings; all of that works. But when I reboot, the box comes back up with that second drive "Offline". How do I get the disk to come online automatically on reboot (and, most notably, when I turn this into an AMI and launch more instances off it - I've tried that too, and it's the same deal with the D: drive)?
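
    The usual suspect for this behavior is the disk SAN policy, which on some Windows 2008 editions defaults to keeping newly seen external disks offline. A sketch of changing it with diskpart (run elevated; the disk number is illustrative, and baking this into the image before creating the AMI carries it to new instances):

        diskpart
        DISKPART> san policy=OnlineAll
        DISKPART> list disk
        DISKPART> select disk 1
        DISKPART> attributes disk clear readonly
        DISKPART> online disk

    san policy=OnlineAll persists across reboots, so instances launched from the resulting AMI should bring the volume up automatically.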

    Read the article
