Search Results

Search found 92550 results on 3702 pages for 'server variables'.

  • "log on as a batch job" user rights removed by what GPO?

    - by LarsH
    I am not much of a server administrator, but I get my feet wet when I have to. Right now I'm running some COTS software on a Windows 2008 Server machine. The software installer creates a few user accounts for running its processes, and then gives those users the right to "log on as a batch job". Every so often (e.g. yesterday at 2:52pm and this morning at 7:50am), those rights disappear and the software stops working. I can verify that the user rights are gone by using secedit /export /cfg e:\temp\uraExp.inf /areas USER_RIGHTS and I have a script that does this every 30 seconds and logs the results with a timestamp, so I know when the rights disappear. In the "good" state, i.e. after I install the software and it's working correctly, the SeBatchLogonRight line in the secedit export includes the user accounts created by the software; every few hours (sometimes more often), those user accounts are removed from that line. The same thing can be seen with the GUI tool Local Security Policy > Security Settings > Local Policies > User Rights Assignment > Log on as a batch job: in the "good" state that policy includes the needed user accounts, and in the bad state it does not. Based on the above-mentioned logging script and the timestamps at which the user rights are being removed, I can see clearly that some GPO is causing the change. The GPO Operational log shows GPOs being processed at exactly the right times, e.g.: Starting Registry Extension Processing. List of applicable GPOs: (Changes were detected.) Local Group Policy. I have also applied GPOs on demand using gpupdate /force, and verified that this caused the user rights to be removed. We have looked over local group policies till our eyes are crossed, trying to figure out which one might be stripping these user rights to "log on as a batch job". We have not configured any local group policies on this machine that we know of, so is there a default local group policy that might typically do such a thing? Are there typical domain policies that would do this? I have been working with our IT staff colleagues to troubleshoot the problem, but none of them are really GPO experts; they wear many hats, and they do what they need to do to keep most things running. Any suggestions would be greatly appreciated!
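
    For reference, a minimal sketch of the kind of logging loop described above (file paths and the logging format are assumptions, not the poster's actual script):

        @echo off
        rem Hedged sketch of a watcher that exports User Rights Assignment
        rem every 30 seconds and appends a timestamped copy of the
        rem SeBatchLogonRight line to a log file.
        :loop
        secedit /export /cfg e:\temp\uraExp.inf /areas USER_RIGHTS
        echo %date% %time% >> e:\temp\uraLog.txt
        findstr /c:"SeBatchLogonRight" e:\temp\uraExp.inf >> e:\temp\uraLog.txt
        timeout /t 30 /nobreak > nul
        goto loop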

    Read the article

  • VMware postfix server drops connection

    - by nicoX
    Our physical server godzilla forwards mail to our virtual VMware server b4. They are on the same network. The connection often drops: we can't ping godzilla from b4, which means mail from godzilla won't reach b4 and sits in the mail queue. Sometimes it takes a few hours and the issue fixes itself; b4 will wake up and the mail will be delivered. Also, if we ssh into b4 remotely, b4 wakes up and receives and delivers any queued mail from godzilla.

        netadmin@b4:/var/log$ arp -a
        ? (192.168.209.80) at 00:1E:C9:AE:79:9D [ether] on eth0
        root@godzilla:/usr/local/bin# arp -a
        ? (192.168.209.20) at 00:50:56:91:7d:b2 [ether] on eth0
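
    The symptom that inbound SSH wakes b4 up suggests an ARP-cache or VM NIC power-state issue; a hedged workaround sketch, using the addresses from the arp output above:

        # On godzilla: pin b4's ARP entry so it cannot expire
        # (IP/MAC taken from the arp output above; adjust to the real NIC).
        arp -s 192.168.209.20 00:50:56:91:7d:b2

        # Or, on b4: keep the path warm with a cron job (crontab -e):
        # * * * * * ping -c 3 192.168.209.80 > /dev/null 2>&1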

    Read the article

  • Add "secure" in cookie by httpd server

    - by Abhishek
    How do I configure my httpd server to add the "Secure" flag to cookies? I tried the approach in the link below, http://blog.modsecurity.org/2008/12/fixing-both-missing-httponly-and-secure-cookie-flags.html but it did not seem to work. I inspected the cookies via Firebug and found that they have "HttpOnly" but not "Secure". I double-checked the configuration and it is the same as mentioned in the link. I also noticed that the server response time goes up a bit when doing this with mod_security. Is there a better way to do it? Any ideas or pointers to configurations would be helpful.
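
    For what it's worth, a lighter-weight sketch using mod_headers instead of mod_security (assumes Apache 2.2.4 or later with mod_headers enabled; not the approach from the linked article):

        # Rewrite every outgoing Set-Cookie header to append the HttpOnly
        # and Secure flags, without involving mod_security.
        Header edit Set-Cookie ^(.*)$ "$1; HttpOnly; Secure"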

    Read the article

  • REST-based file server

    - by Chris Wenham
    I need to be able to PUT files and GET them later using nothing but HTTP, so I went searching for something that might match the terms "REST file server" or "HTTP file server" or "REST drop-box", etc. Unfortunately, these terms bring up the wrong kind of results on Google. What I want is the equivalent of an SMB fileshare over HTTP. Some ideal features:

    - Can PUT a file of any type at http://servername/service/any/path/I/want/document.pdf
    - Anyone with access can GET that file at the URL I PUT it at
    - Supports AV scanning on any new file that has been PUT
    - Supports DELETE of existing resources (files)

    Our shop runs Windows, but I'd be interested to know about Unix software that can do this kind of thing, too. It's to be used in an IT department for private users only. It won't be on a public-facing IP address. Does anything like this exist?
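
    Apache's mod_dav is one stack that covers most of these bullets; a minimal sketch (paths, hostnames and the auth setup are assumptions, and AV scanning would have to be layered on separately, e.g. a filesystem scanner watching the DAV root):

        # httpd.conf excerpt: expose /service as a WebDAV share that
        # accepts PUT/GET/DELETE at arbitrary path depths.
        LoadModule dav_module modules/mod_dav.so
        LoadModule dav_fs_module modules/mod_dav_fs.so

        DavLockDB "/var/lock/apache2/DavLock"
        Alias /service /srv/davfiles
        <Directory "/srv/davfiles">
            Dav On
            AuthType Basic
            AuthName "file-dropbox"
            AuthUserFile "/etc/apache2/dav.passwd"
            Require valid-user
        </Directory>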

    Read the article

  • I'm thinking n+1 in a hyper-v r2 cluster managed by scvmm is not a great idea anymore

    - by tony roth
    Around here the clusters (not Hyper-V clusters) are typically configured as n+1, so they are asking me to create an n+1 Hyper-V R2 cluster. These will be configured with both CSVs and live migration and managed via SCVMM R2. My thinking is that it's a waste to have a node sitting there idle; in my opinion it would be better to spread the headroom that would traditionally be the +1 server amongst the N nodes. Anybody have an opinion on this? Thanks.

    Read the article

  • HTTP Caching Server that supports POST

    - by Jeroen
    I am hosting a REST service which sends appropriate cache-control headers, and I use Varnish as a caching server in front of my webserver. However, a limitation of Varnish is that it doesn't support caching HTTP POST and HTTP PUT. Is there any alternative caching server that can cache these requests? I understand that caching POST is a bit tricky because you cannot just cache based on the URL as a key like for GET; it needs to actually inspect the request body. In the case of multipart/form-data requests, there should probably be a limit on the size of the request body for it to be cached (so that big file uploads, etc. won't be cached). Nevertheless I really want to be able to cache short HTTP POSTs, or at least the application/x-www-form-urlencoded ones.

    Read the article

  • User/Group Policies in Windows 2000 domain controller

    - by Chris
    In our Windows 2000 Server Active Directory I have 5 groups of users, and every user has different policies. The problem is that a different desktop loads for one specific user no matter what changes I make in Administrative Templates. If I copy this user's profile and paste it into another group with a different name, the Windows desktop loads as it should, but some policies are not applied. Does anybody know a way to solve this problem other than creating a new group and user from scratch?

    Read the article

  • Squid on Windows load balancing only to one server

    - by Martin L.
    After thousands of Google searches and days of trying I can't get the load balancer/failover in Squid on Windows to work. I am using Squid 2.7. My webservers are two single-NIC lighttpd boxes and one dual-NIC lighttpd box. server1 in this example is running Squid on port 80 and lighttpd on port 8080 (just to test).

    Requirements:

    - All 3 webservers running lighttpd should be balanced.
    - Two options for load balancing: best would be, if server1 is busy server2 takes over, if server2 is busy server3 takes over, etc. Otherwise round-robin style, evenly distributed load, e.g. server1 takes the first call, server2 the second, etc.
    - All requests should be treated the same way (no URL rewriting or so on).
    - Sent host headers have to be passed to every server as the HTTP Host header, speaking of "server1", "server1.company.internal" and "10.211.1.1".

    My approach:

        acl all src all
        acl manager proto cache_object
        http_port 80 accel defaultsite=server1.company.internal vhost

        # reverse proxy entries
        cache_peer 10.211.2.1 parent 8080 0 no-query originserver round-robin login=PASS name=server1_nic1
        cache_peer 10.211.1.2 parent 80 0 no-query originserver round-robin login=PASS name=server2_nic1
        cache_peer 10.211.2.3 parent 8080 0 no-query originserver round-robin login=PASS name=server3_nic1
        cache_peer 10.211.2.4 parent 8080 0 no-query originserver round-robin login=PASS name=server3_nic2

        # decl of names of squid host
        acl registered_name_hostdomain dstdomain server1.company.internal
        acl registered_name_host dstdomain server1
        # ip of squid host
        acl registered_name_ip dstdomain 10.211.2.1

        # access: redirects the correct squid hostname
        http_access allow registered_name_hostdomain
        http_access allow registered_name_host
        http_access allow registered_name_ip
        http_access deny all

        cache_peer_access server1_nic1 allow registered_name_hostdomain
        cache_peer_access server1_nic1 allow registered_name_host
        cache_peer_access server1_nic1 allow registered_name_ip
        cache_peer_access server2_nic1 allow registered_name_hostdomain
        cache_peer_access server2_nic1 allow registered_name_host
        cache_peer_access server2_nic1 allow registered_name_ip
        cache_peer_access server3_nic1 allow registered_name_hostdomain
        cache_peer_access server3_nic1 allow registered_name_host
        cache_peer_access server3_nic1 allow registered_name_ip
        cache_peer_access server3_nic2 allow registered_name_hostdomain
        cache_peer_access server3_nic2 allow registered_name_host
        cache_peer_access server3_nic2 allow registered_name_ip
        cache_peer_access server1_nic1 deny all
        cache_peer_access server2_nic1 deny all
        cache_peer_access server3_nic1 deny all
        cache_peer_access server3_nic2 deny all

        never_direct allow all

    Problems:

    - The load balancer does not balance to anything other than the first server. Only if the first server is killed in some way will the second take over. I have seen the others working at some point, but definitely not with the intended load balancing described above.
    - If cache_peer_access is not defined, sometimes the wrong hostname is sent to the backend webserver, and this always depends on the defaultsite= parameter, probably because the Host header on the request to Squid is not set and is replaced by defaultsite. Leaving out defaultsite didn't solve the problem. The only workaround I found for this is the current approach with cache_peer_access.

    Questions:

    - Does cache_peer_access influence the round-robin?
    - Is there a better workaround to pass the Host header to the backend webservers?
    - Which parameters increase the speed of load balancing, or does anyone have a better approach?

    -Martin
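
    One hedged idea for the Host-header question (an assumption based on Squid 2.7's documented cache_peer options, not verified here): the forcedomain= option overrides the Host header sent to a peer, which might remove the need for the cache_peer_access workaround entirely:

        # Force the Host header sent to each backend instead of steering
        # requests with cache_peer_access rules (names/IPs from the post).
        cache_peer 10.211.2.1 parent 8080 0 no-query originserver round-robin login=PASS name=server1_nic1 forcedomain=server1.company.internal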

    Read the article

  • Access my local server by hostname or servername

    - by S.M.09
    I have a local server hosting a few applications in Tomcat, fronted by an Apache proxy. Clients currently have to access these applications as 10.XXX.XXX.XX:8080/appName or 10.XXX.XXX.XX/appName, but I want to replace the IP address with some other name related to my applications. I cannot go and enter the hostname of the server in each user's /etc/hosts, nor do I want to set up DNS. Is there another way to do this? I am using ProxyPass XXX YYY to forward all Tomcat applications to port 80.
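
    For illustration, a hedged sketch of the name-based virtual host this implies (the hostname is a placeholder, and clients still need some way, such as DNS, WINS or a hosts entry, to resolve that name to the server's IP):

        <VirtualHost *:80>
            ServerName apps.example.internal
            # Forward each Tomcat application through the Apache front end.
            ProxyPass        /appName http://localhost:8080/appName
            ProxyPassReverse /appName http://localhost:8080/appName
        </VirtualHost>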

    Read the article

  • Server problem (duh)

    - by j-t-s
    Sorry the title couldn't be more specific. I installed Abyss Web Server. I'm running Windows XP Home Edition and I have wireless mobile broadband internet. I (and other people on other networks) used to be able to access my site by entering my IP address in the browser, but after I formatted and installed Abyss Web Server again, this no longer works. There are no errors. I CAN visit my own site by entering my IP address, BUT nobody else can; for them it just says "connecting" in the browser's status bar and never changes. I have consulted the docs and found no help. Google hasn't helped with this problem either. Can somebody please help? Thank you :)

    Read the article

  • Setting up reverse proxy via an HTTP proxy?

    - by billc.cn
    I have software that has to call an external web API from inside our firewall. Currently the only way out is through the HTTP proxy feature of the firewall; however, the software itself does not support proxy configuration. So I am wondering: is it possible to set up a reverse proxy for the API that goes through the HTTP proxy? The server is running WS2003, and I can install any software on it.
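
    One way this can work in principle (a sketch assuming Apache httpd on the WS2003 box; hostnames and ports are placeholders): ProxyRemote chains mod_proxy's outbound requests through the firewall's HTTP proxy, while ProxyPass exposes the API locally to the software:

        # Local reverse proxy for the external API, with the actual
        # fetch routed through the firewall's HTTP proxy.
        ProxyRemote  http://api.example.com  http://firewall-proxy.internal:8080
        ProxyPass        /api http://api.example.com/
        ProxyPassReverse /api http://api.example.com/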

    Read the article

  • High load on a nagios server -- How many service checks for a nagios server is too many?

    - by Josh
    I have a Nagios server running Ubuntu with a 2.0 GHz Intel processor, a RAID10 array, and 400 MB of RAM. It monitors a total of 42 services across 8 hosts, most of which are checked using the check_http plugin every 5 minutes, some every minute. Recently the load on the Nagios server has been above 4, often as high as 6. The server also runs Cacti, gathering statistics every minute for 6 hosts. I wonder, how many services should hardware like this be able to handle? Is the load so high because I am pushing the limits of the hardware, or should this hardware be able to handle 42 service checks plus Cacti? If the hardware is inadequate, should I look to add more RAM, more cores, or faster cores? What hardware / service checks are others running?

    Read the article

  • Linux FTP server with virtual users

    - by kjertil
    I know there are already similar questions on this matter, but the answers don't really make much sense to anyone who is not technically comfortable in Linux. I've already tried articles like this one, for example: http://howto.gumph.org/content/setup-virtual-users-and-directories-in-vsftpd/ with the result of accidentally breaking the whole system. The problem is that, while there are several technical possibilities to set up virtual users with an FTP server, it is not as easy as managing, for instance, a FileZilla server on Windows. I've seen some web-based GUIs, but most of them seem to be out of date. The different flavours of Linux and the large number of popular FTP servers also seem to make the matter more complicated. I guess my question is: is there a way to set up virtual FTP users on Linux without the hassle of manually editing PAM, MySQL and config files?
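
    For reference, a minimal sketch of what a vsftpd virtual-user setup involves (paths follow common conventions and are assumptions; this is the manual route the poster hopes to avoid, shown only to illustrate how little is strictly required):

        # /etc/pam.d/vsftpd -- authenticate against a Berkeley DB user file
        auth    required pam_userdb.so db=/etc/vsftpd/virtual_users
        account required pam_userdb.so db=/etc/vsftpd/virtual_users

        # /etc/vsftpd.conf -- the relevant lines
        pam_service_name=vsftpd
        guest_enable=YES
        guest_username=ftp
        local_enable=YES
        chroot_local_user=YES

        # Build the user database from alternating username/password lines:
        #   db_load -T -t hash -f users.txt /etc/vsftpd/virtual_users.db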

    Read the article

  • Sharepoint and SSRS Native Integration - Not seeing web parts after cabinet file install

    - by Greg_the_Ant
    I followed the steps here to set up SharePoint integration with Reporting Services (SSRS) using native mode (i.e., to get the Report Explorer and Report Viewer web parts). However, after adding the cabinet file, e.g.

        STSADM.EXE -o addwppack -filename "C:\Program Files\Microsoft SQL Server\100\Tools\Reporting Services\SharePoint\RSWebParts.cab" -globalinstall

    I still don't see Report Explorer in the web parts list. I'm very stuck and confused :-(
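
    One hedged thing to try (an assumption about the cause, not something the post confirms): -globalinstall deployments are queued as administrative timer jobs, which can be flushed by hand before checking the gallery again:

        REM Force pending administrative jobs (including web part pack
        REM deployments) to run, then restart IIS so the gallery refreshes.
        STSADM.EXE -o execadmsvcjobs
        iisreset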

    Read the article

  • Mirroring a Linux server to an external USB hard drive

    - by DuPie
    My google-fu must be failing; I haven't been able to find a good solution for the following:

    - numerous Linux servers on commodity hardware
    - trying to do a recovery mirror copy to external hard drives
    - external hard drives are smaller than the source hard drives, but larger than the data
    - external drives are connected via USB 2 (slow)
    - servers range from 20 GB of data to 400 GB of data
    - servers are remote, so hands-on access is a pain
    - need to copy boot files
    - external drives are currently empty

    Basically, I'm looking for a way to use a ghosting solution from INSIDE a running Linux server to an external hard drive, without booting a CD etc. The rsync/cpio solutions I've looked at don't work well with grub, /dev, /proc, etc. I understand that since the system isn't offline it won't be a true "mirror" image as files change, but that's OK. Are there any free/commercial products that would work?
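
    A common hedged approach (a sketch under stated assumptions: bash, a USB drive mounted at /mnt/usb that appears as /dev/sdb, and GRUB legacy):

        # Mirror the running system onto the USB drive, skipping the
        # pseudo-filesystems that only exist at runtime.
        rsync -aAXH --delete \
          --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/mnt/*","/media/*"} \
          / /mnt/usb/

        # Make the copy bootable (GRUB legacy syntax; GRUB 2 uses
        # --boot-directory instead, and the device name is an assumption):
        grub-install --root-directory=/mnt/usb /dev/sdb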

    Read the article

  • Load a 6 MB binary file in a SQL Server 2005 VARBINARY(MAX) column using ADO/VC++?

    - by Feroz Khan
    How do I load a binary file (.bin) of size 6 MB into a varbinary(MAX) column of a SQL Server 2005 database using ADO in a VC++ application? This is the code I am using to load the file, which I previously used to load a .bmp file:

        BOOL CSaveView::PutECGInDB(CString strFilePath, FieldPtr pFileData)
        {
            // Open the file
            CFile fileImage;
            CFileStatus fileStatus;
            fileImage.Open(strFilePath, CFile::modeRead);
            fileImage.GetStatus(fileStatus);

            // Allocate memory for the data
            ULONG nBytes = (ULONG)fileStatus.m_size;
            HGLOBAL hGlobal = GlobalAlloc(GPTR, nBytes);
            LPVOID lpData = GlobalLock(hGlobal);

            // Read the file into the buffer
            fileImage.Read(lpData, nBytes);

            HRESULT hr;
            _variant_t varChunk;
            long lngOffset = 0;
            UCHAR chData;
            SAFEARRAY FAR *psa = NULL;
            SAFEARRAYBOUND rgsabound[1];
            try
            {
                // Create a safe array to store the bytes
                rgsabound[0].lLbound = 0;
                rgsabound[0].cElements = nBytes;
                psa = SafeArrayCreate(VT_UI1, 1, rgsabound);
                while (lngOffset < (long)nBytes)
                {
                    chData = ((UCHAR*)lpData)[lngOffset];
                    hr = SafeArrayPutElement(psa, &lngOffset, &chData);
                    if (hr != S_OK)
                    {
                        return false;
                    }
                    lngOffset++;
                }
                lngOffset = 0;

                // Assign the safe array to a variant and append it to the field
                varChunk.vt = VT_ARRAY | VT_UI1;
                varChunk.parray = psa;
                hr = pFileData->AppendChunk(varChunk);
                if (hr != S_OK)
                {
                    return false;
                }
            }
            catch (_com_error &e)
            {
                // Get info from _com_error
                _bstr_t bstrSource(e.Source());
                _bstr_t bstrDescription(e.Description());
                _bstr_t bstrErrorMessage(e.ErrorMessage());
                _bstr_t bstrErrorCode(e.Error());
                TRACE("Exception thrown for classes generated by #import");
                TRACE("\tCode = %08lx\n", (LPCSTR)bstrErrorCode);
                TRACE("\tCode Meaning = %s\n", (LPCSTR)bstrErrorMessage);
                TRACE("\tSource = %s\n", (LPCSTR)bstrSource);
                TRACE("\tDescription = %s\n", (LPCSTR)bstrDescription);
            }
            catch (...)
            {
                TRACE("*** Unhandled exception ***");
            }

            // Free memory. Note: GlobalUnlock takes the handle, not the locked
            // pointer (the original passed lpData here), and the allocation
            // should be released once the chunk has been appended.
            GlobalUnlock(hGlobal);
            GlobalFree(hGlobal);
            return true;
        }

    But when I read the same file back using the GetChunk function it gives me all 0s, although the size of the file I get is the same as the one uploaded. Your help will be highly appreciated.

    Read the article

  • IIS site not resolved by DNS server

    - by Alexio
    I have a problem with my network. My config:

    - SVR01 (WS2008R2): DNS server and domain controller
    - SVR02 (WS2008R2): IIS (v7.5)
    - domain clients

    On SVR02 there are many sites. When I try to connect to one site (such as demo1.myhomesite.intra) the browser says "Internet Explorer cannot display the webpage". In this case I found a workaround: unplug and replug the LAN cable (or disable and re-enable Wi-Fi) on my client, and the site works correctly. But this workaround is not a solution. Please, can you help me?

    Read the article

  • What's port 1283?

    - by kbluck
    I see a lot of connection attempts to 1283/tcp on my firewall from a client computer to a Windows Server 2008 Domain Controller. What exactly is this traffic? Something to do with NetBIOS, perhaps?

    Read the article

  • Public Folder not Visible from Front-End Exchange 2003 Server

    - by Kyle Brandt
    I have a public folder that does not receive emails when they are sent via the front-end Exchange servers. When I go into the System Manager on the front-end I don't see this particular public folder listed under Public Folders; however, I do see it listed on the back-end server. I see the emails that are not making it to the public folder sitting in the local delivery queue on the front-end server, in a retry state. Does anyone know how I might troubleshoot this?

    Read the article

  • Directly printing to remote CUPS/IPP server on Snow Leopard

    - by Martin v. Löwis
    I need to use Kerberos authentication when printing from my OS X machine; however, the machine itself does not have a service account in Active Directory, so the KDC will not issue a delegation ticket for the local CUPS installation. I think printing could work if the printing framework printed directly to the network CUPS server (or even to the Windows print server), bypassing the local CUPS. Is it possible to set up printing so that it directly accesses the remote print server? (Asking for a service ticket for that server would succeed.)
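
    A hedged sketch of registering a queue that sends jobs straight to the remote server (queue name and hostname are placeholders; the local cupsd still spools, but jobs go directly to the remote queue, and whether Kerberos negotiation then succeeds depends on the server):

        # Register a printer whose device URI points at the remote
        # CUPS/IPP server, bypassing any local queue-side processing.
        lpadmin -p remotequeue -E -v ipp://printserver.example.com/printers/queue

        # Request Kerberos (Negotiate) authentication for that destination:
        lpadmin -p remotequeue -o auth-info-required=negotiate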

    Read the article

  • hMailServer: Secure SMTP Setup - Trusted Cert Issue

    - by Peter
    I'm trying to configure hMailServer with a 3rd-party SSL cert. I've 1) installed the SSL key & cert, and 2) placed the hash-named CA and intermediate certs into the \externals\cs folder. Now the connection between the mail client and the server is secure and works. The issue is that mail clients (Outlook, Apple Mail, others) issue an untrusted cert warning. I've followed several threads on the forums, but none seem to solve this problem.
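
    A hedged way to check what chain the server actually presents (hostname and ports are placeholders; an untrusted-cert warning usually means the served chain is missing the intermediate):

        # Show every certificate the SMTP server presents during the TLS
        # handshake; the intermediate should appear after the server cert.
        openssl s_client -connect mail.example.com:465 -showcerts

        # For STARTTLS on the submission port instead:
        openssl s_client -connect mail.example.com:587 -starttls smtp -showcerts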

    Read the article
