Search Results

Search found 93962 results on 3759 pages for 'server configuration'.

Page 294/3759 | < Previous Page | 290 291 292 293 294 295 296 297 298 299 300 301  | Next Page >

  • OpenVAS - Microsoft RDP Server Private Key Information Disclosure Vulnerability - false alarm?

    - by huebkov
    I performed an OpenVAS scan on a Windows Server 2008 R2 machine and got a report of a high-threat-level vulnerability called Microsoft RDP Server Private Key Information Disclosure Vulnerability. A remote attacker could perform a man-in-the-middle attack to gain access to an RDP session. The affected software is Microsoft RDP 5.2 and below. My server uses RDP 7.1; is this alarm a false alarm? The security advisory pages say: solution status unpatched, no remedy...
    References:
        http://secunia.com/advisories/15605/
        http://xforce.iss.net/xforce/xfdb/21954/
        http://www.oxid.it/downloads/rdp-gbu.pdf
    CVE: CVE-2005-1794, BID: 13818

    Read the article

  • Windows Server 2008 without telnet client - how to test connecting to remote ports without installing anything new?

    - by S. Cobbs
    I'm looking to see if anyone knows of slick tricks to test connections to remote server ports from Windows Server 2008 and variants that don't include the telnet client installed by default. The reason being, I sometimes have clients who want to connect to, for example, port 25 on a remote server and say they can't. I used to run a quick test with "telnet mailserver.tld 25" or similar to see if I could get a response on that port. I don't want to have to install the telnet client just to test this if I don't have to; are there any other native Windows utilities that will let me connect to a remote port?
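    One possibility, sketched below under the assumption that PowerShell is already present on the box (it is installed by default on Server 2008 R2 and is an optional feature on Server 2008), is to use the .NET TcpClient class; the host name and port are placeholders taken from the example above.

        # Minimal PowerShell sketch of a TCP port test without the telnet client.
        $client = New-Object System.Net.Sockets.TcpClient
        $client.Connect("mailserver.tld", 25)   # throws a SocketException if the port is unreachable
        $client.Connected                       # prints True when the TCP handshake succeeded
        $client.Close()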

    Read the article

  • How much does SQL Server Web Edition cost? + other related questions

    - by Goma
    Hello. I visited Microsoft's pricing page for SQL Server, but it was not that clear to me. I want to know the exact cost of SQL Server Web Edition. Furthermore, I would like to know how I can get it if I am on VPS hosting: should I install it myself, or will the host install it for me? And finally, is there a web host that provides SQL Server Web Edition so that I pay them directly as part of the hosting package?

    Read the article

  • HAProxy overload protection

    - by user2050516
    Using HAProxy, would it be possible to configure overload protection that limits the number of requests sent to the backend HTTP server(s) to a given rate (e.g. 100 requests per second)? If the threshold is exceeded, requests should be answered with a default response. I am interested in requests per second, not connections per second, since a connection can carry many requests. And yes, improving the servers is not an option here. If this is possible, a configuration example to achieve it would be excellent. Thank you in advance.
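    A minimal sketch of global request-rate limiting with a stick-table, assuming HAProxy 1.6 or newer (the int() fetch used for tracking needs it); section, backend, and server names are placeholders, and the deny returns HAProxy's stock 403 page unless an errorfile directive supplies a custom response:

        frontend http_in
            bind *:80
            # single-entry table tracking the total HTTP request rate over the last second
            stick-table type integer size 1 expire 10s store http_req_rate(1s)
            http-request track-sc0 int(1)
            # refuse further requests once the global rate exceeds 100 requests/second
            http-request deny if { sc_http_req_rate(0) gt 100 }
            default_backend webservers

        backend webservers
            server web1 192.0.2.10:80 maxconn 50 check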

    Read the article

  • How do I reduce RAM usage on my server?

    - by Abs
    I have recently launched a site that is very popular, but I am having trouble with scalability. My site makes heavy use of FFmpeg, and at peak times RAM usage quickly hits the 2 GB mark and the swap file starts getting used. CPU usage starts rising too. Users complain that the site is slow; all FFmpeg instances run very slowly because of the number running at the same time. Users make use of FFmpeg on my server in real time. Is there anything I can consider or do to reduce the load on the server and stop RAM usage from shooting up? Maybe there is something better than FFmpeg (!). Is the only solution "throwing some cash" at a more powerful server? I have given little information; please ask for more so this problem can be solved.

    Read the article

  • Is it necessary for a Windows Server 2008 R2 to join a domain so that its IIS can communicate correctly?

    - by Jack
    I have a Windows Server 2008 R2 machine that is not joined to any domain. I have developed a web application that displays the domain name and the username on the server itself. However, when I publish the web application to IIS, it always fails and displays different types of error messages depending on the settings I change (enabling ASP.NET impersonation, disabling anonymous authentication, setting the application pool to Classic, and so on). So I was wondering: is it necessary for the server to join a domain, so that I can eliminate the unnecessary error messages and narrow in on the correct direction?

    Read the article

  • HAProxy, configure for one host

    - by Michal K.
    I have to use HAProxy on one machine. I want to redirect requests from an IP to the same IP (on another port). My configuration (which doesn't work):

        global
            maxconn 4096        # Total Max Connections. This is dependent on ulimit
            daemon
            nbproc 1            # Number of processing cores. Dual Dual-core Opteron is 4 cores for example.

        defaults
            mode http
            clitimeout 600000000
            srvtimeout 600000000
            contimeout 400000000
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            option httpclose    # Disable Keepalive

        listen http_proxy 127.0.0.1:8080
            balance leastconn   # Load Balancing algorithm
            acl acl_apache path_end .avi .jpeg
            #option httpchk
            option forwardfor   # This sets X-Forwarded-For
            ## Define your servers to balance
            server DE2 127.0.0.1:8080 weight 1 maxconn 15 check
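    Note that in the configuration above the listen address and the backend server address are identical (127.0.0.1:8080), so the proxy would forward traffic to itself. A minimal sketch of the intended setup, assuming the frontend should accept connections on port 80 and hand them to the application on port 8080 of the same IP (the address is a placeholder):

        listen http_proxy
            bind 192.0.2.10:80
            mode http
            balance leastconn
            option forwardfor
            server local_app 192.0.2.10:8080 weight 1 maxconn 15 check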

    Read the article

  • Strange memory usage pattern on Windows Server 2008 on login through Remote Desktop

    - by headsling
    I'm running Windows Server 2008 Datacenter Service Pack 2 on a VMware instance with 10 GB of RAM allocated. I'm not running IIS or SQL Server. Under 'normal' conditions, the machine uses ~5.5 GB of memory. However, when I log in to the server through Remote Desktop, memory usage slowly climbs to 9.8 GB in use. After several minutes the memory slowly creeps back down to the 5.5 GB mark. I've tried killing all the processes associated with my login (except Task Manager) as soon as I log in, without success, and I can't see any single process growing in memory usage while the total is increasing. I'm assuming this is some system-level cache that is growing and shrinking... but why is it doing this?

    Read the article

  • I require a server hosting package that would be suitable for several .NET managed applications, accessed only by me

    - by user67166
    Hello, I currently run a server at home consisting of SQL Server 2008, .NET Framework 2010, a VPN connection, and ASP.NET web services, running around 5-6 applications supporting a financial trading system that I regularly use. The only user is me. Recently, the requirement to have these applications running in a 24/7, 100% uptime (or 99%) environment has become important. I can no longer both meet this requirement and host my server at home on my network, so I am looking to move to a dedicated hosting company. After some research, the only real companies I can find offering such services are geared towards company web-space hosting. I don't need 1 TB+ of bandwidth; what I need is CPU, memory, and as much control over the environment as possible. Does anyone have any examples of such a service? Thanks in advance.

    Read the article

  • Is it possible to extend the AD schema in a Win2003 DC Server (NOT R2) to support DFSR?

    - by JohannesH
    We're in the process of installing a brand new Windows Server 2008 web cluster, and we would like to synchronize some files between the servers. The problem is that the DC in the domain is an old Windows Server 2003 Standard (NOT R2), which apparently doesn't contain a required extension to the AD schema. Is it possible to upgrade the schema without upgrading the DC servers to R2? When I try to create a Replication Group on the 2008 server I get the following message:

        ---------------------------
        Error
        ---------------------------
        srv.XXXXXX.XX: The Active Directory Domain Services schema on domain controller activedc07.srv.XXXXXX.XX cannot be read. This error might be caused by a schema that has not been extended, or was extended improperly. See Help and Support Center for information about extending the Active Directory Domain Services schema.
        Schema version 30 is not supported.
        ---------------------------
        OK
        ---------------------------
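    One commonly cited approach, offered here as an assumption rather than a verified procedure: the schema can be extended to version 31 (the level DFSR expects) by running adprep from the Windows Server 2003 R2 installation media against the existing forest, without upgrading the DC's operating system. Roughly:

        :: Run on the schema master with Schema Admins / Enterprise Admins rights;
        :: D: is a placeholder for the Windows Server 2003 R2 disc 2 media.
        D:\CMPNENTS\R2\ADPREP\adprep.exe /forestprep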

    Read the article

  • What could slow Excel on one PC but not another?

    - by zrz
    I have two PCs with the same configuration. I open an Excel file (~5 MB) over the network from both PCs. Opening it is not the fastest, but that's OK. The problem is that on one PC, Excel is really slow. For example, if I hit the left arrow 10 times, I finish pressing the key about 3 seconds before the active cell reaches the tenth cell over. The file contains graphics that take time to initialize on the slow computer. Both PCs have the same graphics card and the same driver version; both access the file remotely over the local network. Both are configured to perform calculations automatically. Both run Excel 2007. Both run 32-bit Windows. On the other PC it runs really fast. I really don't know what to check next. Any suggestions?

    Read the article

  • WPF Visual Studio Package gives error: Could not find endpoint element with name 'WCFname' and contr

    - by Andrei
    Hi everybody. This error has been covered before in other questions, but not for a Visual Studio package:

        Could not find endpoint element with name 'WCFname' and contract 'WCFcontract' in the ServiceModel client configuration section.

    I have a VS package project that needs to connect to a WCF service that provides some functionality. I add a reference to the WCF service, and Visual Studio automatically creates the content of the configuration file.

    Config file:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.serviceModel>
            <bindings>
              <wsHttpBinding>
                <binding name="WSHttpBinding_IWCFSearchService" closeTimeout="00:01:00"
                         openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"
                         bypassProxyOnLocal="false" transactionFlow="false"
                         hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288"
                         maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8"
                         useDefaultWebProxy="true" allowCookies="false">
                  <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                                maxBytesPerRead="4096" maxNameTableCharCount="16384" />
                  <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" />
                  <security mode="Message">
                    <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" />
                    <message clientCredentialType="Windows" negotiateServiceCredential="true"
                             algorithmSuite="Default" />
                  </security>
                </binding>
              </wsHttpBinding>
            </bindings>
            <client>
              <endpoint address="http://localhost:8732/Design_Time_Addresses/WCFSearchServiceLibrary/Service1/"
                        binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_IWCFSearchService"
                        contract="WCFSearchServiceReference.IWCFSearchService"
                        name="WSHttpBinding_IWCFSearchService">
                <identity>
                  <dns value="localhost" />
                </identity>
              </endpoint>
            </client>
          </system.serviceModel>
        </configuration>

    However, when I run the application (in VS experimental mode) it doesn't seem to pick up the provided configuration file (app.config). Every time, it just throws this error:

        Could not find endpoint element with name 'WSHttpBinding_IWCFSearchService' and contract 'WCFSearchServiceReference.IWCFSearchService' in the ServiceModel client configuration section. This might be because no configuration file was found for your application, or because no endpoint element matching this name could be found in the client element.

    My guess is that it's using the configuration file for Visual Studio (since it is running in VS experimental mode). So, why isn't it recognizing the app.config file, and how can I make the application recognize it? Any help would be very welcome, as I have already spent some time trying to fix this. Thanks.
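    One direction worth noting, offered as an assumption rather than a confirmed fix for this package: because the experimental instance runs under devenv.exe, the package's own app.config is never the application configuration file, so the endpoint can be built in code instead of configuration. A rough C# sketch, where the client class name WCFSearchServiceClient is a hypothetical name following the usual service-reference convention:

        // Binding values mirror the generated configuration above
        // (wsHttpBinding, message security with Windows credentials).
        var binding = new System.ServiceModel.WSHttpBinding();
        binding.Security.Mode = System.ServiceModel.SecurityMode.Message;
        binding.Security.Message.ClientCredentialType =
            System.ServiceModel.MessageCredentialType.Windows;

        // Address and DNS identity taken from the <endpoint> element above.
        var address = new System.ServiceModel.EndpointAddress(
            new Uri("http://localhost:8732/Design_Time_Addresses/WCFSearchServiceLibrary/Service1/"),
            System.ServiceModel.EndpointIdentity.CreateDnsIdentity("localhost"));

        // Hypothetical generated proxy type; substitute the real class name.
        var client = new WCFSearchServiceReference.WCFSearchServiceClient(binding, address);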

    Read the article

  • Why Do I See the "In Recovery" Msg, and How Can I Prevent it?

    - by John Hansen
    The project I'm working on creates a local copy of the SQL Server database for each SVN branch you work on. We're running SQL Server 2008 Express with Advanced Services on our local machine to host it. When we create a new branch, the build script creates a new database with the ID of that branch, creates the schema objects, and copies over a selection of data from the production shadow server. After the database is created, it, or other databases on the local machine, will often go into "In Recovery" mode for several minutes. After several refreshes it comes up and is happy, but it will occasionally go back into "In Recovery" mode. The database is created in simple recovery mode. The file names aren't specified, so it uses the default paths for files. The size of the database after loading data is ~400 MB. It is running in SQL Server 2005 compatibility mode. The command that creates the database is:

        sqlcmd -S $(DBServer) -Q "IF NOT EXISTS (SELECT [name] FROM sysdatabases WHERE [name] = '$(DBName)') BEGIN CREATE DATABASE [$(DBName)]; print 'Created $(DBName)'; END"

    ...where $(DBName) and $(DBServer) are MSBuild parameters. I got a nice clean log file this morning. When I turned on my computer it started all five databases. However, two of them show transactions being rolled forward and backward. Then it just keeps trying to start up all five of the databases.

        2010-06-10 08:24:59.74 spid52 Starting up database 'ASPState'.
        2010-06-10 08:24:59.82 spid52 Starting up database 'CommunityLibrary'.
        2010-06-10 08:25:03.97 spid52 Starting up database 'DLG-R8441'.
        2010-06-10 08:25:05.07 spid52 2 transactions rolled forward in database 'DLG-R8441' (6). This is an informational message only. No user action is required.
        2010-06-10 08:25:05.14 spid52 0 transactions rolled back in database 'DLG-R8441' (6). This is an informational message only. No user action is required.
        2010-06-10 08:25:05.14 spid52 Recovery is writing a checkpoint in database 'DLG-R8441' (6). This is an informational message only. No user action is required.
        2010-06-10 08:25:11.23 spid52 Starting up database 'DLG-R8979'.
        2010-06-10 08:25:12.31 spid36s Starting up database 'DLG-R8441'.
        2010-06-10 08:25:13.17 spid52 2 transactions rolled forward in database 'DLG-R8979' (9). This is an informational message only. No user action is required.
        2010-06-10 08:25:13.22 spid52 0 transactions rolled back in database 'DLG-R8979' (9). This is an informational message only. No user action is required.
        2010-06-10 08:25:13.22 spid52 Recovery is writing a checkpoint in database 'DLG-R8979' (9). This is an informational message only. No user action is required.
        2010-06-10 08:25:18.43 spid52 Starting up database 'Rls QA'.
        2010-06-10 08:25:19.13 spid46s Starting up database 'DLG-R8979'.
        2010-06-10 08:25:23.29 spid36s Starting up database 'DLG-R8441'.
        2010-06-10 08:25:27.91 spid52 Starting up database 'ASPState'.
        2010-06-10 08:25:29.80 spid41s Starting up database 'DLG-R8979'.
        2010-06-10 08:25:31.22 spid52 Starting up database 'Rls QA'.

    In this case it kept trying to start the databases continuously until I shut down SQL Server at 08:48:19.72, 23 minutes later. Meanwhile, I am actually able to use the databases much of the time.
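    One guess worth checking (an assumption, not a confirmed diagnosis): repeated "Starting up database" messages are the classic sign of AUTO_CLOSE being ON, which is the default for databases created on Express editions; the database shuts down after the last connection closes and has to run recovery again on the next use. A short T-SQL sketch, with the database name taken from the log above:

        -- List databases that have AUTO_CLOSE enabled.
        SELECT name, is_auto_close_on FROM sys.databases;

        -- Turn it off for one of the branch databases.
        ALTER DATABASE [DLG-R8441] SET AUTO_CLOSE OFF;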

    Read the article

  • Configuring ASP.NET MVC2 on Apache 2.2 using mod_aspdotnet

    - by user40684
    I'm trying to get an MVC2 website to run on an Apache 2.2 web server (running on Windows) that uses the mod_aspdotnet module. I have several ASP.NET virtual hosts running and am trying to add another. MVC2 has NO default page (unlike the first version of MVC, which had e.g. default.aspx). I have tried various changes to the config: commented out 'DirectoryIndex', changed it to '/', and set 'AspNet' to 'Virtual'; the first page will not load, and I always get: '403 Forbidden, You don't have permission to access / on this server.' Below is the relevant part of my httpd.conf:

        LoadModule aspdotnet_module "modules/mod_aspdotnet.so"
        AddHandler asp.net asax ascx ashx asmx aspx axd config cs csproj licx rem resources resx soap vb vbproj vsdisco webinfo

        <IfModule aspdotnet_module>
            # Mount the ASP.NET /asp application
            #AspNetMount /MyWebSiteName "D:/ApacheNET/MyWebSiteName.com"
            Alias /MyWebSiteName "D:/ApacheNET/MyWebSiteName.com"

            <VirtualHost *:80>
                DocumentRoot "D:/ApacheNET/MyWebSiteName.com"
                ServerName www.MyWebSiteName.com
                ServerAlias MyWebSiteName.com
                AspNetMount / "D:/ApacheNET/MyWebSiteName.com"
                # Other directives here

                <Directory "D:/ApacheNET/MyWebSiteName.com">
                    Options FollowSymlinks ExecCGI
                    AspNet All
                    #AspNet Virtual Files Directory
                    Order allow,deny
                    Allow from all
                    DirectoryIndex default.aspx index.aspx index.html   # default the index page to .htm and .aspx
                </Directory>
            </VirtualHost>

            # For all virtual ASP.NET webs, we need the aspnet_client files
            # to serve the client-side helper scripts.
            AliasMatch /aspnet_client/system_web/(\d+)_(\d+)_(\d+)_(\d+)/(.*) "C:/Windows/Microsoft.NET/Framework/v$1.$2.$3/ASP.NETClientFiles/$4"

            <Directory "C:/Windows/Microsoft.NET/Framework/v*/ASP.NETClientFiles">
                Options FollowSymlinks
                Order allow,deny
                Allow from all
            </Directory>
        </IfModule>

    Has anyone successfully run MVC2 (or the first version of MVC) on Apache with the mod_aspdotnet module? Thanks!

    Read the article

  • Server Admin is not allowing me to configure DNS

    - by Clinton Blackmore
    We have a Mac OS X 10.5.8 Server running DNS (and a few other services). When I connect to it (using Server Admin 10.5.3 [which comes from the Server Admin 10.5.7 tools]) and click to look at the DNS settings, all appears normal: it shows many reverse entries and two top-level domains. However, when I select one of our domains and open the disclosure triangle, the list is empty! [There should be over a dozen entries, and the reverse entries do show up.] If I then tell it I want to add, say, an A record to the domain, almost everything disappears, and I am left with a list showing our two domains, one with a disclosure triangle underneath it showing a single entry, and one reverse entry to correspond to the new A record.

    named appears to be working fine. DNS names resolve. It appears to simply be that Server Admin is having problems with the data on the computer. No one here would have manually created a DNS entry.

    Now, while I think I've backed up the DNS (I backed up /var/named/, /etc/named.conf, and /etc/dns/, as mentioned here), I'm really not sure if just replacing the files would restore the DNS settings we have if things go south. I am contemplating going to settings and changing the log level from "Information" to "Debug", but 1) I am just a little concerned that it might write a bad configuration to the disk, and 2) I think it would only affect named and not Server Admin, and, so far as I can tell, named is not having a problem. (Nothing looks strange in /Library/Logs/named.log when I open it via Console/Terminal. Oddly, though, when I click on the 'log' button for DNS in Server Admin, I see no text at all, just a fully white window. When I look at one of our secondary DNS servers, I am able to see the log file through Server Admin.)

    This entry appears in the system log when I run Server Admin on the server:

        Jun 17 09:02:08 od1 Server Admin[3892]: Unexpected call to doMarkConfigurationAsDirty by 'DNS' plugin during updateConfigurationViewFromDescription

    It seems to occur after I've looked at DNS, looked at another service, and then clicked back on DNS. Thinking that the most likely cause is a corrupt configuration file, I glanced through all the files that I backed up, and none of them is obviously gobbledygook.

    Here are some oddities I find when running Server Admin from a remote computer to manage the DNS. When I click to see the log file for DNS, the server starts writing messages like these to its system.log:

        Jun 17 09:59:04 od1 kernel[0]: Limiting open port RST response from 252 to 250 packets per second
        Jun 17 09:59:06 od1 kernel[0]: Limiting open port RST response from 258 to 250 packets per second

    This stops when I click on a different service. The indeterminate progress indicator (the spinning wheel that appears beside the "Revert" and "Save" buttons in the bottom-right corner of Server Admin) looks really strange. As far as I can tell, instead of just spinning and waiting, it is being told to start spinning repeatedly, resulting in a jerky animation.

    Here are some of the messages being logged on the computer running Server Admin.

    At startup:

        *** ERROR: -[GRAxes computeLayout]:1124 - plotRect height = 0.000000 <= 0.0 ***
        *** ERROR: -[GRChartView computeLayout]:1194 - Layout for overlay axes (0x18758f50) failed. ***

    (These messages don't concern me too much, as they go away for a while if you delete ~/Library/Preferences/com.apple.ServerAdmin.plist.)

    At shutdown:

        2010-06-17 10:02:17.202 Server Admin[7770:10b] *** -[GroupTextField windowDidResignKey:]: unrecognized selector sent to instance 0x16e12490

    More concerning are these messages:

        2010-06-17 09:59:47.269 Server Admin[7770:10b] Unexpected call to doMarkConfigurationAsDirty by 'DNS' plugin during updateConfigurationViewFromDescription
        Server Admin(7770,0xb0453000) malloc: *** error for object 0x1c115390: double free
        *** set a breakpoint in malloc_error_break to debug
        2010-06-17 10:01:00.795 Server Admin[7770:10b] *** -[ServiceEntry sessionHost]: unrecognized selector sent to instance 0x2af500

    Any thoughts on what the problem is, how I can troubleshoot it, or how to fix it? If I do need to wipe out DNS and restart, is there a good way to do this?
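    Since a corrupt configuration file is the working theory, one low-risk check (a sketch; the zone file name is a placeholder, and the actual zone files live under /var/named/) is to ask BIND itself what it thinks of the files before touching log levels:

        named-checkconf /etc/named.conf
        named-checkzone example.com /var/named/example.com.zone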

    Read the article

  • Where to place Nginx IP blacklist config file?

    - by ProfessionalAmateur
    I have an Nginx web server hosting two sites. I created a blockips.conf file to blacklist IP addresses that are constantly probing the server and included this file in nginx.conf. However, in my access logs for the sites I still see these IP addresses showing up. Do I need to include the blacklist in each site's conf instead of the global conf for Nginx?

    Here is my nginx.conf:

        user nginx;
        worker_processes 1;

        error_log /var/log/nginx/error.log warn;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';

            access_log /var/log/nginx/access.log main;

            sendfile on;
            keepalive_timeout 65;

            include /etc/nginx/conf.d/*.conf;

            # Load virtual host configuration files.
            include /etc/nginx/sites-enabled/*;

            # BLOCK SPAMMERS IP ADDRESSES
            include /etc/nginx/conf.d/blockips.conf;
        }

    blockips.conf:

        deny 58.218.199.250;

    access.log still shows this IP address:

        58.218.199.250 - - [27/Sep/2012:06:41:03 -0600] "GET http://59.53.91.9/proxy/judge.php HTTP/1.1" 403 570 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)" "-"

    What am I doing incorrectly?
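    One observation: the access log line above already shows a 403 response, which is what deny produces at the http level, so the block appears to be applied; denied requests are still written to the access log. If the goal is to refuse these clients without even sending a response, a sketch of an alternative using the geo module (an assumption about the desired behavior, not a required change), with the rule made explicit per server block:

        # In the http {} context:
        geo $blocked {
            default        0;
            58.218.199.250 1;    # IP taken from the question
        }

        # In each server {} block:
        if ($blocked) {
            return 444;          # nginx closes the connection without a response
        }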

    Read the article

  • PHPMyAdmin: "General relation features: Disabled"

    - by Simón
    I've been looking around for something like this for a while, and I've found some tips on similar issues, but not exactly the same. I really don't know what to do. I downloaded and installed WAMP, and I have a MySQL and PHPMyAdmin setup configured according to the common guidance that can be found everywhere (securing the MySQL root account, etc.). When I log into PHPMyAdmin (either as root or as pma), I see the following message at the bottom of the page:

        The additional features for working with linked tables have been deactivated. To find out why click here.

    When I follow the link, I get a page with the following:

        Server: localhost
        $cfg['Servers'][$i]['pmadb'] ... OK
        $cfg['Servers'][$i]['relation'] ... OK
        General relation features: Disabled
        $cfg['Servers'][$i]['table_info'] ... OK
        Display Features: Disabled
        $cfg['Servers'][$i]['table_coords'] ... OK
        $cfg['Servers'][$i]['pdf_pages'] ... OK
        Creation of PDFs: Disabled
        $cfg['Servers'][$i]['column_info'] ... OK
        Displaying Column Comments: Disabled
        Bookmarked SQL query: Disabled
        Browser transformation: Disabled
        $cfg['Servers'][$i]['history'] ... OK
        SQL history: Disabled
        $cfg['Servers'][$i]['designer_coords'] ... OK
        Designer: Disabled

    Can somebody please explain to me why, if all the settings are "OK", the features remain "Disabled"? Note: at first all the settings were "not OK"; I managed to add the settings to config.inc.php and then created the tables using scripts/create_tables.php. Of course I have already tried restarting the server and clearing the browser cache (several times, so I am sure the problem comes from elsewhere).
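    A frequent cause of this "OK but Disabled" state in phpMyAdmin of this era, mentioned here as a hedged suggestion rather than a certain diagnosis: the pmadb settings validate, but no control user is configured (or its credentials are wrong), so phpMyAdmin cannot actually read the linked tables; the status page also only refreshes after logging out and back in. A sketch of the relevant config.inc.php block, assuming the default pma_ table names created by scripts/create_tables.php; the control user and password are placeholders:

        $cfg['Servers'][$i]['controluser']     = 'pma';
        $cfg['Servers'][$i]['controlpass']     = 'pmapass';
        $cfg['Servers'][$i]['pmadb']           = 'phpmyadmin';
        $cfg['Servers'][$i]['relation']        = 'pma_relation';
        $cfg['Servers'][$i]['table_info']      = 'pma_table_info';
        $cfg['Servers'][$i]['table_coords']    = 'pma_table_coords';
        $cfg['Servers'][$i]['pdf_pages']       = 'pma_pdf_pages';
        $cfg['Servers'][$i]['column_info']     = 'pma_column_info';
        $cfg['Servers'][$i]['bookmarktable']   = 'pma_bookmark';
        $cfg['Servers'][$i]['history']         = 'pma_history';
        $cfg['Servers'][$i]['designer_coords'] = 'pma_designer_coords';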

    Read the article

  • Apache2 VirtualHosts 403 Oddity

    - by Carson C.
    I'm sure this is something I should already understand, but I'm finding myself confused. The configs in play add up to this:

        NameVirtualHost *:80
        Listen 80

        <VirtualHost *:80>
            <Directory />
                Options FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
            </Directory>
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName domain.tld
            ServerAlias *.domain.tld
            DocumentRoot /var/www/domain.tld
            <Directory /var/www/domain.tld>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    DNS is working correctly. The issue is, every variant of http://*.domain.tld/ (including http://domain.tld/) works correctly, except http://www.domain.tld/, which throws a 403. The logs state:

        client denied by server configuration: /etc/apache2/htdocs

    If I remove the first VirtualHost block from play, everything works as expected, including http://www.domain.tld. This leads me to believe that for some reason Apache is not considering www.domain.tld to match the second VirtualHost block, and is thereby falling back to deny all. This seems wrong. Shouldn't the second block match www.domain.tld? I've been able to resolve this, but I still don't understand why. In my original configs, I was using the real IP address of the server instead of *. Switching all instances to * as shown above made everything work as expected. Does this have something to do with the way browsers request resources?
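    When name-based matching behaves unexpectedly like this, one quick check (a suggestion, not part of the original question) is to ask Apache how it parsed the virtual hosts; the output lists the default host for *:80 and every ServerName/ServerAlias it registered, which usually explains why a request fell through to the catch-all block:

        apachectl -S    # on Debian/Ubuntu the wrapper is apache2ctl -S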

    Read the article

  • then an error occurred during the login process - Connection Error 233

    - by scott brunner
    We have SQL Server 2008 installed on 64-bit Windows Server 2003. When we try to connect to the local SQL Server using SQL Server Management Studio at the console, we get the error:

        A connection was successfully established with the server, but then an error occurred during the login process. provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.

    When we try TCP from the same local SSMS to the local server, we get the same error, but instead of the pipe message it's something like "connection forcibly closed". Now, here is the strange part: we CAN connect to this SQL Server from any other machine on the network using SSMS, AND we CAN'T connect to ANY SQL Server from the problem server. So it seems the SQL Server instance is fine and accepting remote connections; however, SSMS on that machine will not connect to any SQL Server, even remotely. When we try an ADO.NET connection from C# remotely, we can connect; run that same code on the console of the problem server and we get the same errors. How can this be solved?

    Read the article

  • ConfigMgr 2012 - How to automatically make updates available to computers without forcing them to be installed?

    - by Massimo
    I'm using System Center Configuration Manager 2012 with the Software Update Point feature; however, in this environment patching has to be strictly manual, because server reboots need to be approved and scheduled by different people. Thus, I need to use ConfigMgr's SUP the way I would use a plain WSUS server with auto-approval but manual installation. I created some Automatic Deployment Rules to automatically download and deploy critical updates, with an installation deadline of "as soon as possible"; but I've also configured those rules to not do anything when the deadline is reached, and to not perform system restarts even if needed (see image). I've also configured the device collection those rules deploy updates to so that it has no valid maintenance window. However, I'm experiencing quite the opposite of what I was expecting: as soon as new updates are processed by the ADRs, they get automatically installed on all systems by the Software Center, and the computers are subsequently restarted. Why is this happening? Am I getting something wrong, or is ConfigMgr 2012 just not behaving as it should?

    Read the article

  • How can I prevent a DDOS attack on Amazon EC2?

    - by cwd
    One of the servers I use is hosted on the Amazon EC2 cloud. Every few months we appear to have a DDoS attack on this server. This slows the server down incredibly. After around 30 minutes, and sometimes a reboot later, everything is back to normal. Amazon has security groups and a firewall, but what else should I have in place on an EC2 server to mitigate or prevent an attack? From similar questions I've learned to:

    - Limit the rate of requests per minute (or second) from a particular IP address via something like iptables (or maybe UFW?); see the sketch below
    - Have enough resources to survive such an attack, or possibly build the web application so it is elastic / has an elastic load balancer and can quickly scale up to meet such high demand
    - If using MySQL, set up MySQL connections so that they run sequentially, so that slow queries won't bog down the system

    What else am I missing? I would love information about specific tools and configuration options (again, using Linux here), and/or anything that is specific to Amazon EC2.

    PS: Notes about monitoring for DDoS would also be welcome - perhaps with Nagios? ;)
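    A minimal iptables sketch of the first item: per-source rate limiting of new connections to port 80 using the recent match. The threshold (20 new connections per second) is illustrative, not tuned for any particular workload, and this limits connections rather than individual HTTP requests:

        # Track each source IP opening new connections to port 80, and drop a
        # source that opens more than 20 new connections within one second.
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW \
                 -m recent --update --seconds 1 --hitcount 20 -j DROP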

    Read the article

  • How to setup bindings for development IIS 7.5 with lot of sites

    - by Antonio Bakula
    I am a programmer in a small ASP.NET shop with very little experience in server administration, and I have to set up IIS 7.5 to host a lot of sites on a newly installed Windows Server 2008 R2. These sites are test "clones" of the sites on the "real" web server and should be accessible only on the local network (domain). Developers should add new sites for our new customers. Project managers use this server to check progress and test new sites and new features; QA people have to have access to these sites and test them before we copy them to the "real" web server. Developers only have access to the IIS console; in fact they can RDP to the test server with their developer domain credentials and permissions, and developers are local admins on that machine (tester).

    On our previous server I used different port numbers for each site. That worked, but I don't like this solution; I would prefer to use subdomains. But here are the problems: manually adding DNS records is not an option, because we do not want developers to have to administer the domain DNS server, and currently this has to be done with domain administrator credentials. Is there some way to add DNS records automatically?

    I tried to add a DNS record for subdomains on the test server with a wildcard (*.tester), and that seemed to work for some time, but the change caused some bad problems in our domain network and the admin forbade me to mess with DNS. He said that I have to add a DNS record for every subdomain manually and that I cannot use wildcards, and there is nothing I can do about it, mainly for "political" reasons :( Obviously our admin is pretty uncooperative, outsourced from a different organization, and I can't do anything about that.

    Can I add another DNS server on that machine? What must be set up on client machines to "tell" them to use both the domain DNS server and the tester DNS server? So please, I need someone to give me some advice: what should I do? Are different port numbers the only option left? Thanks!
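    On the question of adding DNS records automatically, a sketch of what scripted record creation looks like with the dnscmd tool from the Windows DNS Server administration tools; the server, zone, host name, and IP address are placeholders, and the account running the command still needs rights on the DNS server, so this does not remove the permission problem by itself:

        :: Add an A record "newsite" in zone "example.local" on DNS server "dc01"
        dnscmd dc01 /RecordAdd example.local newsite A 192.0.2.50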

    Read the article

  • Strategy to isolate multiple nginx SSL apps with a single domain via sub-URIs?

    - by icpu
    Warning: so far I have only learnt how to use nginx to serve apps with their own domain and server block, but I think it's time to dive a little deeper. To mitigate the need for multiple SSL certificates or expensive wildcard certificates, I would like to serve multiple apps (e.g. Rails apps, PHP apps, Node.js apps) from one nginx server_name, e.g.:

        rooturl/railsapp
        rooturl/nodejsapp
        rooturl/phpshop
        rooturl/phpblog

    I am unsure of the ideal strategy. Some approaches I have seen and/or thought about:

    - Multiple location rules: this seems to cause conflicts between the individual app config requirements, e.g. differing rewrite and access requirements.
    - Isolating apps by backend internal port: is this possible? Each port routing to its own config, so configuration is isolated and can be bespoke to each app's requirements?
    - Reverse proxy: I am a little ignorant of how this works; is this what I need to research? Is it actually the same as the previous option? Help online always seems to proxy to another server, e.g. Apache (see the sketch below).

    What is an effective way to isolate config requirements for apps served from a single domain via sub-URIs?
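    A minimal sketch of the reverse-proxy variant, which is essentially the backend-internal-port idea with nginx terminating SSL for the single domain. The ports, paths, and certificate lines are placeholders, and each app must either be configured to live under its URL prefix or have the prefix stripped, which is what the trailing slash on proxy_pass does here:

        server {
            listen 443 ssl;
            server_name rooturl;
            ssl_certificate     /etc/nginx/ssl/rooturl.crt;
            ssl_certificate_key /etc/nginx/ssl/rooturl.key;

            # Each prefix is forwarded to its own backend on a local port,
            # so every app keeps its own isolated configuration.
            location /railsapp/  { proxy_pass http://127.0.0.1:3000/; }
            location /nodejsapp/ { proxy_pass http://127.0.0.1:3001/; }
            location /phpshop/   { proxy_pass http://127.0.0.1:3002/; }
            location /phpblog/   { proxy_pass http://127.0.0.1:3003/; }
        }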

    Read the article

  • Giving markup support to a custom server control with public PlaceHolder properties

    - by ravinsp
    I have a custom server control with two public PlaceHolder properties exposed to the outside. I can use this control in a page like this:

        <cc1:MyControl ID="MyControl1" runat="server">
            <TitleTemplate>
                Title text and anything else
            </TitleTemplate>
            <ContentTemplate>
                <asp:TextBox ID="TextBox1" runat="server"></asp:TextBox>
                <asp:Button ID="Button1" runat="server" Text="Button" />
            </ContentTemplate>
        </cc1:MyControl>

    TitleTemplate and ContentTemplate are properties of the ASP.NET PlaceHolder type. Everything works fine: the control takes any content given to these custom properties and produces custom HTML output around it. If I want a Button1_Click event handler, I can attach the handler in Page_Load like the following code, and it works:

        protected void Page_Load(object sender, EventArgs e)
        {
            Button1.Click += new EventHandler(Button1_Click);
        }

        void Button1_Click(object sender, EventArgs e)
        {
            TextBox1.Text = "Button1 clicked";
        }

    But if I try to attach the click event handler in the aspx markup, I get an error when running the application ("Compiler Error Message: CS0117: 'ASP.default_aspx' does not contain a definition for 'Button1_Click'"):

        <asp:Button ID="Button1" runat="server" Text="Button" OnClick="Button1_Click" />

    AutoEventWireup is set to "true" in the page markup. This happens only for child controls inside my custom control. I can programmatically access the child controls correctly; the only problem is with event handler assignment from markup. When I select the child Button in markup, the Properties window only detects it as a <BUTTON>, not System.Web.UI.Controls.Button, and it doesn't display the "Events" tab. How can I give markup support for this scenario? Here's the code for the MyControl class if needed. And remember, I'm not using any ITemplate types for this; the custom properties I provide are of type "PlaceHolder".

        [ToolboxData("<{0}:MyControl runat=server>" +
                     "<TitleTemplate></TitleTemplate>" +
                     "<ContentTemplate></ContentTemplate>" +
                     "</{0}:MyControl>")]
        public class MyControl : WebControl
        {
            PlaceHolder contentTemplate, titleTemplate;

            public MyControl()
            {
                contentTemplate = new PlaceHolder();
                titleTemplate = new PlaceHolder();
                Controls.Add(contentTemplate);
                Controls.Add(titleTemplate);
            }

            [Browsable(true)]
            [TemplateContainer(typeof(PlaceHolder))]
            [PersistenceMode(PersistenceMode.InnerProperty)]
            public PlaceHolder TitleTemplate
            {
                get { return titleTemplate; }
            }

            [Browsable(true)]
            [TemplateContainer(typeof(PlaceHolder))]
            [PersistenceMode(PersistenceMode.InnerProperty)]
            public PlaceHolder ContentTemplate
            {
                get { return contentTemplate; }
            }

            protected override void RenderContents(HtmlTextWriter output)
            {
                output.Write("<div>");
                output.Write("<div class=\"title\">");
                titleTemplate.RenderControl(output);
                output.Write("</div>");
                output.Write("<div class=\"content\">");
                contentTemplate.RenderControl(output);
                output.Write("</div>");
                output.Write("</div>");
            }
        }
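    One direction that is commonly suggested for this kind of scenario, offered here as an assumption rather than a verified fix for this exact control: expose the templates as ITemplate properties marked with TemplateInstance.Single, the mechanism that lets the page parser treat controls declared inside a template as members of the page (the way asp:Content and asp:LoginView behave), so markup wiring such as OnClick="Button1_Click" can resolve against the page class. A rough C# sketch of one such property, intended to live inside the control class:

        private ITemplate contentTemplate;

        [Browsable(true)]
        [PersistenceMode(PersistenceMode.InnerProperty)]
        [TemplateInstance(TemplateInstance.Single)]   // controls in the template become page members
        [TemplateContainer(typeof(MyControl))]
        public ITemplate ContentTemplate
        {
            get { return contentTemplate; }
            set { contentTemplate = value; }
        }

        protected override void CreateChildControls()
        {
            Controls.Clear();
            if (contentTemplate != null)
            {
                // Instantiate the template's controls into this control (a fuller
                // version would use a dedicated child container control).
                contentTemplate.InstantiateIn(this);
            }
        }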

    Read the article
