Search Results

Search found 99017 results on 3961 pages for 'server side events'.


  • Win 7 Remote Desktop connection failure when already logged in

    - by Andy E
    I have a bit of a strange problem, magnified recently by my broadband dropouts. I wasn't sure whether to post this on SU or SF, so I thought I'd start here, as more users would be likely to know what the problem is. In short, when I try to connect to my server (Windows Server 2008) from my laptop running Windows 7, I can only connect if my remote account was previously logged out. If I'm still logged in, I get the error message "Windows cannot connect to the remote server" -- no explanation or anything. If my IP address is the same, I don't have this problem. If I boot up Windows XP Mode and run XP's Remote Desktop Connection, it works just fine -- I think the difference is that it takes me to the remote server's logon screen, whereas with Win 7 RDC you never see the logon screen; it asks you for credentials before entering full-screen mode. The real problem is that I'm having random broadband dropouts and my IP isn't static. If I log on via Win XP RDC, log out, and then run Win 7 RDC, it works fine. I realize I can just use Win XP's RDC for now, but I don't really like keeping XP Mode open if I can help it. Does anyone know a way around this problem? Maybe forcing Win 7 RDC to go to the logon screen, or changing some server-side settings to work around the IP address issue?
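
    If the difference really is that the Win 7 client authenticates before connecting (Network Level Authentication) while XP's client does not, one workaround is to disable CredSSP in the saved connection file so the client falls through to the server's logon screen. A minimal sketch -- the file name is just an example, and the assumption that NLA is the culprit is mine, not the poster's. Append this line to the saved connection file (e.g. Default.rdp in My Documents):

        enablecredsspsupport:i:0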

    Read the article

  • Why does IIS refuse to serve ASP.NET content?

    - by Michael Haren
    My Windows Server 2003 Std server refuses to serve ASP.NET content. It serves regular HTML just fine, but anything .NET -- even a one-line HTML file with an .aspx extension -- fails silently. Things I've tried:
    - Nothing appears in the event log or the IIS WWW logs when it fails, and Fiddler shows no response.
    - I reinstalled .NET with C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -U followed by C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -I
    - I gave obscenely high permissions (full control, read, write, etc.) on everything I can think of to all possibly relevant users (IUSR_*, ASP.NET, etc.).
    - I confirmed that the ASP.NET v1 and v2 Web Service Extensions are "Allowed" in IIS.
    - I confirmed that Server Manager has the IIS and ASP.NET roles enabled.
    Again, this is the scenario: http://localhost/Test/Default.htm <-- works great! http://localhost/Test/Default.aspx <-- bombs silently with no message at all. Any guidance will be much appreciated! Solution: I reinstalled per the instructions below and it works now. Thanks all!
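
    For reference, a sketch of the re-registration that resolved it, run from an elevated command prompt and assuming the v2.0.50727 framework directory from the question:

        cd C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727
        aspnet_regiis.exe -i
        iisreset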

    Read the article

  • How to find malicious IPs?

    - by alfish
    Cacti shows an irregular but pretty steady spike in bandwidth to my server (40x the normal), so I guess the server is under some sort of DDoS attack. The incoming traffic has not paralyzed my server, but it consumes bandwidth and affects performance, so I am keen to figure out the likely culprit IPs and add them to my deny list, or otherwise counter them. When I run: netstat -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n I get a long list of IPs with up to 400 connections each. I checked the most frequently occurring IPs, but they come from my CDN. So I am wondering what the best way is to monitor the requests each IP makes, in order to pinpoint the malicious ones. I am using Ubuntu Server. Thanks
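
    Connection counts alone won't show request behaviour; aggregating the web server's access log per client IP usually will. A sketch -- the log path is an assumption, and since traffic arrives via a CDN the real client address may be in the X-Forwarded-For header rather than the first log field:

        # requests per client IP, busiest first
        awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head -20
        # live view of connection counts, refreshed every 5 seconds
        watch -n 5 "netstat -ntu | awk '{print \$5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head"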

    Read the article

  • High load average threshold in Linux

    - by user2481010
    A friend of mine said that his server's load average sometimes goes above 500-1000. That is a strange value to me, because I have never seen a load average of more than 10. I asked him for some snapshots of top and memory usage, and he gave the following details:

    Top:
        top - 06:06:03 up 117 days, 23:02, 2 users, load average: 147.37, 44.57, 15.95
        Tasks: 116 total, 2 running, 113 sleeping, 0 stopped, 1 zombie
        Cpu(s): 16.6%us, 6.9%sy, 0.0%ni, 9.2%id, 66.5%wa, 0.0%hi, 0.8%si, 0.0%st
        Mem: 8161648k total, 7779528k used, 382120k free, 3296k buffers
        Swap: 5242872k total, 1293072k used, 3949800k free, 168660k cached

    Free:
        $ free -gt
                     total  used  free  shared  buffers  cached
        Mem:             7     6     1       0        0       4
        -/+ buffers/cache:     1     5
        Swap:            4     0     4
        Total:          12     6     6

    CPU count:
        $ nproc
        8

    My question: is a load average of more than 100 possible on an 8-core, 12 GB mem server? I have read many tutorials and articles on load average which say the rule of thumb is "number of cores = max load"; according to that rule of thumb, the max load average here is 16, so how is his server running with a load of 147.37? And he said that is on the low side -- it sometimes goes above 500.
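
    One detail worth noting when reading those numbers: on Linux, the load average counts tasks blocked in uninterruptible sleep (usually disk I/O) as well as runnable tasks, so with 66.5% of CPU time in iowait a load far above the core count is plausible; the "cores = max load" rule of thumb only holds for purely CPU-bound load. A quick check, as a sketch using standard procps tools:

        # tasks in uninterruptible sleep (state D) inflate the Linux load average
        ps -eo state,pid,wchan:20,cmd | awk '$1 ~ /^D/'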

    Read the article

  • Use Google Apps/Cloud Services as a Domain Controller Replacement

    - by user124548
    This is a Canonical Question about Cloud Services replacing Active Directory. Is it possible to use Google Apps or another cloud service as a replacement for a Windows Domain Controller (replacing my whole AD infrastructure)? Specifically, I want to remove our dependence on a local Windows server; currently it acts as a Domain Controller with File and Print Services. I'd like to seamlessly replace this server with something based on hosted applications. I do not just want to move the server to a dedicated or colocated server. I have yet to figure out how to piece together printer/etc. sharing. If anyone has any insight into this, it would be appreciated. The goal is to eventually move all my servers to the cloud and then write up a case study on the whole affair.

    Read the article

  • Add second IP address to an existing SQL 2008 failover cluster

    - by Cédric Boivin
    Hello, I currently have a failover cluster on Windows Server 2008 with SQL Server 2008. Each server has two network cards on two different networks: one on 10.10.10.x and the other on 192.168.99.x. I want my SQL Server cluster to listen on both networks. Is this possible, and how do I add the new IP address? When I add a new IP address directly in the cluster and then telnet to port 1433 on the new cluster IP address, it doesn't work. Thanks
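
    A sketch of one way to do this from the Server 2008 command line; the resource, group, network names, and address here are placeholders that have to match the actual cluster:

        REM create and configure a second IP Address resource in the SQL group
        cluster res "SQL IP Address 2" /create /group:"SQL Server (MSSQLSERVER)" /type:"IP Address"
        cluster res "SQL IP Address 2" /priv Address=192.168.99.50 SubnetMask=255.255.255.0 Network="Cluster Network 2"
        cluster res "SQL IP Address 2" /online

    After that, the SQL Network Name resource needs its dependency changed to "IP address 1 OR IP address 2" (and SQL Server restarted) before it will accept connections on the second address, which may explain the telnet failure.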

    Read the article

  • Can't set up Usermin correctly to allow users to login outside of local network, what am I missing?

    - by thecraic
    I'm fairly new at creating a server, but the biggest problem I'm having at the moment is getting Usermin set up to be accessible from outside the LAN. I talked to other people who use it and was told that all I need to do is type url:20000 to access the login screen, but that doesn't work. I have also tried ip:20000, and that doesn't lead anywhere either. Instead I get the error message:

        Error - Bad Request
        This web server is running in SSL mode. Try the URL https://hostname:10000/ instead.

    (where hostname is my server's hostname). I know it must be a configuration issue, but I have checked all my settings, and as far as I can tell I don't have the ports blocked anywhere. I have the correct ports forwarded on my router, and my server's firewall doesn't block the port either. Is there anything I am missing? Any help would be appreciated, and I will add more information upon request. Thank you.
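
    The error text itself is a hint: the server answered, but it expects HTTPS. A quick sanity check against Usermin's own config -- the path below is the usual default and an assumption here:

        # confirm Usermin's port and SSL setting
        grep -E '^(port|ssl)=' /etc/usermin/miniserv.conf
        # if ssl=1, the login page must be requested as https://your-host:20000/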

    Read the article

  • Does nginx auth_basic work over HTTPS?

    - by monde_
    I've been trying to set up a password-protected directory in an SSL website as follows, in /etc/nginx/sites-available/default:

        server {
            listen 443;
            ssl on;
            ssl_certificate /usr/certs/server.crt;
            ssl_certificate_key /usr/certs/server.key;
            server_name server1.example.com;
            root /var/www/example.com/htdocs/;
            index index.html;

            location /secure/ {
                auth_basic "Restricted";
                auth_basic_user_file /var/www/example.com/.htpasswd;
            }
        }

    The problem is that when I try to access the URL https://server1.example.com/secure/, I get a "404: Not Found" error page. My error.log shows the following:

        2011/11/26 03:09:06 [error] 10913#0: *1 no user/password was provided for basic authentication, client: 192.168.0.24, server: server1.example.com, request: "GET /secure/ HTTP/1.1", host: "server1.example.com"

    However, I was able to set up password-protected directories for a normal HTTP virtual host without any problems. Is it a problem with the config or something else?
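
    auth_basic itself is independent of TLS, so it can help to rule the auth layer out by sending credentials up front and inspecting the raw response. A sketch -- -k skips certificate verification for a self-signed cert, and the credentials are placeholders:

        curl -k -i -u myuser:mypass https://server1.example.com/secure/
        # a 404 even with valid credentials points at the root/index lookup, not at auth_basic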

    Read the article

  • In Ubuntu I make changes to php.ini but nothing happens

    - by MrAn3
    Hi, Apache with PHP works well, but none of the changes I make in php.ini have any effect. I've even deleted all the contents of the file, then restarted Apache and run phpinfo(), and surprisingly everything continues working. The file I'm editing is the one that phpinfo() reports as "Loaded Configuration File" (/etc/php5/apache2/php.ini). P.S. I'm running Ubuntu 9.04 and PHP 5.2. Thanks in advance. More details: I'm restarting with sudo /etc/init.d/apache2 restart; I've also tried sudo /etc/init.d/apache2 stop and then start. At restart I get:

        Restarting web server apache2
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        ... waiting
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        [ OK ]

    "which php" did not produce any results. My installation of PHP was done using Synaptic Package Manager, choosing "Mark Packages by task" and then LAMP server. I don't have any clue what to do...
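
    Two things worth checking from the shell, as a sketch with the Ubuntu 9.04 default paths: whether extra .ini files in conf.d are overriding the main file, and whether the running Apache processes actually die on restart:

        # extra ini files parsed after php.ini can override its settings
        ls -l /etc/php5/apache2/conf.d/
        # make sure no stale Apache processes survive the stop
        sudo /etc/init.d/apache2 stop
        ps aux | grep [a]pache2
        sudo /etc/init.d/apache2 start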

    Read the article

  • Apache httpd.conf: handling multiple domains that run the same application

    - by John Stewart
    What we are looking for is the ability to do the following: we have an application that can load certain settings based on the domain it is accessed from. So if you come from xyz.com we show one logo, and if you come from abc.com we show a different logo. The code is the same, running from the same server; it just detects the domain at run time. Now we want to get a dedicated server (any suggestions?) that will let us point all the domains we want at it (we change the DNS for the domains to that of our server), so that when a user goes to any of those domains they run the same application. As far as I can understand, we will need to create a "VirtualHost" in Apache to handle this. Can we create a wildcard VirtualHost that catches all the domains? I am not an expert with Apache at all, so please forgive me if this turns out to be a silly question. Any detailed help would be great. Thanks
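
    For what it's worth, Apache routes any request whose Host header matches no other VirtualHost to the first one defined, so a single catch-all vhost is enough; the application then branches on the Host header it receives. A minimal sketch with placeholder paths (ServerAlias * just makes the catch-all intent explicit):

        <VirtualHost *:80>
            ServerName app.example.com
            ServerAlias *
            DocumentRoot /var/www/app
        </VirtualHost>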

    Read the article

  • Chroot for MySQL running on Ubuntu 10.10?

    - by Calvin Froedge
    Prompted by a question about MySQL server security best practices, I've been running through this list (with a few minor alterations) to properly secure my database server: http://www.greensql.net/publications/mysql-security-best-practices On step 10, I'm told to change the root directory for the mysql user using chroot, but very few specifics are provided and I'm not sure where to start. Does anyone know of a good resource for walking me through the steps to properly create a chrooted environment on Ubuntu 10.10?
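
    One possible starting point: mysqld has a built-in chroot option, which avoids hand-building a jail populated with library copies. A sketch -- the jail path is an example, and every path mysqld uses (datadir, socket, etc.) has to exist inside it:

        # /etc/mysql/my.cnf
        [mysqld]
        # mysqld calls chroot() to this directory at startup
        chroot = /var/lib/mysql-jail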

    Read the article

  • Hyper-V and attaching physical disks [migrated]

    - by Mike Christiansen
    So, I'm looking at rebuilding my home server. My current setup is the following:
    - Windows 7 Ultimate
    - 1TB boot drive (my smallest drive)
    - Windows dynamic spanned volume containing 1x 1TB drive and 2x 2TB drives, totalling 5TB
    I am upgrading to a hardware RAID controller, and I would like to run Hyper-V Server Core. However, I want to retain the ability to join my "file server" to a HomeGroup, so I must use Windows 7. I know VHDs can only be something like 127GB, so I obviously need to directly connect the disks to my Windows 7 machine. Here is my plan:
    - Server Core 2008 R2 (Hyper-V)
    - 1TB boot drive (storing VHDs for the VMs' boot drives), possibly in a RAID 1 with my other 1TB drive
    - 5x 2TB drives (1x 2TB as hot spare), totalling 10TB, directly attached to a Windows 7 VM so the array can be shared via HomeGroup
    In the past, I directly attached the Windows dynamic volume to a Windows 7 VM, and performance was abysmal. The question is: with hardware RAID, will it really make that much of a difference? Server specs:
    - Intel Core 2 Quad Q9550 2.83GHz
    - Asus Maximus II Formula (PCI-E x16)
    - 8GB DDR2 RAM PC2-6400
    (Yes, I know it's a bit out of date.)

    Read the article

  • ASP Fails with 500 Error

    - by VinceM
    We have a server set up as an IIS box with some static pages and a few ASP pages that handle the form submissions. The ASP is really VBScript that sends a CDO message. After moving these pages to the new server, the form will not submit; it gives a 500 error, and the following shows in Event Viewer:

        Error: The Template Persistent Cache initialization failed for Application Pool 'DefaultAppPool' because of the following error: Could not create a Disk Cache Sub-directory for the Application Pool. The data may have additional error codes.

    I can't seem to find any info on this anywhere... I was thinking it may have something to do with the fact that we created this server from an image of another server. Thanks for your help in advance... Vince
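
    That event typically means the application pool identity cannot create subdirectories under the ASP template disk cache folder -- permissions an imaged system can easily lose. A sketch of the usual check, assuming the IIS 6 default cache path and the stock worker-process group:

        REM grant the IIS worker process group full access to the ASP disk cache
        cacls "%windir%\system32\inetsrv\ASP Compiled Templates" /E /G IIS_WPG:F
        iisreset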

    Read the article

  • Trying to get phpLDAPadmin working on Scientific Linux

    - by techsjs2012
    I am trying to get phpLDAPadmin working on Scientific Linux. I downloaded and installed it, but I am getting the following message; it looks like my PHP is not set up for LDAP. Can someone help me?

        Missing required extension: Your install of PHP appears to be missing LDAP support. Please install LDAP support before using phpLDAPadmin. (Don't forget to restart your web server afterwards.)

    After adding php-ldap, I am now getting this error:

        Unable to connect to LDAP server dvldap01.uftwf
        Error: Can't contact LDAP server (-1) for user
        Failed to Authenticate to server: Invalid Username or Password.
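
    "Can't contact LDAP server (-1)" usually means the hostname doesn't resolve or nothing answers on the port, rather than bad credentials. A couple of checks to run from the web server host, as a sketch (ldapsearch comes from the openldap-clients package):

        # does the name resolve, and does anything answer on the LDAP port?
        getent hosts dvldap01.uftwf
        ldapsearch -x -H ldap://dvldap01.uftwf -b '' -s base '(objectclass=*)'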

    Read the article

  • Reusing a RAID 5 Drive?

    - by User125
    We have two servers (an ML530 G2 and a DL380 G2) with identical HP 10K RPM SCSI drives in RAID 5. One is decommissioned and the other will be decommissioned shortly. However, one of the drives in the production server has failed. My hope was to take one of the drives from the decommissioned server and pop it into the production server. Both are running RAID 5. I broke the array on the decommissioned server; to my knowledge, that should have wiped out all the volume and partition information. However, I do not know if it is safe to then take a drive from the decommissioned server and replace the failed drive with it. Will the existing array see it as a replacement drive, wipe it, and rebuild? Or will it fail because the drive was used in an array before? Is there any remnant data that resides on the drives after deleting a RAID 5 array? These servers are 10-15 years old, so we're just trying to keep them alive until we decommission them. I'm not looking to pay a premium to find a vendor that still sells replacement drives for this system.

    Read the article

  • How to make sure clients update their browser cache when my website is updated?

    - by user64204
    I am using the HTTP 1.1 Cache-Control header to implement client-side caching. Since I update my website only once a month, I would like the CSS and JS files to be cached for 30 days with Cache-Control: max-age=2592000. The problem is that the 30-day period defined by Cache-Control doesn't coincide with the website update cycle: it starts the moment a user visits the site and ends 30 days later, which means an update could occur in the meantime, and users would be running with outdated content for a while. That could break the rendering of the website if, for instance, the HTML and CSS no longer match. How can I cache content client-side for periods of several days but somehow get users to refresh their CSS/JS files after the website has been updated? One solution I could think of: if website updates can be scheduled, the max-age returned by the server could be decreased every day accordingly, so that no matter when people visit the website, the end of the caching period coincides with the update. But changing the server configuration every day goes against one of my sysadmin principles (once it's running, don't touch it).
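
    A common pattern for this, offered as a sketch rather than the poster's own plan: fingerprint the asset URLs so they change on each deploy. The HTML is served with a short max-age, while the CSS/JS keep the long max-age, because an update changes the URL itself and so forces a fresh fetch. The version strings below are placeholders:

        <!-- bump the version string (or use a content hash) at each monthly release -->
        <link rel="stylesheet" href="/css/site.css?v=2012-06">
        <script src="/js/app.js?v=2012-06"></script>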

    Read the article

  • Exchange Full Access issue

    - by Benjamin Jones
    I was just hired as a system admin for a small company. They use Exchange 2010 for their mail server. I've never had a permission issue like this with Exchange, because I previously worked for a larger firm with less responsibility, and the old system admin is LONG GONE, so I can't ask him what he did. The issue: right now ANYONE can gain access to any mailbox and view the mail in it. You might say this is disabled by default and that Full Access has to be granted explicitly -- you'd be right, but the old system admin apparently didn't know what he was doing. So right now user A can open user B's mailbox without ever being granted permission. Here is what I found out: every user in EMC has the Exchange Servers group granted under Full Access Permission. Within the Exchange Servers group, Domain Users is a member, and Domain Users contains every user. So my guess is that this is why all users can access ANY mailbox. The good news: the company is small (35 people) and they are not computer savvy, so hopefully no one has figured out they can open anyone's mailbox (from what I can tell, no one has). Next, with my own domain user in EMC, I deleted the Exchange Servers group from Full Access Permissions and granted access to my user, making sure my user was a member of the Exchange Servers group. I then went to our OWA site, and now I don't have permission to my own mailbox. I reverted everything for my user to the way it was, and now I'm stuck. Any help? I would have thought granting a single user in the Exchange Servers group Full Access to a mailbox would let them open that mailbox... I guess I am wrong.
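
    A way to see exactly which grants exist, and to swap the blanket grant for per-user ones, from the Exchange Management Shell -- a sketch; the mailbox and user names are placeholders:

        # list explicit (non-inherited) Full Access grants on a mailbox
        Get-MailboxPermission -Identity "User B" |
            Where-Object { $_.AccessRights -like "*FullAccess*" -and -not $_.IsInherited }
        # replace the overly broad grant with a specific one
        Remove-MailboxPermission -Identity "User B" -User "Exchange Servers" -AccessRights FullAccess
        Add-MailboxPermission -Identity "User B" -User "User A" -AccessRights FullAccess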

    Read the article

  • Mail not going through: Sendmail issue

    - by Zama Ques
    Some of the mails for a particular domain are not getting delivered from our mail server. We are using Sendmail as the mail server. The following can be seen in the log:

        Oct 21 13:24:59 mailser sendmail[5407]: r9L7st1a005405: to=<[email protected]>, delay=00:00:03, xdelay=00:00:03, mailer=esmtp, pri=120539, relay=mailgw.test.in. [164.X.X.19], dsn=2.0.0, stat=Sent (ok: Message 289953693 accepted)

    For other domains like Yahoo, Gmail, etc., it is working fine. And if I send the mail from the command line using the mailx command on the same server, the message does go through:

        Oct 21 13:30:37 ssdgweb sendmail[5443]: r9L80RFI005440: to=<[email protected]>, ctladdr=<[email protected]> (502/502), delay=00:00:10, xdelay=00:00:10, mailer=esmtp, pri=120329, relay=mailgw.test.in. [164.X.X.19], dsn=2.0.0, stat=Sent (ok: Message 289955601 accepted)

    Please let us know what the issue is and how it can be resolved.
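
    Both log lines show stat=Sent with the relay accepting the message, so from Sendmail's point of view delivery succeeded as far as mailgw.test.in; the loss would then be on or after the relay. A couple of checks on the sending box, as a sketch (the log path varies by distro):

        # anything still queued for the domain?
        mailq
        # follow a specific message through the log by its queue ID
        grep r9L7st1a005405 /var/log/maillog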

    Read the article

  • Copying files between servers by creation time

    - by driftux
    My bash scripting knowledge is very weak, which is why I'm asking for help here. What is the most effective bash script, performance-wise, for finding and copying files from one Linux server to another with the specification described below? I need a bash script that finds only new files, created on server A within the last 0 to 10 minutes, in directories named "Z", and transfers them to server B. I think it can be done by building and executing a command for each new file found: "scp /X/Y.../Z/file root@hostname:/X/Y.../Z/". If the script finds no such remote path on server B, it should skip that file and continue with files whose directories do exist. Files should be copied with their permissions, group, owner, and creation time preserved. X/Y... stands for various directory paths. I want to set up a cron job to execute this script every 10 minutes, so performance is very important in this case. Thank you.
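
    A minimal sketch of one approach, under a couple of assumptions: /X stands for the search root and root@hostname for server B, and since Linux filesystems generally don't expose a creation time, -mmin -10 (modified in the last 10 minutes) stands in for "created recently":

        #!/bin/bash
        B_HOST=root@hostname
        find /X -type d -name 'Z' -print0 |
        while IFS= read -r -d '' dir; do
            # skip the whole directory if it doesn't exist on server B
            # (-n stops ssh from swallowing the rest of find's output)
            ssh -n "$B_HOST" "[ -d \"$dir\" ]" || continue
            # rsync -a preserves permissions, owner, group, and mtime
            # (owner/group preservation needs root on the receiving side)
            find "$dir" -maxdepth 1 -type f -mmin -10 -print0 |
                xargs -0 -r -I{} rsync -a {} "$B_HOST:$dir/"
        done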

    Read the article

  • Why are my DNS Lookups so long (300+ms) when accessing my web site?

    - by Travis
    I'm running a Fedora 11 server with Apache 2. I'm trying to optimize so things are as fast as possible from the server side, and I'm noticing (via Firebug for Firefox) that upon loading the homepage of one of the sites on the web server, a DNS lookup is done for every file it loads (HTML, CSS, JavaScript, GIF, PNG, JPG, etc.). All of the files it is looking up are local to the server, so I'm surprised to see it do a DNS lookup at all. Also, each of these lookups is in the 150-450ms range, which is way too high for my liking. I've tried adjusting /etc/resolv.conf to use Google's Public DNS servers; I restarted the network service and hit the page again, but the numbers didn't go down. I've since reverted to the default DNS servers, as I didn't see any gain. Any ideas on what is causing it to (a) do the DNS lookup in the first place, and (b) take so long when doing the actual lookup? Thanks in advance.
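
    One thing worth isolating, as a sketch: the lookups Firebug reports are performed by the browser, not the server, so the client's resolver is the one that matters here; timing the same name from both ends shows where the delay lives. The hostname is a placeholder:

        # run on the client and on the server, then compare
        dig www.example.com | grep 'Query time'
        # and against a specific public resolver
        dig @8.8.8.8 www.example.com | grep 'Query time'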

    Read the article

  • Dynamically changing one-node Cassandra cluster to two nodes

    - by Jason Axelson
    So I have an application that will be dormant most of the time but will need to handle high bursts a few days out of the month. Since we are deploying on EC2, I would like to keep only one Cassandra server up most of the time, and then on burst days bring one more server up (with more RAM and CPU than the first) to help serve the load. What is the best way to do this? Should I take a different approach? Some notes about what I plan to do:
    - Bring the node up and repair it immediately
    - After the burst time is over, decommission the powerful node
    - Use the always-on server as the seed node
    My main question is how to get the nodes to share all the data, since I want a replication factor of 2 (so both nodes have all the data), but that won't work while there is only one server. Should I bring up 2 extra servers instead of just one?
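
    For reference, the join/leave steps in the list above map onto nodetool operations roughly like this -- a sketch, with the host name as a placeholder:

        # after the burst node bootstraps, stream it a full copy of the data
        nodetool -h burst-node repair
        # when the burst window ends, stream data back and remove the node cleanly
        nodetool -h burst-node decommission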

    Read the article

  • Resource consumption of FreeBSD's jails

    - by Juan Francisco Cantero Hurtado
    Just out of curiosity. An example machine: a dedicated amd64 server with the latest stable version of FreeBSD and UFS partitions. How many resources does FreeBSD consume for each empty jail? I mean, I don't want to know the resource consumption of a jailed server or whatever, just the overhead of each jail itself. I'm especially interested in CPU, memory, and IO. For a few jails the overhead is negligible, but imagine a server with 100 jails.
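
    One way to put numbers on it, sketched here on the assumption that FreeBSD's ps can report the jail ID per process:

        # list running jails
        jls
        # resident and virtual memory of jailed processes (jid > 0 means jailed)
        ps ax -o jid,rss,vsz,ucomm | awk '$1 > 0'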

    Read the article

  • How to install (old) packages for Ubuntu 9.04?

    - by wchrisjohnson
    Based on some excellent feedback by Mark here (http://serverfault.com/questions/285598/should-i-clone-a-physical-server-to-create-a-vm-for-a-staging-server), today I was able to use the VMware Converter to clone my production server for a staging server. However, the NIC won't come up no matter what I do. I attempted to install VMware Tools, as I suspect that its absence might be what prevents the NIC from working (I have the NIC set as a vmxnet3 card in the VM settings). The install failed because several dependencies, as well as the Linux headers, were missing. Given that Ubuntu 9.04 has been EOL'd, the packages I need to get VMware Tools to install are no longer available, and I doubt the Ubuntu 9.04 install CD has them on it. What are my options? I'd rather not upgrade the version of Ubuntu yet, as the point of the VM right now is to maintain parity with the production server. Might I have better luck resetting the driver to use vmxnet2 instead of vmxnet3? Thanks in advance! Chris
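
    EOL'd Ubuntu releases are moved to the old-releases archive rather than deleted, so the packages should still be fetchable by repointing apt. A sketch -- worth backing up sources.list first:

        sudo sed -i 's/\(archive\|security\)\.ubuntu\.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install build-essential linux-headers-$(uname -r)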

    Read the article

  • Network update solutions for a company of ~20 (5 local, 15 remote)?

    - by Margaret
    Hi all. This is probably going to be a bit up in the air, because we're still in the "reaching towards solutions" phase, but I figured I'd see what you guys had to say. Plus, I honestly know very little about systems and what is good and bad practice. My organisation has always more or less worked on the concept of local machines; since it primarily employed contractors who were working from home, each of those people was largely responsible for their own machine, backup procedures, and the like. We're now expanding, though we're still reasonably small (we're up to about 20 staff members). Most people still work remotely, but we have a central office where about five people work. We're getting large enough that we're starting to think it would be a good idea to have a central file server and the like -- if someone gets hit by a bus, we want someone else to know where to look for the files to continue their work. A lot of the people who work for us remotely also work on projects for other companies, so I don't want to force them to log in to our server whenever they're on a network, but I do want to make connecting as painless as possible, to improve utilisation. The other thing is that we're getting more people who would like to remote into the office server and do their work there. Our current remote connection application is an SSH install that allows people access to the network; the problem is, it's a black box to me, and I've never understood how to even connect to it (despite supposedly being de facto sysadmin). Thus far I've been able to bounce questions about how to get it working to the guy who does know it well, but he's leaving the company soon, so we probably need a solution for this that I actually understand. We were knocking around the idea of implementing a VPN with some form of remote desktop, and someone mentioned that this was largely a matter of purchasing a router capable of it; I'm not sure of the truth of that statement. This is what we have in the office:
    - Two shiny new i7 servers, each running Windows Server 2008. The precise eventual layout is still being debated a little, but the current suggestion is that one does the primary database crunching while the other is a warm backup of the databases and runs Reporting Services. They currently have SQL Server 2008 installed, which is being connected to via the 'sa' account. We're hoping to make each person use their own account (preferably one tied to the 'central' password we set up, so we can use Windows Authentication).
    - An older server, running XP Pro, that we are currently using as a test bed for a project that requires access to older versions of software. This machine is also being used to take backups, but I'm thinking of moving that functionality elsewhere.
    - A spare desktop from a guy who left the company (XP Pro). We're thinking of bumping up the hard disk space and using it as the magical file server that's going to solve one particular everything.
    - Assorted desktops, laptops, etc., at least one for each person in the office (a mix of Win XP and Win 7; occasionally a person who normally works remotely might drop in to the office and bring a laptop bearing Vista, but it's pretty rare). All are set up as local user accounts at the moment; I don't know if that's the best arrangement.
    Purchasing more hardware is not a big problem, but we figure we might as well make use of what we've got first. Is Active Directory a big magic wand that's going to solve all the world's problems? Is there some other arrangement we should be looking at instead?

    Read the article

  • IIS, SSL with client certs on web farm

    - by Jeremy
    We're building a web service that will be deployed on an IIS 7.5 farm, secured with SSL, and will also require client certificates that will be mapped to Active Directory accounts. My understanding is that a server certificate needs to be generated for a specific server. If that is the case, then we will need a server certificate for each server in the farm. Because the farm will be load balanced, how do we generate client certificates that will work with any of the servers in the farm?

    Read the article
