Search Results

Search found 52899 results on 2116 pages for 'ado net'.

Page 1987/2116 | < Previous Page | 1983 1984 1985 1986 1987 1988 1989 1990 1991 1992 1993 1994  | Next Page >

  • Screen occasionally flashes black when under load, sometimes does not recover

    - by Oak
    I've built a brand new machine, but to my horror my monitor occasionally flashes black for around a second, then returns to normal. This happens under load (watching videos / playing games) but only sometimes; e.g. it doesn't occur in "Batman: Arkham City" but does in "XCOM: Enemy Unknown". When watching videos it also occurs when they're not full-screen, and it sometimes even happens when the machine isn't doing anything, just sitting at the desktop while I move the mouse around. Has anyone run into this problem and knows of a solution? Additionally, sometimes after the black flash the screen won't return to normal and instead becomes completely corrupted. In these cases even quitting the application doesn't help, but physically disconnecting and reconnecting the monitor fixes the problem. This problem did not occur on my earlier machine, which used the same physical monitor. Additional details:
    - Windows Server 2012, configured as Windows 8, with the latest updates installed
    - NVIDIA GeForce GTX 660 Ti, with the latest driver installed
    - Ample CPU and RAM for playing the above games and for watching videos
    I've read about similar problems elsewhere but could not find a working solution:
    http://www.youtube.com/watch?v=Zt00C-HXFbA&noredirect=1
    http://www.sevenforums.com/hardware-devices/59126-monitor-flashing-black.html
    https://eu.battle.net/d3/en/forum/topic/4079098908?page=4
    http://www.tomshardware.com/forum/347422-33-screen-flickering-black-nvidia-driver-update

    Read the article

  • Why are ISP's installing routers on my site when the feed is a form of ethernet already?

    - by Cosmin Prund
    I'm connected to 3 ISPs right now. Two of them already have routers at my site; the third one announced "they need to install some equipment" when I requested a BGP session. I can only assume they need to install a router, since that connection is already working fine using the usual /30 net block, and the "last-mile" solution is not going to change since they only installed it last week and BGP was in the contract from the beginning. I simply don't understand this: the "feed" is already a form of ethernet. Even though they're using different technologies for the last mile, they all enter the ISP router through an RJ45 WAN port. I assume the ISP router does something really important that can't be done by the big router on the other end of the connection. It must also be something that can hurt them if misconfigured, since they don't trust us (the client) to do that part on our router. And I'm not talking cheap throw-away routers here: one of them is a Cisco 2800.
    Edit to add network details: I'm connected to 3 ISPs, two over radio links and one over fiber optic. One of the radio links is going to be dropped and the other will be turned into fiber sometime next year. The fiber is 20 Mbit, radio 1 is 40 Mbit and radio 2 is 2 Mbit. I've got a /24 of provider-independent address space. I'm not doing anything out of the ordinary with my network; I'm overly connected because my network needs to be "up" all the time.

    Read the article

  • In Stud, which Private RSA Key should be concatenated in the x509 SSL certificate pem file to avoid "self-signed" browser warning?

    - by Aaron
    I'm trying to implement Stud as an SSL termination point before HAProxy as a proof of concept for WebSockets routing. My domain registrar Gandi.net offers free 1-year SSL certs. Through OpenSSL, I generated a CSR which gave me two files:
    domain.key
    domain.csr
    I gave domain.csr to my trusted authority and they gave me two files:
    domain.cert
    GandiStandardSSLCA.pem (I think this is referred to as the intermediary cert?)
    This is where I encountered friction: Stud, which uses OpenSSL, expects there to be an "rsa private key" in the "pem-file" - which it describes as "SSL x509 certificate file. REQUIRED." If I add the domain.key to the bottom of Stud's pem-file, Stud will start but I receive the browser warning saying "The certificate is self-signed." If I omit the domain.key, Stud will not start and throws an error triggered by an OpenSSL function that appears intended to determine whether or not my "pem-file" contains an "RSA Private Key". At this point I cannot determine whether the problem is:
    - The free SSL cert will always be self-signed and will always cause the browser to present a warning
    - I'm just not using Stud correctly
    - I'm using the wrong "RSA private key"
    - The CA domain cert, the intermediary cert, and the private key are in the wrong order
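
    A minimal sketch of the concatenation order that usually avoids the "self-signed" warning: the pem-file holds the private key, then the domain certificate, then the intermediate, so the server can present the full chain. The exact order requirement is an assumption here (check Stud's README); the file names are the ones from the question:
          cat domain.key domain.cert GandiStandardSSLCA.pem > stud.pem
          # sanity checks: the bundle should contain a usable key, and the domain cert's
          # issuer should match the intermediate's subject
          openssl rsa -in stud.pem -check -noout
          openssl x509 -in domain.cert -noout -subject -issuer
    If the browser still reports the chain as untrusted, the intermediate is the usual suspect: it has to be in the bundle, after the domain certificate.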

    Read the article

  • Connecting to same public IP from different locations yields different results

    - by DHall
    Since yesterday I've been unable to access one of my favorite time-wasting sites, boston.com. It starts to load but then gets redirected to pagesinxt or something like that. After some investigation, I've narrowed it down to an issue with cache.boston.com, but only from my work location. I found the IP (216.38.160.107), but even that doesn't work correctly from here at work. When I do the following from another location, I get a nice long CSS file, as expected:
          telnet 216.38.160.107 80
          GET http://cache.boston.com/universal/css/hp_bgcom.css
    From here, I get an error (trimmed for size):
          HTTP/1.1 400 Bad Request
          Your request could not be processed. Request could not be handled
          This could be caused by a misconfiguration, or possibly a malformed request.
          For assistance, contact your network support team.
    Is there any way I can troubleshoot this further on my end? Tracert doesn't tell me anything too useful:
          Tracing route to vwrpx1.ttn.xpc-mii.net [216.38.160.107] over a maximum of 30 hops:
          1 * * * Request timed out.
    Since it's not really work-related, I don't want to bring it up with our network team unless I know what's going on, or there's some risk to the network (e.g. malware or something).
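
    A 400 page worded like that usually comes from an intermediate proxy rather than from the origin. One quick check, assuming curl is available at work, is to send a proper HTTP/1.1 request with a Host header straight to the IP and compare it with the same request from home:
          curl -v -H "Host: cache.boston.com" http://216.38.160.107/universal/css/hp_bgcom.css
          # newer curl builds can pin the hostname to the IP instead, keeping the request identical:
          curl -v --resolve cache.boston.com:80:216.38.160.107 http://cache.boston.com/universal/css/hp_bgcom.css
    If the verbose output shows an unexpected Via or proxy header, something on the work network is rewriting the traffic.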

    Read the article

  • EPP Protocol create multiple domains in one command

    - by yannis hristofakis
    I've seen that the <domain:check> command can check multiple domains in one command. Is it possible to do the same for <domain:create>?
          <?xml version="1.0" encoding="UTF-8" standalone="no"?>
          <epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
            <command>
              <create>
                <domain:create xmlns:domain="urn:ietf:params:xml:ns:domain-1.0">
                  <domain:name>example.com</domain:name>
                  <domain:period unit="y">2</domain:period>
                  <domain:ns>
                    <domain:hostObj>ns1.example.com</domain:hostObj>
                    <domain:hostObj>ns1.example.net</domain:hostObj>
                  </domain:ns>
                  <domain:registrant>jd1234</domain:registrant>
                  <domain:contact type="admin">sh8013</domain:contact>
                  <domain:contact type="tech">sh8013</domain:contact>
                  <domain:authInfo>
                    <domain:pw>2fooBAR</domain:pw>
                  </domain:authInfo>
                </domain:create>
              </create>
              <clTRID>ABC-12345</clTRID>
            </command>
          </epp>

    Read the article

  • How to make an x.509 certificate from a PEM one?

    - by Ken
    I'm trying to test a script, locally, which involves uploading a file using a Java-based program to a FileZilla FTPES server. For the real thing, there is a real certificate on the FZ server, and the upload step (tested alone) seems to work fine. I've installed FileZilla Server on my dev box (so it'll test uploading from localhost to localhost). I don't have a real certificate for it, of course, so I used the "Generate new certificate..." button in FZ. It works fine from an interactive FTPES program (as long as I OK the unknown cert), but from my Java program it throws a javax.net.ssl.SSLHandshakeException ("unable to find valid certification path to requested target"). So how do I tell Java that this certificate is OK with me? (I know there's a way to change the Java program to accept any certificate, but I don't want to go down that route. I want to test it just as it will happen in production, and I don't want to ignore unknown certificates in production.) I found that Java has a program called "keytool" that seems to be for managing this sort of thing, but it complains that the certificate file that FZ generated is not an "x.509" file. A posting from the FZ side said it was "PEM encoded". I have "openssl" here, which looks like it's perfect for converting between certificate formats, but I think my understanding of certificate formats is wrong because I'm not seeing anything obvious. My knowledge of security certificates is a bit shaky, so if my title is stupidly wrong, please help by fixing that. :-)
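
    One likely reason keytool rejects the file is that FileZilla writes the private key and the certificate into the same PEM. A sketch of one way to import it for local testing, assuming openssl and the JDK tools are on the path (the file, alias and class names here are made up):
          # pull just the certificate block out of the FileZilla-generated PEM
          openssl x509 -in fz-generated.pem -out fz-cert.pem
          # import it into a throwaway truststore
          keytool -importcert -alias fz-local -file fz-cert.pem -keystore localtest.jks -storepass changeit
          # point the JVM at that truststore for the test run only
          java -Djavax.net.ssl.trustStore=localtest.jks -Djavax.net.ssl.trustStorePassword=changeit UploadTest
    That keeps the production behaviour (unknown certs are still rejected) while letting the locally generated cert be trusted explicitly for the test.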

    Read the article

  • The Wifi is working fine but no internet connection in Windows 8 [migrated]

    - by Ali
    I'm currently having a problem with my MSI GT70 laptop. It's running Windows 8, and yesterday it requested a restart to update. After the restart and the update I tried to surf the net through Google Chrome; the WiFi connection is perfect, but the page I tried to access never loads, and after a while it shows a failed-to-load-page error. I disabled and re-enabled the WiFi adapter through Device Manager, but still no internet connection. I uninstalled and reinstalled the drivers - still no internet. I updated the WiFi driver - still no internet. I even went into the WiFi configuration, tried changing the DNS and then reset it back to automatic - still no luck. I'm really lost and don't know what to do; I don't want to go digging into the laptop's DNA (aka the registry) and screw things up. I really appreciate any help in this matter. Thanks in advance. Note: I tried to access many pages but no luck. I even tried Firefox, Opera, even IE - still no luck. The internet works fine on my tablet and cellphone; it's only the laptop.
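
    Since the update is the only thing that changed, a common first step is to reset the Windows network stack from an elevated command prompt; this is a generic sequence, not something specific to the GT70:
          ipconfig /flushdns
          ipconfig /release
          ipconfig /renew
          netsh winsock reset
          netsh int ip reset
    then reboot. If name resolution is the only thing broken, ping 8.8.8.8 will still work while ping google.com fails, which narrows it down to DNS.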

    Read the article

  • Missing drive space in Server 2003

    - by Tim Brigham
    I have two drives used for SQL backups which for the last week have been acting strange - the free space indicated by Windows is far off from what WinDirStat, etc. indicates. There should only be about 60 GB of drive space used, but about 160 GB is. This would match the utilization if the last two backup files were still residing on disk. SQL Server is 2000, the OS is Server 2003 x64, running on a VMware 5.0 cluster. OSSEC and McAfee show this system as clean. My current plan is to temporarily attach one of these drives to another VM for analysis. Is there anything more I should be looking at? There were a lot of pages on the net when I was looking for documentation on this issue, but I haven't found this case described. EDIT: Unfortunately even a full reboot did not clear this behavior. I also used Process Explorer to look for open file handles. No dice.
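
    One thing worth ruling out before moving the drive is Volume Shadow Copy storage, which consumes space that WinDirStat can't see; a quick check from a command prompt (built in on Server 2003):
          vssadmin list shadowstorage
          vssadmin list shadows
    If shadow copies account for the missing ~100 GB, the backup software (or a VM-level backup taking VSS snapshots) is the likely cause.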

    Read the article

  • SQL Server performance on VSphere 4.0

    - by Charles
    We are having a performance issue that we cannot explain with our VMware environment, and I am hoping someone here may be able to help. We have a web application that uses a database backend. We have a SQL 2005 cluster set up on Windows 2003 R2 between a physical node and a virtual node. Both physical servers are identical 2950s with 2x Xeon X5460 quad-core CPUs and 64 GB of memory, 16 GB allocated to the OS. We are utilizing an iSCSI SAN for all cluster disks. The problem is this: when running the application under a repeated stress test that adds CPUs to the cluster nodes, the physical node scales from 1 pCPU to 8 pCPUs, meaning we see continued performance increases. When testing the node running vSphere, we see the expected ~12% performance hit for being virtual, and we still scale from 1 vCPU to 4 vCPUs like the physical node, but beyond that performance drops off - by the time we get to 8 vCPUs we are seeing performance numbers worse than at 4 vCPUs. Again, both nodes are configured identically in terms of hardware, guest OS, SQL configuration etc., and there is no traffic other than the testing on the system. There are no other VMs on the virtual server, so there should be no competition for resources. We have contacted VMware for help, but they have not really been any help, suggesting things like setting SQL processor affinity which, while helpful, would have the same net effect on each box and should not change our results in the least. We have looked at all of VMware's SQL tuning guides with regard to vSphere with no benefit. Please help!
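
    One place to look, assuming host access: when a single VM is given as many vCPUs as the host has physical cores, co-scheduling overhead tends to eat the gains. esxtop on the ESXi host will show it:
          esxtop        # press 'c' for the CPU view while the 8-vCPU test is running
          # watch %RDY (ready) and %CSTP (co-stop) for the SQL VM; sustained values in the
          # high single digits or more per vCPU point to scheduling contention rather than SQL
    This is a diagnostic sketch, not a fix, but it distinguishes a hypervisor scheduling ceiling from a SQL configuration problem.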

    Read the article

  • Creating a bootable USB drive from a distro split over two DVD ISOs

    - by Kev
    I am searching and not finding the right way to do this. Please note, I don't think I'm trying for anything strange here. I just want to make a bootable USB stick of a single OS that happens to be larger than one DVD and happens to be larger than FAT32 will allow for in a single file. On our slow connection I spent a long time downloading CentOS 5.9's two DVD ISOs:
    CentOS-5.9-x86_64-bin-DVD-1of2.iso (4.4 GB)
    CentOS-5.9-x86_64-bin-DVD-2of2.iso (718 MB)
    I have a USB stick that I want to somehow get these two ISOs on. Since the first one is 4.4 GB, I can't use ISO2USB because it insists on FAT32. I cannot find an alternative that lets you specify more than one ISO image--of the same distro, I'm not trying for some fancy multi-boot thing--to put on the same stick. I guess I should have downloaded the CD ISOs, but I thought I was "saving time" because then I wouldn't have as many files to run through the md5 checker. There's no IMG file of the whole thing (only a net install version, which I don't want--I want to pre-download everything), otherwise I would've gone for that. So, given that I have these two DVD ISOs, how can I get them on a stick that will boot and make use of both of them properly to install CentOS somewhere? Again, I don't think this is anything out of the ordinary, yet I can't find software/docs that seem to support this. Am I stuck re-downloading everything in CD-sized ISOs just to do this? I found this, but it doesn't run on Windows. I am using Windows to prepare the stick.

    Read the article

  • Why is IIS 7.5 seeing some requests as HTTP/1.0?

    - by Zhaph - Ben Duguid
    While trying to work out why Static File Compression wasn't working on one of our IIS servers, the error was coming back as "NO_COMPRESSION_10", which translates to: Server not configured to compress 1.0 requests. Looking at the requests in Fiddler, I can see that I'm requesting HTTP 1.1, but everything is being sent back as HTTP 1.0. Request (from Chrome, captured via Fiddler):
          GET /css/reset.css HTTP/1.1
          Host: [-----].com
          Connection: keep-alive
          Cache-Control: max-age=0
          If-Modified-Since: Tue, 16 Oct 2012 15:04:34 GMT
          User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11
          Accept: text/css,*/*;q=0.1
          Referer: http://[-----].com/
          Accept-Encoding: gzip,deflate,sdch
          Accept-Language: en-GB,en;q=0.8,en-US;q=0.6
          Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
    Response from IIS:
          HTTP/1.0 200 OK
          Cache-Control: no-cache, no-store
          Pragma: no-cache
          Content-Type: text/html; charset=utf-8
          Expires: -1
          Server: Microsoft-IIS/7.5
          X-AspNet-Version: 4.0.30319
          X-Powered-By: ASP.NET
          Date: Tue, 11 Dec 2012 11:57:03 GMT
          Connection: close
          Content-Length: 108837
    Other servers with the same host that I'm running this site on all respond with HTTP/1.1. How can I persuade IIS to respond with HTTP/1.1 rather than HTTP/1.0? Edit to add: Digging deeper, I can see that some responses from the server are indeed being returned compressed, so I guess really I'm trying to work out why talking to this particular server from our office seems to result in it seeing 1.0 requests, while other servers at the same co-lo don't?
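
    IIS itself rarely downgrades to HTTP/1.0 on its own; a proxy or load balancer between the office and that server rewriting the request is the more common explanation (the Connection: close in the reply fits that). As a workaround while investigating, IIS can be told to compress 1.0 and proxied requests anyway - a sketch using appcmd, assuming nothing else already overrides these attributes:
          %windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression /noCompressionForHttp10:"False" /commit:apphost
          %windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression /noCompressionForProxies:"False" /commit:apphost
    That removes the NO_COMPRESSION_10 symptom but doesn't answer why only this route sees 1.0 requests.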

    Read the article

  • Managing persistent data on an Amazon EC2 web server

    - by Derek
    I've just started trying out Amazon's EC2 service for running an ASP.NET web app which uses a SQL Server 2005 Express database. I have some questions about how to configure and operate it best for reliability, and I'm hoping to tap into some collective wisdom here as this is my first foray into EC2. Here's how I have it configured currently:
    - OS: Windows 2003
    - SQL Server Express 2005
    - Web content stored on an EBS volume (E drive)
    - Database data stored on an EBS volume (E drive)
    - Database backups to the "C drive", then copied off to S3
    - Elastic IP address attached to the production instance
    Now when I make a change to the OS configuration, I make a new AMI using the bundle feature. Unfortunately, I found that this results in significant downtime while the bundle is created and the new instance is started. It seems that when I'm ready to make a new AMI, I should:
    - Start up a new temporary instance.
    - Detach the EBS volume from the production instance.
    - Detach the IP address from the production instance.
    - Attach the IP address to the temporary instance.
    - Attach the EBS volume to the temporary instance.
    - Create an AMI from the production instance.
    - After the production instance restarts, reverse the attach/detach steps to put it back in production.
    Is this the right order of events to prevent any chance of corrupting the EBS volume? Will the EBS volume become corrupt if I detach it while a database write is taking place? Should I snapshot the EBS volume of the production instance and attach it to the temporary instance instead? Or could taking a snapshot of the EBS volume while it's in use cause corruption? Any suggestions to improve the reliability and operations?
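
    Whatever the bundling workflow ends up being, an EBS snapshot before each change is cheap insurance. A sketch with the current AWS CLI (which postdates this question; the IDs are placeholders, and with SQL Server writing you either stop the service first or accept a crash-consistent snapshot):
          # point-in-time snapshot of the data volume
          aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "pre-AMI backup of E drive"
          # build the new AMI from the running instance without the stop/start cycle
          aws ec2 create-image --instance-id i-0123456789abcdef0 --name "web-prod-baseline" --no-reboot
    Detaching a mounted volume mid-write is the risky part; snapshotting it is not, beyond the consistency caveat above.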

    Read the article

  • CloudFront with Custom Origin and ELB

    - by kmfk
    We are using CloudFront for our static assets but also wanted to allow for gzip. We set up a new distribution with a custom origin pointing back to our application servers, which are behind an elastic load balancer. We manually keep the files in sync across the cluster and update them when we publish. However, with this setup, we get nothing but Miss and RefreshHits from CloudFront, which so far has defeated the purpose. Are there any additional settings needed in order to use an ELB as your custom origin? The docs reference this as a viable solution. It appears that when we point the distribution to a single server in our production cluster, CloudFront properly caches our assets. Is it possible that the sticky-sessions cookie and the subsequent header that gets added by it could be an issue?
          Cache-Control: no-cache="set-cookie"  //Added by load balancer
    Any ideas? FYI - currently, we have our custom origin pointing to a single EC2 instance, so caching is working correctly - in case you try to curl the file below. Example headers:
          curl -I http://static.quick-cdn.com/css/9850999.css
          HTTP/1.0 200 OK
          Accept-Ranges: bytes
          Cache-Control: max-age=3700
          Cache-Control: no-cache="set-cookie"
          Content-Length: 23038
          Content-Type: text/css
          Date: Thu, 12 Apr 2012 23:03:52 GMT
          Last-Modified: Thu, 12 Apr 2012 23:00:14 GMT
          Server: Apache/2.2.17 (Ubuntu)
          Vary: Accept-Encoding
          X-Cache: RefreshHit from cloudfront
          X-Amz-Cf-Id: K_q7Zy3_jdzlEJ85ukELVtdx1GmuXqApAbZZ7G0fPt0mxRMqPKX5pQ==,RzJmPku-rEIO9WlvuSoKa8hiAaR3dLk5KC4cQMWWrf_MDhmjWe8n6A==
          Via: 1.0 28c34f9fbf559a21ee16594849e4fc9c.cloudfront.net (CloudFront)
          Connection: close

    Read the article

  • Intel Ethernet Bottlenecking Internet?

    - by Donald Darma
    I'm having trouble with my internet speed. I just built a new PC and everything is fine. I installed the Intel drivers and connected to the internet. It connects, but I'm only getting half the speed I should be. My normal speed is 20 Mbps, but speedtest.net is only showing 10. It can't be my ISP (which is TWC, if anyone is asking) because my other devices like my laptop and my smartphone are showing 20 down. Here's my system:
    - CPU: i5 4430
    - HSF: stock cooler
    - Mobo: Gigabyte Z87MX-D3H
    - GPU: 2x MSI R7950-3GD5/OC BE
    - RAM: Crucial Ballistix Tactical Tracer 8GB dual channel
    - PSU: Silencer High Performance Power Supply 750 Watt 80+ (a subdivision of OCZ)
    - HDD: Seagate Barracuda 7200RPM 3TB
    - SSD: Samsung 840 Evo 120 GB
    - Case: Corsair Obsidian 350D
    Edit: I am using the stock adapter that is on the motherboard. I know for a fact that the cable is good because I used it on my laptop and it ran fine; it's a CAT5E cable. I also ran iperf and it's giving me the same result, 10 Mbps.
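
    One quick thing to check is what rate the onboard NIC actually negotiated; from a command prompt (a generic WMI query, nothing Intel-specific):
          wmic nic where "NetEnabled=TRUE" get Name,Speed
    If the reported speed is 1000000000 (1 Gbit), the adapter and cable are fine and the bottleneck is elsewhere - driver offload settings (Large Send Offload, interrupt moderation) in the adapter's Advanced properties are the usual next suspects, though that's an assumption rather than a known fix for this board.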

    Read the article

  • Administrative shares in Windows 7 Pro not visible

    - by Chris Tybur
    My desktop machine has a clean install of Windows 7 Professional. For some reason the standard administrative shares Admin$, C$, D$, etc are not visible, either in Computer Management - Shared Folders - Shares or via net share. I also have a laptop with a clean install of Windows 7 Professional, and I can see the admin shares in both places. As such, I can map to \\laptop\c$ from the desktop, but I can't map to \\desktop\c$ from the laptop. I pretty much took the defaults during the Windows 7 installations. I've tried adding LocalAccountTokenFilterPolicy to the registry on the desktop, but that didn't work. On the desktop I've also disabled UAC, turned off Windows firewall, removed it from a homegroup, made sure file and printer sharing is turned on, but nothing has worked. There is some subtle difference between the two machines that I can't seem to find. I'm logging into both machines using a local account that is in the Administrators group. Both accounts have the same name and password. I really don't want to have to create a new share for the desktop's C drive, especially since C$ is visible and working on the laptop and therefore I should be able to make it work on the desktop. Any idea why the admin shares would work on one machine and not another? Or why LocalAccountTokenFilterPolicy would fail?
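
    Windows 7 creates the C$/ADMIN$ shares automatically unless that behaviour has been switched off, so it's worth checking the AutoShareWks value on the desktop; a sketch from an elevated prompt (the value may simply not exist, which means the default of "enabled"):
          reg query "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v AutoShareWks
          rem if it comes back 0, re-enable and restart the Server service:
          reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v AutoShareWks /t REG_DWORD /d 1 /f
          net stop server /y && net start server
          net share
    Security or tuning tools sometimes set that value during install, which would explain why one clean-install machine has the shares and the other doesn't.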

    Read the article

  • Methods to transfer files from Windows server to linux server

    - by Raze2dust
    Hi, I need to transfer webserver-log-like files periodically from Windows production servers in the US to Linux servers here in India. The files are ~4 MB in size each and I get about 1 file per minute. I can accept about 5 minutes of lag between the files getting written on Windows and them being available on the Linux machines. I am a bit confused between the various options here, as I am quite inexperienced in such design:
    1. I am thinking of writing a service in C#.NET which will periodically archive, compress and send them over to the Linux machines. These files are pretty compressible: WinRAR can convert 32 MB of these files into a 1.2 MB archive, so that should solve the network transfer speed issue. But then how exactly do I transfer files to Linux? I could mount a Linux drive on the Windows server using Samba, or I could create an FTP server, or send the file serialized as a POST request. Which one would be good? Also, I have to minimize the load on the Windows server.
    2. Mount the Windows drive on Linux instead. I could use the mount command or I could use Samba here (what are the pros and cons of these two?). I can then write the compressing and copying part on Linux itself.
    I don't trust the internet connection to be very stable, so there should be a good retry mechanism and failure protection too. What are the potential gotchas in these situations, and other points that I must be worried about? Thanks, Hari
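
    For option 2, the Linux-side pull can be done with standard tools; a sketch, assuming the Windows servers expose the log directory as a share and cifs-utils/rsync are installed (the share name and credentials are made up):
          # mount the Windows share read-only
          mount -t cifs //winserver/weblogs /mnt/weblogs -o ro,username=loguser,password=secret
          # copy only files that haven't been transferred yet; run from cron every minute
          rsync -a /mnt/weblogs/ /data/weblogs/
    Note that a CIFS mount doesn't compress anything over the WAN; to get compression in transit you'd need an rsync daemon or SSH endpoint on the Windows side (e.g. cwRsync) so that rsync's -z option actually applies.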

    Read the article

  • Wordpress Forbidden page

    - by ffffff
    If I request a post in preview mode without being logged in (so without the required authorization), the returned HTML has an empty body. The response HTML is this:
          <html xmlns="http://www.w3.org/1999/xhtml" lang="ja" xml:lang="ja">
          <head profile="http://purl.org/net/ns/metaprof">
            <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
            <meta http-equiv="Content-Script-Type" content="text/javascript" />
            <meta name="generator" content="WordPress 2.9.2" />
            <meta name="author" content="blog" />
            <link rel="alternate" type="application/atom+xml" href="http://blog.example.com/feed/atom/" title="Atom cite contents" />
            <link rel="start" href="http://blog.example.com" title="blog Home" />
            <link rel="stylesheet" type="text/css" href="http://blog.example.com/wp-content/themes/blog/style.css" />
            <meta name="description" content="blog" />
            <title>blog - </title>
          </head>
          <body class="individual single">
          </div>
          </body>
          </html>
    Do you have any solutions?

    Read the article

  • Can a website company that builds 4-5 websites a year afford dedicated hosting?

    - by Petras
    We manage about 30 websites that use shared ASP.NET SQL Server web hosting. These are typical small/medium business websites and they perform fine in this environment. Recently I was looking at VPS hosting in this thread: http://serverfault.com/questions/128329/how-do-you-host-multiple-public-facing-websites-on-a-vps After contacting a provider in one of the replies, I was told that VPS hosting is not recommended for 30 sites, even if they are small - the resource requirements might be too great even for a VPS - so I should turn to dedicated hosting. The lowest-cost dedicated hosting is $219 per month (see http://www.serverintellect.com/dedicated/pentiumdservers.aspx). But this is only for a single processor, which seems too light for a machine running both IIS and SQL. In our office all the developers work on quad cores, so I assume I'd really need the quad processor. However, this starts at $599 monthly. Now, I won't be able to transfer all of our 30 sites to this machine; I'd only be able to transfer say 5 or 6. However, moving forward, I'd be able to host all future sites on this machine, which amounts to 4-5 per year. Let's look at the economics. Shared hosting costs are typically $16.95 monthly (see http://www.crystaltech.com/dotnet.aspx). So here's the dilemma:
    First month costs: $599
    First month revenue: 6 x $16.95 = $101.70
    Loss in first month: $497.30
    First year costs: $599 x 12 = $7,188
    First year revenue: 6 x $16.95 x 12 + 5 x $16.95 x 6 (averaged) = $1,728.90
    Loss in first year: $5,459.10
    At $16.95 per site, the quad-core box needs roughly 36 sites on it just to break even each month, so clearly it is going to take years for this server to pay for itself. It just doesn't seem economical! Am I missing something here, or is dedicated not the way to go with the amount of sites we build?

    Read the article

  • Sparc Solaris 2.6 will not boot

    - by joshxdr
    I have a very old SPARC Solaris network that was working fine last week, but after a power outage none of the workstations will boot. The network looks like this:
    host A: Solaris 2.6, shares /export/home to the network by NFS
    host B: Solaris 8, runs the NIS server. Mounts /export/home by NFS.
    host C: RHEL5, shares /share to the network by NFS. Mounts /export/home by NFS.
    I figured that the main problem was host A, since you need the home directories available for the other workstations to boot(?). Host A does not mount anything by NFS as far as I know. However, this workstation will NOT boot. The OBP boot sequence looks like this:
          Boot device <blah>
          configuring network interface le0
          Hostname <hostname>
          check file system <everything ok>
          check ufs filesystem <everything ok>
          NIS domainname is <name>
          starting router discovery
          starting rpc services: rpcbind keyserv ypbind done
          setting default interface for multicast: add net 224.0.0.0: gateway <hostname>
          <HANGS at this point>
    Is there some kind of debug mode so that I can get more detail as to why the workstation won't boot? Is my network structure inherently susceptible to power outages? Is there a way I can boot up to a command line so I can at least turn off the NFS mounting?
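
    To answer the "boot to a command line" part: Solaris 2.6 can be brought up single-user from the OpenBoot prompt, which stops before the rc2.d network/NFS scripts run. A sketch (the rc script names are from memory, so check ls /etc/rc2.d first):
          ok boot -s
          # at the single-user root shell, temporarily disable the suspect startup pieces,
          # e.g. NFS client and autofs, by renaming their S scripts to lowercase s
          mv /etc/rc2.d/S73nfs.client /etc/rc2.d/s73nfs.client
          mv /etc/rc2.d/S74autofs /etc/rc2.d/s74autofs
    Then exit (or reboot) to continue to multi-user and see which service the hang was waiting on.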

    Read the article

  • Site Goes Offline Every Day At Midnight - No One Knows Why

    - by HollerTrain
    0 down vote favorite Seems today a website I manage has been going online and offline between 12a and 12:25a. I have no idea what is causing the issue so I am seeking guidance on where to start. It is a Wordpress based site. So here is what I DO know: I have a pingdom account which alerts me when the site goes offline so we can see every day, like clockwork, the site goes on/off. At the time of the ups/downs I see a lot of strain on the memory usage. Look at the load average when the site is going online/offline (http://screencast.com/t/BRlfXkqrbJII). Then I ran this command to restart http (http://screencast.com/t/usVtYWZ2Qi) and the memory usage then goes down to this (http://screencast.com/t/VdTIy3bgZiQB). An hour after I restarted http, the site then went offline/online so restarting the http didn't do much help. When the site is going offline/online, I ran the top command and get this (http://screencast.com/t/zEwr7YQj3). Here is a top command when the site is at it's lowest (http://screencast.com/t/eaMfha9lbT - so this would be dubbged "normal"). Here is a bandwidth report (http://screencast.com/t/AS0h2CH1Gypq). The traffic doesn't seem to be that much (http://screencast.com/t/s7hrWNNic1K), but looking at my times the site is going up/down this may be one of the reasons? I have the dvp Nitro package at Media Temple (http://mediatemple.net/webhosting/nitro/). So at this point I would request some help in trying to figure out what the cause of this is, and how I can go about pinpointing this issue. ANY HELP is greatly appreciated.

    Read the article

  • TEMP environment variable occasionally set incorrectly

    - by Roger Lipscombe
    Occasionally, I find my TEMP and TMP environment variables set to C:\Windows\TEMP. They should be set to %USERPROFILE%\AppData\Local\Temp, and are configured correctly in System Properties. This manifests itself as error messages like the following: ---> System.InvalidOperationException: Unable to generate a temporary class (result=1). error CS2001: Source file 'C:\Windows\TEMP\gb_pz65v.0.cs' could not be found error CS2008: No inputs specified ...which occurs in various .NET applications (in particular Visual Studio 2010 or SQL Server Management Studio). Alternatively, SQL Server Management Studio will report: Value cannot be null. Parameter name: viewInfo (Microsoft.SqlServer.Management.SqlStudio.Explorer) If I run PowerShell elevated, then $env:TEMP is set correctly. If I run PowerShell non-elevated, then it's not. I believe that it should be set correctly in both cases. If not, it's the wrong way round. The same is true for CMD.EXE. Rebooting fixes it, temporarily, until something breaks it again. Presumably something loaded into Explorer.exe is messing with its environment variables, but what? The values in the registry are correct, even while this is happening: HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment has TEMP = %SYSTEMROOT%\Temp HKCU\Environment has TEMP = %USERPROFILE%\AppData\Local\Temp By setting a breakpoint on shell32!RegenerateUserEnvironment, I'm able to trap it when it happens, but I still don't know why explorer.exe is reading the wrong environment variables. I can reproduce it consistently by broadcasting a WM_SETTINGCHANGE message (I wrote a one-line C++ program to do this). Watching the activity in Process Monitor shows that explorer.exe doesn't even look at HKCU\Environment. What is going on?

    Read the article

  • Lots of strange IP addresses in my Windows Firewall logs. Concern?

    - by gmoore
    Was trying to debug a Samba sharing issue with Mac OS X, so I turned on logging for my Windows Firewall. I didn't expect a lot of connections, but the log filled up quickly. Here's a sample:
          2009-12-21 08:49:32 OPEN-INBOUND TCP 192.168.0.4 192.168.0.3 56335 139 - - - - - - - - -
          2009-12-21 08:49:33 OPEN-INBOUND TCP 192.168.0.4 192.168.0.3 56337 139 - - - - - - - - -
          2009-12-21 08:50:02 OPEN UDP 192.168.0.3 68.87.73.242 1389 53 - - - - - - - - -
          2009-12-21 08:50:02 CLOSE TCP 192.168.0.3 212.96.161.238 1391 80 - - - - - - - - -
          2009-12-21 08:50:02 OPEN UDP 192.168.0.3 68.87.71.226 60290 53 - - - - - - - - -
          2009-12-21 08:50:02 OPEN TCP 192.168.0.3 212.96.161.238 1391 80 - - - - - - - - -
          2009-12-21 08:50:02 OPEN TCP 192.168.0.3 212.96.161.238 1393 80 - - - - - - - - -
          2009-12-21 08:50:04 CLOSE TCP 192.168.0.3 212.96.161.238 1393 80 - - - - - - - - -
          2009-12-21 08:50:41 CLOSE UDP 192.168.0.3 192.168.0.4 137 50300 - - - - - - - - -
    I can pick out the local IP addresses (192.168.0.3 is my Windows XP machine, 192.168.0.4 is Mac OS X) as I debug the Samba issue. But some of the others resolve to Comcast (my ISP) and others resolve to weird hosts like van-dns.com and navisite.net. It doesn't look like any connection sent/received any bytes. I used the reference here: http://technet.microsoft.com/en-us/library/cc758040%28WS.10%29.aspx. Is it a cause for concern?

    Read the article

  • How could one track all emails sent from employees?

    - by Schnapple
    My client runs a small business. This business has a small number of employees. For various reasons, my client would like to be able to have a copy of all of the emails sent from their employees BCC'd to them. The net effect here would be similar to the access they would have if they hosted their email through Exchange but the business is too small to make this a feasible option. They are currently hosted through GoDaddy. I have not investigated it myself personally but apparently GoDaddy can do something along these lines for all incoming email but not for outgoing email. Is there a way to set up email accounts for a particular domain to where a specified admin user could be copied on all outgoing email? UPDATE: I've modified the title to reflect that it's employees not just users who are the goal here. Also I forgot to mention how they currently do email through GoDaddy - POP3. I think maybe IMAP is also possible through GoDaddy, not sure. And yes, the bottom line here is to basically emulate a feature of a larger-class platform through a smaller, cheaper platform. Opinion-only answers should probably be relegated to the comments. For the sake of argument let's say that any legal requirements have been met.
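
    If the mail stays on GoDaddy's shared POP3/IMAP hosting there is no server-side hook for outgoing copies, so the practical options are per-client rules (fragile) or moving outbound mail to a server the business controls. For the latter case, a sketch of the relevant Postfix settings (the domain is hypothetical; other MTAs and hosted Exchange/Google Apps offer equivalent journaling features):
          # main.cf - copy every message the server handles to an archive mailbox
          always_bcc = archive@example.com
          # or, to copy only mail sent by your own users:
          sender_bcc_maps = hash:/etc/postfix/sender_bcc
          # /etc/postfix/sender_bcc:
          # @example.com    archive@example.com
    The always_bcc form also captures inbound mail; sender_bcc_maps limits it to mail from your own senders, which matches what the client asked for.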

    Read the article

  • Windows Server 2008 R2 Virtual Network Setup

    - by jpearl01
    Some background: I'm very much new to networking in general, and to virtualization in particular. I'm trying to set up a series of VMs as we are transitioning to a thin-client setup. I have been supplied a limited number of static IP addresses. The server is located in an offsite building which houses the network we use to connect to the internet, share folders etc. The setup I've been trying for is this: the host OS (Windows Server 2008 R2) is bound to one NIC using one of the static IPs (say Nic1, IP 10.255.6.61). I've set up another external virtual network attached to another physical NIC, and a virtual private network attached to no NIC. There is one VM running the same OS as the host. This VM is connected both to the external virtual network (using another static IP, say Nic2, IP 10.255.6.62) and to the virtual private network (I gave it a static IP of 192.168.88.1, subnet mask 255.255.255.0). The virtual private network is connected to all the other VMs. I'd like to share the internet connection with all the other VMs on the private virtual network, so I installed the RRAS role on the server connected to Nic2 and selected the option to share the internet over the VPN. I've run through the RRAS wizard a few times, trying different configurations, but none of them seem to let the other VMs connect to the net. The VMs connect to the virtual private network fine - they are assigned an IP address and everything - but they get no internet access and can't reach the rest of the network either. The other problem is that in general I connect to the VMs with RDP. Will that be possible with a setup like this? i.e. will the VMs show up as computers on the network? If not, what are my other options? Thanks! ~josh

    Read the article

  • Postfix additional transports - is it working?

    - by threecheeseopera
    I have enabled two additional transports in my Postfix config to deal with recipient domains that demand connection limiting, per the instructions here at serverfault. However, I have no idea if this is working or not; in fact, I think it is not working, due to the send speeds I am seeing in the logs. How might I determine if my additional transports are working? If they aren't, do you have any tips on figuring out why? And, do you have any comments on my particular configuration? (am I a bucket of fail?) I have enabled the additional transports in master.cf:
          smtp      inet  n       -       -       -       -       smtpd
          careful   unix  -       -       n       -       10      smtp
            -o smtp_connect_timeout=5
            -o smtp_helo_timeout=5
          cautious  unix  -       -       n       -       -       smtp
            -o smtp_connect_timeout=5
            -o smtp_helo_timeout=5
    I have set up the transport mapping file /etc/postfix/transport:
          hotmail.com     cautious:
          yahoo.com       careful:
          gmail.com       cautious:
          earthlink.net   cautious:
          msn.com         cautious:
          live.com        cautious:
          aol.com         careful:
    I have set up the transport mapping and some connection-limiting settings in main.cf:
          transport_maps = hash:/etc/postfix/transport
          careful_initial_destination_concurrency = 5
          careful_destination_concurrency_limit = 10
          cautious_destination_concurrency_limit = 50
    Finally, I converted the transport file to a db per the Postfix docs:
          #> postmap /etc/postfix/transport
    And then restarted Postfix. I do see my transport_maps setting when I run postconf, but I do not see any of the transport-specific settings ('careful_xxx_yyy_zzz'). Also the mail logs do not appear to be different in any way to what they were previously. Thanks!!!
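
    Two quick ways to confirm the transports are actually being used, assuming syslog writes Postfix logs to /var/log/mail.log (the path varies by distro):
          # delivery agents log under their master.cf service name, so deliveries routed
          # through the new transports should show up as postfix/careful and postfix/cautious
          grep -c 'postfix/careful'  /var/log/mail.log
          grep -c 'postfix/cautious' /var/log/mail.log
          # confirm the transport lookup itself resolves the way you expect
          postmap -q yahoo.com /etc/postfix/transport
    Not seeing careful_* / cautious_* in postconf output is expected - postconf only prints parameters it knows about, and the per-transport overrides are still picked up by the queue manager.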

    Read the article
