Search Results

Search found 8020 results on 321 pages for 'webcenter sites'.

  • Connecting to same public IP from different locations yields different results

    - by DHall
    Since yesterday I've been unable to access one of my favorite time-wasting sites, boston.com. It starts to load but then it gets redirected to pagesinxt or something like that. After some investigation, I've narrowed it down to an issue with cache.boston.com, but only from my work location. I found the IP (216.38.160.107), but even that doesn't work correctly from here at work. When I do

        telnet 216.38.160.107 80
        GET http://cache.boston.com/universal/css/hp_bgcom.css

    from another location, I get a nice long CSS, as expected. From here, I get an error (trimmed for size):

        HTTP/1.1 400 Bad Request
        Your request could not be processed. Request could not be handled
        This could be caused by a misconfiguration, or possibly a malformed request.
        For assistance, contact your network support team.

    Is there any way I can troubleshoot this further on my end? Tracert doesn't tell me anything too useful:

        Tracing route to vwrpx1.ttn.xpc-mii.net [216.38.160.107]
        over a maximum of 30 hops:
          1     *        *        *     Request timed out.

    Since it's not really work-related, I don't really want to bring it up to our network team unless I know what's going on, or if there's some risk to the network (ex. malware or something).
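
    A minimal way to repeat that test with an explicit Host header, in case a transparent proxy at work is rejecting the proxy-style request line (that's an assumption; the commands below are just a sketch):

        # Plain-socket request with a Host header instead of a full URL:
        printf 'GET /universal/css/hp_bgcom.css HTTP/1.1\r\nHost: cache.boston.com\r\nConnection: close\r\n\r\n' | nc 216.38.160.107 80

        # Same idea with curl; -v also shows any proxy headers (Via, X-Cache)
        # in the response that would confirm an intermediary is answering:
        curl -v --resolve cache.boston.com:80:216.38.160.107 http://cache.boston.com/universal/css/hp_bgcom.css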

  • Configuration issue with respect to .htaccess file on Ubuntu

    - by Registered User
    I am building an application, tshirtshop. I have the following configuration in /etc/apache2/sites-enabled/tshirtshop:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/tshirtshop
            <Directory /var/www/tshirtshop>
                Options Indexes FollowSymLinks
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    and the following in the .htaccess file at /var/www/tshirtshop/.htaccess:

        <IfModule mod_rewrite.c>
            # Enable mod_rewrite
            RewriteEngine On
            # Specify the folder in which the application resides.
            # Use / if the application is in the root.
            RewriteBase /tshirtshop
            #RewriteBase /
            # Rewrite to correct domain to avoid canonicalization problems
            # RewriteCond %{HTTP_HOST} !^www\.example\.com
            # RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
            # Rewrite URLs ending in /index.php or /index.html to /
            RewriteCond %{THE_REQUEST} ^GET\ .*/index\.(php|html?)\ HTTP
            RewriteRule ^(.*)index\.(php|html?)$ $1 [R=301,L]
            # Rewrite category pages
            RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2&Page=$3 [L]
            RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2 [L]
            # Rewrite department pages
            RewriteRule ^.*-d([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&Page=$2 [L]
            RewriteRule ^.*-d([0-9]+)/?$ index.php?DepartmentId=$1 [L]
            # Rewrite subpages of the home page
            RewriteRule ^page-([0-9]+)/?$ index.php?Page=$1 [L]
            # Rewrite product details pages
            RewriteRule ^.*-p([0-9]+)/?$ index.php?ProductId=$1 [L]
        </IfModule>

    The site works on localhost, but it behaves as if no .htaccess rule were specified: if I view a page as http://localhost/tshirtshop/nature-d2 I get a 404 error, but if I view the same page as http://localhost/tshirtshop/index.php?DepartmentId=2 then I can view it. Can anyone point out the mistake, if any, in the above configuration, or is there anything else I need to check?

        sudo apache2ctl -M
        Loaded Modules:
         core_module (static)
         log_config_module (static)
         logio_module (static)
         mpm_prefork_module (static)
         http_module (static)
         so_module (static)
         alias_module (shared)
         auth_basic_module (shared)
         authn_file_module (shared)
         authz_default_module (shared)
         authz_groupfile_module (shared)
         authz_host_module (shared)
         authz_user_module (shared)
         autoindex_module (shared)
         cgi_module (shared)
         deflate_module (shared)
         dir_module (shared)
         env_module (shared)
         mime_module (shared)
         negotiation_module (shared)
         php5_module (shared)
         reqtimeout_module (shared)
         rewrite_module (shared)
         setenvif_module (shared)
         status_module (shared)
        Syntax OK

    I am using Apache2 on Ubuntu 12.04.
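
    With rewrite_module loaded and AllowOverride All in place, one quick sanity check is whether the .htaccess is parsed at all (a sketch, using the paths above):

        # Deliberately break the .htaccess and see whether Apache notices:
        echo 'BogusDirective' | sudo tee -a /var/www/tshirtshop/.htaccess
        curl -I http://localhost/tshirtshop/
        # HTTP 500 here means the file is being read (so the rules themselves
        # are the problem); anything else means AllowOverride isn't reaching
        # this directory. Then remove the test line:
        sudo sed -i '$ d' /var/www/tshirtshop/.htaccess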

  • Firefox window disappears

    - by Lord Torgamus
    Now this is odd. At some point in the last half hour, my Firefox window disappeared. I didn't notice, as I was working in another program at the time. No Firefox icon shows up with Alt-Tab, and no Firefox listing shows up under the Applications tab in the Task Manager. There is a Firefox entry under the Processes tab.

    Normally, I probably wouldn't have noticed, just opened Firefox up again, but I'm listening to an Internet radio station and the stream never stopped. When I did open a new Firefox window, it showed up in the Task Manager's Applications tab.

    I'm running Windows XP, and my Firefox has the add-ons Adblock Plus, BetterPrivacy, Cert Viewer Plus, DOM Inspector, Firebug, Greasemonkey, Java Quick Starter, Live HTTP headers, Microsoft .NET Framework Assistant, NoScript, WebDeveloper and XPather. The radio station is Slacker; it's never given me any trouble before, and I've been using it for months. I don't think there was anything unusual in my open tabs; just a few static pages at non-sketchy sites like Java APIs, plus GMail and the aforementioned Slacker.

    Googling brought up a handful of similar-but-not-quite-the-same errors, none of which had useful resolutions. Does anyone know how to bring that window back and/or prevent this from happening again?

  • Nginx proxy upstream cached?

    - by Julian H. Lam
    Attempting to resolve an issue that's been annoying me for a bit. I've distilled the symptoms into a set of reproducible steps:

    I have two sites, siteA and siteB. They are both Node.js applications running on different ports (for the sake of example, 4567 and 4568). Both applications have their own file in sites_available (plus a symlink from sites_enabled), which contain the directives proxy_pass http://node_siteA/ and proxy_pass http://node_siteB/ respectively, inside of a location block. They also each have an upstream block (defined globally?):

        upstream node_siteA {
            server 127.0.0.1:4567;
        }

        upstream node_siteB {
            server 127.0.0.1:4568;
        }

    Site A and Site B have nothing to do with each other. Yes, I am restarting (reloading, actually) nginx every time I make a change. If I take down site B and attempt to access it via the web, I am served site A. Why is this?

    Thoughts:

    Other times, when I create a new Site C, for example, nginx refuses to show me anything except "Welcome to nginx!" for ~5 minutes. This suggests a resolver timeout, perhaps? When I access Site B after its config has been deleted, and it sends me to Site A, this sounds like nginx sending me to servers in a round-robin fashion...
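
    For reference, a minimal sketch of that layout in one place (the server names are assumptions). The relevant nginx behavior: when no server_name matches the request's Host header, nginx answers from the default server for that port — which is exactly the "site B shows site A" symptom once site B's block disappears:

        upstream node_siteA { server 127.0.0.1:4567; }
        upstream node_siteB { server 127.0.0.1:4568; }

        server {
            listen 80;                      # first declared = default server
            server_name sitea.example.com;
            location / { proxy_pass http://node_siteA/; }
        }

        server {
            listen 80;
            server_name siteb.example.com;
            location / { proxy_pass http://node_siteB/; }
        }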

  • Firefox will not remember local site cookie

    - by Campo
    This is a weird one. We have a production server (Server 2008) and two staging servers (Server 2008 and Server 2003). I have sites on all of these, and they all use cookies.

    On the production server, when browsing to our site www.supernovainteractive.com, there is a cookie that detects when you visited the site, and it will not refresh the logo animation (top left hand side) on clicking to another page. This works for all browsers on the production server.

    I'm not sure what's going on, but for some reason cookies are not working on one site on the 2008 staging server only. This is when browsing using Firefox (3.6.3); they work fine in all other browsers (IE, Chrome, Safari, Opera). In addition, the 2003 staging server works fine.

    You can test on the Supernova Interactive site by noticing the logo in the top left corner. It uses a cookie to detect if you've already seen the animation. Once you've seen it once, it doesn't animate again until tomorrow. Currently, it's animating every time. I have opened an outside-facing port so others can see the issue: http://exchange.supernova.com:10009

    Any ideas on this one? Firewalls are off on the server. Notice you do not get a cookie from exchange.supernova.com.
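
    One quick way to see what's actually on the wire, independent of any browser (just a sketch; the URL is the test port from the question):

        # Does the server send a Set-Cookie header at all on this vhost?
        curl -sI http://exchange.supernova.com:10009/ | grep -i '^set-cookie'

    If the header is present but Firefox still drops it, the cookie's domain/path attributes or a malformed expiry date are the usual suspects.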

  • Cisco RV042 VPN with Dynamic IPs - Remote Gateway Not Resolving

    - by Rister
    I have an existing network setup that I inherited from my predecessor. Currently there are two sites, each with a Linksys RV042 VPN router running the 1.3.12.19-tm firmware. They are currently set up with a Gateway to Gateway VPN. One site has a static IP; the other has a dynamic IP with a hostname set up on no-ip.com.

    My company is looking to set up another site, so I purchased another RV042, only this one was Cisco branded and is running the latest firmware. I had assumed that I would be able to configure a VPN from our main office (the dynamic IP) to the new site with this router quite easily. However, when I set up a new VPN tunnel on either device, it stays on "Waiting for Connection" and the Remote Gateway shows an IP address of 0.0.0.0 rather than the remote IP address. The other VPN tunnel is still working, and I don't see any obvious misconfiguration on the new router.

    It seems that the router is not resolving the Dynamic DNS address and therefore not giving me the option to connect the VPN. Does a Gateway to Gateway VPN work with dynamic IP addresses on each end? Are the firmware versions not compatible? Is there something I've missed?
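
    A first check that separates DNS from VPN configuration (the hostname below is a placeholder for the actual no-ip.com name):

        # From a PC at the new site -- does the DDNS record resolve at all?
        nslookup main-office.example-no-ip.com
        # If this resolves from a PC but the RV042 still shows 0.0.0.0 as the
        # remote gateway, the router's own WAN DNS server settings are the
        # next thing to verify.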

  • Deploying a Django application in a virtual Ubuntu Server

    - by mfsaint
    I have a VirtualBox machine running Ubuntu Server 10.04 LTS. My intention is for this machine to work like a VPS; this way I can learn and prepare for when I get a VPS service. Apache + mod_wsgi for deploying the Django app seems the right choice to me. I have the domain (marianofalcon.com.ar) but nothing else, no DNS.

    The problem is that I'm pretty lost with all the deployment stuff. I know how to configure mod_wsgi (with the django.wsgi file) and Apache (creating a VirtualHost). Something is missing and I don't know what it is. I think that I lack networking skills, and that's the big problem. Trying to host the app on VirtualBox adds some difficulty because I don't know well what IP to use. This is what I've got:

    File placed at /etc/apache2/sites-available:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName www.my-domain.com
            ServerAlias my-domain.com
            Alias /media /path/to/my/project/media
            DocumentRoot /path/to/my/project
            WSGIScriptAlias / /path/to/your/project/apache/django.wsgi
            ErrorLog /var/log/apache2/error.log
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
        </VirtualHost>

    django.wsgi file:

        import os, sys
        wsgi_dir = os.path.abspath(os.path.dirname(__file__))
        project_dir = os.path.dirname(wsgi_dir)
        sys.path.append(project_dir)
        project_settings = os.path.join(project_dir, 'settings')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'
        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()
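
    Since there's no DNS yet, the usual trick is to make the VM reachable from the host and fake the name resolution locally (a sketch; the VM name and port numbers are assumptions):

        # VirtualBox NAT: forward host port 8080 to the guest's port 80
        # (run while the VM is powered off):
        VBoxManage modifyvm "ubuntu-server" --natpf1 "http,tcp,,8080,,80"

        # On the host, point the domain at localhost for testing:
        echo "127.0.0.1 www.marianofalcon.com.ar" | sudo tee -a /etc/hosts

        # Then browse to http://www.marianofalcon.com.ar:8080/

    Bridged networking (so the VM gets its own LAN IP) is the other common option, in which case the /etc/hosts line points at that IP instead.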

  • Apache Bench reports different results with the same page

    - by Aspis
    I'm running into a little problem base-lining an Apache2/fcgi/php-fpm server I am setting up.

    1) If I run ab -n 15000 http://mysite.com/index.php, Apache Bench returns Time per request: 41 ms, but Document Length: 0 bytes and HTML transferred: 0 bytes. The Transfer rate is 7.9 KB/s.

    2) If I run ab -n 15000 http://mysite.com/, Apache Bench returns Time per request: 83 ms, along with the accurate document length and HTML transferred totals.

    The APC cache status reports identical hit counts for both tests. Also, Apache Bench reports no errors in either case. Overall, no errors on any test sites and all logs are clean, etc. DirectoryIndex is set to index.php, so I would expect both of these test runs to produce a similar result.

    My 2 questions are: 1) why the discrepancy? 2) which is the correct result? I've seen plenty of results like test 1 posted (without question), but frankly, from my own experience and that of others, accurate testing is hard to come by, even without goofy issues like this.
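
    ab doesn't follow redirects, and a 0-byte body with no errors usually means the app answered /index.php with a redirect rather than the page. Checking the raw response for each URL shows what ab actually measured (a sketch):

        curl -sI http://mysite.com/index.php   # look for 301/302 + Location
        curl -sI http://mysite.com/            # should be 200 with a length

    If the first returns a redirect, test 2 is the meaningful baseline; test 1 is timing an empty redirect response.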

  • Apache2 doesn't serve PHP scripts correctly

    - by cmbrnt
    I've run into a problem with my Apache 2.2.16 configuration, running on Debian Squeeze. The problem is that it has stopped serving PHP5 scripts completely. When I try to access the sites with Google Chrome, it instead downloads a file called "download", which contains the contents of the script. This is of course not a good thing. It does serve common HTML files perfectly...

    I've been at this for quite a while now, and after all the googling and troubleshooting, I thought it would be a good time to ask you guys. Here's what I've got:

    - The php5 and libapache2-mod-php5 packages are installed
    - /etc/apache2/mods-available contains both php5.load and php5.conf, and these are symlinked from the mods-enabled directory
    - The /etc/php5/ directory is left untouched since the installation

    Here's the contents of /etc/apache2/mods-available/php5.load:

        LoadModule php5_module /usr/lib/apache2/modules/libphp5.so

    And /etc/apache2/mods-available/php5.conf:

        <IfModule mod_php5.c>
            <FilesMatch "\.ph(p3?|tml)$">
                SetHandler application/x-httpd-php
            </FilesMatch>
            <FilesMatch "\.phps$">
                SetHandler application/x-httpd-php-source
            </FilesMatch>
            <IfModule mod_userdir.c>
                <Directory /home/*/public_html>
                    php_admin_value engine Off
                </Directory>
            </IfModule>
        </IfModule>

    What am I missing? This is a server with modified virtual hosts and the like, so I might have changed some settings which cause this problem, but simply purging and reinstalling is not an option so far, since the configuration is quite extensive. Any help would be great. Thanks.
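
    "Browser downloads the PHP source" almost always means the SetHandler mapping isn't being applied. A few quick checks (a sketch for this Debian layout):

        apache2ctl -M | grep php                 # is php5_module actually loaded?
        ls -l /etc/apache2/mods-enabled/php5.*   # are both symlinks present?
        # Re-enable and restart to rule out a stale state:
        sudo a2enmod php5 && sudo /etc/init.d/apache2 restart

    If the module is loaded, the next suspects are a vhost or .htaccess overriding the handler for that tree (e.g. a stray RemoveHandler/AddType, or php_admin_value engine Off as in the mod_userdir block above).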

  • No LAN and SMB access, and Explorer not responsive, when using a second connection

    - by Lorenzo
    I apologize if this is a duplicate question; I know that there are several questions about multiple connections (LAN + LAN and LAN + dialup), but I haven't been able to find one that fits my scenario.

    I'm still using Windows XP on my corporate laptop, and I'm connected to the corporate LAN via Ethernet. The LAN NIC has a public IP address, although not accessible externally, obtained via the corporate DHCP server. This connection is firewalled and requires a proxy to access the Internet. To access Internet sites blocked by the corporate firewall, I use my smartphone via USB tethering. It is seen as a new LAN interface, and I get a private IP address (192.168.x.x).

    There are two problems:

    1. The LAN is not accessible, as the default gateway goes to the tethering NIC. I'd like to solve this, but I can live with it.
    2. My PC becomes unresponsive if I use Windows Explorer to view local files, or even when I open the start menu. I guess that this is caused by attempts to connect to a mapped network drive. But I disabled the "Client for Microsoft Networks" on the tethering NIC. Why does the system still hang?

    Of course, if I disable the Ethernet NIC, Explorer stops hanging. If you need further details, add a comment. Thanks!
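
    For problem 1, a static route can keep corporate subnets on the Ethernet NIC while the tether keeps the default route (a sketch; the subnet and gateway below are placeholders for the actual corporate values):

        rem Route the corporate range via the Ethernet NIC's gateway:
        route add 10.0.0.0 mask 255.0.0.0 10.1.2.254 metric 1
        rem Make it survive reboots with: route -p add ...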

  • mod_fcgi in virtualmin: graceful kill fail, sending SIGKILL?

    - by mgjk
    Yesterday around 1am, our server ground to a crawl. This doesn't happen often, but I'm trying to get to the bottom of it. There was no unusual traffic volume and no unusual processes running; all of a sudden the server started killing fcgid processes:

        [Thu Aug 02 01:17:32 2012] [warn] mod_fcgid: process 26460 graceful kill fail, sending SIGKILL

    ... for as many fcgid processes as we have. CPU idle fell to 0% and I/O seemed to take up most of the load. The issue lasted about 5 minutes. I suspect there was some swap activity, although I'm not sure if it was due to killed processes being swapped in to die, or if it was because some process ramped up memory usage faster than my process-watching scripts can see them. The oom-killer wasn't triggered (at least it's not logged), so I think this was Apache for some reason restarting the processes.

    This is not regular, and nothing obvious appears in cron. Is there a normal Apache process which might cause this? We run dozens of different sites, and it was late at night, so volume was very, very low (maybe 200 requests in a 10-minute period).
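
    That warning is mod_fcgid's normal escalation: it sends SIGTERM to idle or expired children and follows up with SIGKILL when they don't exit within the grace period — timeouts that are easy to hit if the children were stuck in swap. The relevant knobs look like this (a sketch; the values are assumptions, and the Fcgid* directive names apply to mod_fcgid 2.3+):

        # In the mod_fcgid configuration:
        FcgidIdleTimeout 300            # how long an idle child may live
        FcgidProcessLifeTime 3600       # hard cap on a child's lifetime
        FcgidMaxProcesses 100           # global cap, limits swap pressure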

  • Internal Code Signing: Key Distribution, or Certificate Server?

    - by Myrddin Emrys
    I should first note that we have nobody in IT with significant familiarity with self-signed certification. We have a moderately sprawling network (one forest, many locations), and we are now rolling out internal code signing; until now users have run untrusted code, or we even disabled(!) the warnings. Intranet applications, scripts, and sites will now be signed with self certification.

    I am aware of two obvious ways we can deploy this: distributing the certificates directly via a group policy, or setting up a certificate server. Can someone explain the trade-offs between these two methods?

    - How many certs before the group policy method is unwieldy?
    - Are they large enough that remote users will have issues?
    - Does the group policy method distribute duplicates on every login?
    - Is there a better method I am not aware of?

    I can find a lot of documentation on certificates and various ways to create them, but I have not been able to find something that summarizes the difference between the distribution methods and what criteria make one or the other superior.
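
    For scale, note that what gets pushed to clients is only the certificate (the public part), typically a few KB each. The GPO route drops these into the domain members' trusted stores; a sketch of the equivalent one-off publish to Active Directory (assumes Enterprise Admin rights and a signing-root .cer file):

        rem Publish the signing root so all domain members trust it:
        certutil -dspublish -f CodeSigningRoot.cer RootCA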

  • How can the route between two private IPs go via public IPs?

    - by Gilles
    I'm trying to understand what this output from traceroute means. I changed the IP addresses for privacy but retained the public/private IP range distinction.

        traceroute.db -e -n 10.1.1.9
        traceroute to (10.1.1.9), 30 hops max, 60 byte packets
         1  10.0.0.1  0.596 ms  0.588 ms  0.577 ms
         2  10.0.0.2  1.032 ms  1.029 ms  1.084 ms
         3  10.0.0.3  3.360 ms  3.355 ms  3.338 ms
         4  23.0.0.4  3.974 ms  4.592 ms  4.584 ms
         5  23.0.0.5  13.442 ms  13.445 ms  13.434 ms
         6  45.0.0.6  13.195 ms  12.924 ms  12.913 ms
         7  67.0.0.7  52.088 ms  51.683 ms  52.040 ms
         8  10.1.1.8  46.878 ms  44.575 ms  44.815 ms
         9  10.1.1.9  45.932 ms  45.603 ms  45.593 ms

    The first 10.0.* range is inside my organisation. The last 10.1.* range is another site of my organisation. The intermediate addresses belong to various ISPs. I expect that there is some kind of VPN between the two sites, but I don't know much about our network topology.

    What I don't understand is how the route can go from a private address through public addresses back into private addresses. Searching led me to "Public IPs on MPLS Traceroute", which gives a possible explanation: MPLS. Is MPLS the only possible or most likely explanation? Otherwise, what does this tell me about our network infrastructure?

    Bonus question for my edification: in this scenario, who is generating the ICMP TTL exceeded packets, and if relevant, who is mangling their source and destination addresses?

  • Apache Virtual Host with directory aliases

    - by brechtvhb
    I'm trying to set up a dynamic virtual host in Apache with a directory alias pointing to a different path for every domain. Here's what I'm trying to achieve. Say I have 2 domains:

    - www.domain1.com
    - www.domain2.com

    I want both to point to the same index.php file (C:/cms/index.php). Now the hard part... I want directories or certain file types to point to a different path for each domain. Example:

    - www.domain1.com/layout -> C:/store/www.domain1.com/layout
    - www.domain2.com/layout -> C:/store/www.domain2.com/layout
    - www.domain1.com/image.png -> C:/store/www.domain1.com/image.png
    - www.domain2.com/image.png -> C:/store/www.domain2.com/image.png

    However, the admin directory should point to the same path again for all sites:

    - www.domain1.com/admin -> C:/cms/admin
    - www.domain2.com/admin -> C:/cms/admin

    Is there a way to achieve this kind of behaviour in Apache 2.2 without having to create a VirtualHost entry for each new domain?
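
    mod_vhost_alias can key the document root off the Host header, with mod_alias pinning the shared pieces; a sketch (assumes both modules are loaded — the precedence between mod_alias and mod_vhost_alias is worth verifying on 2.2):

        <VirtualHost *:80>
            ServerAlias *
            UseCanonicalName Off
            # Per-domain files live under C:/store/<hostname>/...
            VirtualDocumentRoot "C:/store/%0"
            # Shared CMS pieces, identical for every domain:
            Alias /admin "C:/cms/admin"
            Alias /index.php "C:/cms/index.php"
        </VirtualHost>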

  • Windows 7 ICS client web failure

    - by n8wrl
    I have several Windows 7 PCs connected on a LAN via a hub. One has a Verizon 3G connection and works great. I have Internet Connection Sharing enabled on it, which automagically set the LAN connection to 192.168.137.1 and enabled DHCP. I am trying to get the client PCs working one at a time; the others are off.

    The client is able to:

    - get an IP via DHCP with correct settings;
    - ping any web address I can throw at it, so DNS and routing are working;
    - run Windows Update.

    But websites hang in IE. All but google.com! I type www.msn.com, microsoft.com, amazon.com, etc. All ping via a cmd window, but IE just hangs - it says website found, but the green progress bar just slowly creeps and no content displays. www.google.com comes up even after clearing the browser and DNS cache. I am pulling my hair out - what am I missing?

    EDIT: After some more gyrations with a router, I'm back to ICS. Same symptoms, only now I have an answer to Andrew's question: YES, I can do Google searches, but clicking on any of the result links hangs! I let one sit for half an hour with no timeout or error.
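
    Small pages working while larger ones hang is the classic signature of an MTU black hole on a shared 3G link (an assumption, but cheap to test). From a client (step the size down until it succeeds):

        ping -f -l 1472 www.msn.com
        rem "Packet needs to be fragmented but DF set" => path MTU < 1500.
        rem Lowering the client NIC's MTU is one workaround:
        netsh interface ipv4 set subinterface "Local Area Connection" mtu=1400 store=persistent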

  • Where do vendors publish internal transfer rates of HDDs?

    - by red888
    So I've started to dig into storage fundamentals and found that in order to calculate the IOPS of a HDD you need to know the internal transfer rate of the drive (the time it takes data to move from the platters to the internal disk cache). I went on Newegg and even a few vendor sites and could not find this figure published for any HDDs. Is it sometimes called something else? Take this link to a Seagate HDD, for instance: nowhere do I see "internal transfer rate", but I do see something called "Sustained Data Rate OD" - is that the same thing?

    Just so you know where I'm getting this from (book: "Information Storage and Management: Storing, Managing..."), consider an example with the following specifications provided for a disk:

    - The average seek time is 5 ms in a random I/O environment; therefore, T = 5 ms.
    - Disk rotation speed of 15,000 revolutions per minute, or 250 revolutions per second - from which rotational latency (L) can be determined, which is one-half of the time taken for a full rotation, or L = (0.5/250 rps, expressed in ms).
    - 40 MB/s internal data transfer rate, from which the internal transfer time (X) is derived based on the block size of the I/O - for example, an I/O with a block size of 32 KB; therefore X = 32 KB/40 MB.

    Consequently, the time taken by the I/O controller to serve an I/O of block size 32 KB is TS = 5 ms + (0.5/250) + 32 KB/40 MB = 7.8 ms. Therefore, the maximum number of I/Os serviced per second, or IOPS, is (1/TS) = 1/(7.8 × 10^-3) = 128 IOPS.
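
    The book's arithmetic, spelled out in a few lines of Python (values straight from the quoted example):

        # Service time for one 32 KB I/O on the example drive:
        seek_ms     = 5.0                        # average seek time, T
        latency_ms  = 0.5 / 250 * 1000           # half a rotation at 250 rps = 2 ms
        transfer_ms = 32.0 / (40 * 1024) * 1000  # 32 KB at 40 MB/s, about 0.78 ms
        ts_ms = seek_ms + latency_ms + transfer_ms
        print(round(ts_ms, 1), round(1000 / ts_ms))  # -> 7.8 ms, ~128 IOPS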

  • This web site needs a different Google Maps API key. A new key can be generated at http://code.googl

    - by MJI
    Apologies in advance if this is the wrong place to post. I tried searching for this issue, and all that seemed to come up were questions posted by people who had this issue with their web pages. I couldn't find questions related to this issue from a layperson's perspective.

    I'm not a developer. I have no domain, nor do I wish to have one at this time. Rather, I'm just a regular person who likes to upload photos to some photo-related sites. My uploading process constantly gets interrupted by one of these annoying API errors. I get it at least twice: once when I click the page to upload, and again right after it has uploaded. It also pops up if I go to edit a photo or delete it. This interrupts my browsing experience until I click OK.

    I just want a fix for the annoyance without having to register for a key. I tried before, and it required a web domain. I'd rather not have to create a domain and go through such hoops just to fix this. Is there a solution for this problem that doesn't require registration?

    Another thing to note: I have used two computers. One has the message pop up and the other doesn't. What is different about the two computers?

  • Isolating Apache virtualhosts from the rest of the system

    - by JesperB
    I am setting up a web server that will host a number of different websites as Apache VirtualHosts; each of these will have the possibility to run scripts (primarily PHP, possibly others).

    My question is how to isolate each of these VirtualHosts from each other and from the rest of the system. I don't want e.g. website X to read the configuration of website Y or any of the server's "private" files.

    At the moment I have set up the VirtualHosts with FastCGI, PHP and suEXEC as described here (http://x10hosting.com/forums/vps-tutorials/148894-debian-apache-2-2-fastcgi-php-5-suexec-easy-way.html), but suEXEC only prevents users from editing/executing files other than their own - the users can still read sensitive information such as config files.

    I have thought about removing the Unix global read permission for all files on the server, as this would fix the above problem, but I'm not sure if I can safely do this without disrupting the server's function. I also looked into using chroot, but it seems that this can only be done on a per-server basis, and not on a per-virtual-host basis.

    I'm looking for any suggestions that will isolate my VirtualHosts from the rest of the system.

    PS: I'm running Ubuntu 12.04 server.
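
    Since suEXEC already runs each site's scripts as a dedicated user, dropping world-read per site tree gets most of the way there without a global permissions change (a sketch; the site path and user names are assumptions):

        sudo chown -R sitex:sitex /var/www/siteX
        sudo chmod -R o-rwx /var/www/siteX
        # Apache's own user still needs to read static assets; one option is
        # adding www-data to each site's group and keeping group read:
        sudo chmod -R g+rX /var/www/siteX
        sudo usermod -aG sitex www-data

    This blocks site X's PHP (running as its own user) from reading site Y's tree, while Apache itself can still serve every site.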

  • IIS not responding with SSL Server Hello

    - by Damien_The_Unbeliever
    I'm having difficulty getting a particular IIS machine to "do" SSL. This is a test environment (one of many) which we've set up "the same" as we have many times previously, but it just doesn't seem to be working.

    The server is Windows Server 2003 (Version 5.2, Build 3790.srv03_sp2_gdr.100216-1301: Service Pack 2). IIS is hosting 4 sites (including the default site), but only one site is configured for SSL. We're using the same SSL certificate we use on all of our other servers (it's a wildcard cert).

    (Don't think this is relevant, but including it anyway:) We've configured the site to require SSL. It has a subdirectory that doesn't require SSL but has an ASP page that redirects into SSL. The 403;4 error page for the site is hooked up to this ASP page (this is how we do our non-HTTPS-into-HTTPS transition).

    Using Microsoft Network Monitor (3.3), I've just run a session against a server where SSL is working. It can pull apart the SSL exchange as the following messages:

        SSL: Client Hello
        SSL: Server Hello. Certificate. Server Hello Done
        SSL: Client Key Exchange. Change Cipher Spec. Encrypted Handshake Message.
        SSL: Change Cipher Spec. Encrypted Handshake Message

    However, on our problem server, I only see:

        SSL: Client Hello.

    The next packet from the server (from port 443, so it's listening and responding there) contains only 60 bytes, and just seems to have the TCP headers and not much else (but I'm by no means an expert at deciphering these things).

    So, where do I look next? Or what additional information do I need to add to this question, and where do I find it?
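
    On Server 2003, the certificate binding lives in the kernel HTTP listener, and a missing or broken binding can produce exactly this "Client Hello, then nothing useful" pattern. One check worth running (httpcfg.exe ships in the Server 2003 Support Tools):

        httpcfg query ssl
        rem Compare the Hash value for this site's IP:443 entry against the
        rem wildcard cert's thumbprint in the Certificates MMC snap-in.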

  • Any dangers in using DDR memory with a higher frequency than the FSB?

    - by raw_noob
    I'm looking to upgrade memory in an older motherboard. The processor is an AMD Sempron 2500+ with a maximum speed of 333/166 MHz. The motherboard is an MSI MS-7061 (KV3M-V), which accepts up to 2 GB of DDR memory (maximum PC2700) in 2 slots and has a maximum FSB of 333 MHz. The board does not have dual-channel support.

    Existing memory includes a stick of 512 MB PC3200, which seems to be running OK (presumably at PC2700) but is rated 200 MHz, which is below the FSB speed. The other stick is 256 MB PC2100/133 MHz, again below the FSB speed. (All figures from CPU-Z.)

    I have a chance to acquire a single used stick of PC3200/400 MHz memory very cheaply. Crucial's system scanner seems to suggest that this will be OK with my system, but other sites have suggested that running memory with a higher frequency than the FSB can cause instability. My questions:

    - Is this true? Would I be better off waiting until I can buy the correct PC2700/333 MHz stick?
    - I'm assuming that the mixed memory I have at present is running as 768 MB at 133 MHz. Is this a reasonable assumption? If so, would you expect the performance differences between 768 MB/133 MHz and 1 GB/333 MHz to be very noticeable?
    - If I install the new 1 GB/400 or 333 MHz stick in slot 1, am I right in thinking that adding back the existing 512 MB/200 MHz stick in slot 2 would pull the whole 1.5 GB of system memory down to 200 MHz? If so, which would be better: 1.5 GB/200 MHz, or the single 1 GB stick at the full 333 MHz that the FSB permits? Is more headroom more important than extra speed?

    Any help - or even opinions - gratefully received. I can't find reliable information, and I can't afford to make expensive mistakes.

  • Could hybrid SSD + HDD be made with fixed internal partitions?

    - by Aaron
    I was pretty close to getting Seagate's Momentus XT but have been scared off by the many problems reported on forums and feedback sites, especially in MacBook Pros. So I'm waiting for mark 2, which I'm assuming will come out this year with some extra flash and better reliability.

    What would suit me better, though, is a 32+500 hybrid drive where I have more control over what is on the flash drive and what is on the disk drive. So there are 2 physical partitions within the one 2.5" hard drive enclosure, which use different media internally (32 GB for core files and 500 GB for data and multimedia). The partitions would be locked so they can't be changed. Or, even better, the disk driver just makes them appear as two disks to the OS that share the same bus... Perhaps it's OK if the BIOS just sees the first drive until the OS is loaded.

    Is either of these technically possible? Obviously it would be difficult to market outside of the enthusiast segment.

    The SSD memory modules can be pretty small, right? So they could even make them a card that plugs into a secondary connection on the enclosure. That would be good for computer builders as well as for upgrading and recoverability. Then future operating systems could recognise these system SSD drives and automatically install the OS + swap files on them, while placing document libraries on the larger data drive.

    While in the longer term HDDs will probably disappear, there will always be a trade-off between speed, storage size and expense.

  • Trouble with Russian PCs on my WiFi

    - by hogni89
    I have created a WiFi hotspot for the local community. The problem is that some Russian PCs (Windows XP, Windows Vista and Windows 7) can't get an Internet connection (we have a lot of Russian fishing vessels / cargo ships passing by). The PCs obtain a valid IP address, and some of them even manage to send a few packets - but none of them are usable on the network. They all say "Limited internet access" or "Ограниченный доступ в Интернет". The thing these PCs have in common is that they all run a Russian installation of Windows.

    No one else has problems with the WiFi hotspot - Danish and English Windows, Linux and OS X all work like a charm. Can it be that there is a difference between the Danish/English Windows installations compared to the Russian installation?

    EDIT: They can't ping the router (one PC got one response - ONCE), they can't access any sites, and Windows never asks "Is this network public, home or work?".

    PS: The hotspot is an airMAX Rocket M from Ubiquiti Networks, Inc. (www.ubnt.com)

  • IIS7 can't read web.config on shared Mac filesystem

    - by RobG
    I'm running a VirtualBox-virtualized Windows 2008 Server on my Mac; I just finished setting it up today. On it, I have SQL Server 2008, IIS and ColdFusion 9.

    I want to serve websites from my Mac filesystem (for development purposes). So I created a new website in IIS and pointed it at the appropriate path using a UNC path, \\vboxsvr\rob\Sites\testsite, which contains the ColdFusion code and a web.config file. When I attempt to modify the file at all, or view the site in a web browser, I get an error:

        HTTP Error 500.19 - Internal Server Error
        The requested page cannot be accessed because the related configuration data for the page is invalid.

    I did some googling and found several similar problems, but nothing exactly like I have. The closest one seemed to indicate permissions, so I recreated the site and set it up to allow the Administrator (in Windows) to access the stuff. That didn't help. I can read/modify the files just fine from within Windows, but IIS itself can't seem to do it. What do I need to do to fix this? Thanks!
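
    With content on a UNC share, a 500.19 usually means the worker process identity can't read web.config over the share. Giving the site explicit connect-as credentials is the common fix (a sketch; the site name and account are placeholders):

        %windir%\system32\inetsrv\appcmd.exe set vdir "testsite/" -userName:VMHOST\rob -password:secret

    The same setting lives in IIS Manager under the site's Basic Settings -> "Connect as...".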

  • Recommended offline on-demand virus scanners

    - by ashh
    I have never run full anti-virus on my Windows XP systems. Instead, I use various anti-malware tools to manually perform scans every few weeks. This approach, combined with Windows updates and general care about what websites I visit and what files I download, has kept me 99% free of problems.

    The remaining 1% has occurred when I download files that I know may contain malware but still decide the risk is worth it. When, on 2 occasions in 10 years, I did get caught doing this, I realised that being able to easily scan the files would most likely have avoided the infection.

    I don't need, or want, to run "stay resident" anti-virus. Also, the online scanners such as Kaspersky etc. limit uploads to small files, so these are not always useful. In summary, I would like to simply be able to download a file and then manually initiate an on-demand anti-virus scan on the downloaded file only. I'm sure some/most anti-virus products do both; however, once again, I don't really want to pay for, or need, the stay-resident part.

    Any recommendations (commercial or free)?

    UPDATE: This is not an exact duplicate, nor a possible duplicate. I searched for and read other questions on anti-virus here at SuperUser and found none that answered my question. I am specifically asking about anti-virus scanners that run ON-DEMAND locally on the computer, not online scanners.

  • All network devices freezing when Airport Extreme Base Station is connected. Any ideas?

    - by Jon
    I've been troubleshooting this issue for a while, and through a series of events have narrowed it down to my AirPort Extreme base station. I like this router, since I'm able to connect to IPv6 sites without any insane configuration (my alternate router is too old and doesn't support v6).

    My question is: has anyone else had this issue, and if so, how was it resolved? If not, can you recommend a good IPv6 router?

    Here is how I came to the conclusion that it is the router. Devices: Xbox 360, HTC Incredible, home-built machine running FreeBSD, home-built machine running Ubuntu 10.04.

    1. Noticed freezing on the Ubuntu box.
    2. Noticed freezing on the Xbox 360.
    3. Noticed freezing on the HTC Incredible (only when connected to my network wirelessly).

    The above all happened at random times over the past few weeks. Over the last few days, I was playing Xbox and noticed that the Xbox and Ubuntu machines both froze. I picked up my phone, and it was also frozen. I reset all devices, power-cycled my router, and all was fine again. About two hours later, it happened again (I was playing Forza III; the Xbox froze; I went to the Ubuntu box and it was frozen; unfortunately, the HTC phone was not connected wirelessly, and the FreeBSD box was turned off).

    I can't even begin to imagine what a router could be doing to freeze devices with such differing hardware/software/OS, and I feel absurd for coming to this conclusion, but I have nothing else. I hooked up my archaic Netgear router, and have had no problems since. :(
