Search Results

Search found 19061 results on 763 pages for 'load factor'.


  • SQL Azure Service Issues – 10.27.2012 (Restored Now)

    - by ToStringTheory
    Please note that if you have a Windows Azure website, or use SQL Azure, your site may currently be experiencing downtime. I just called in regarding one of my public-facing internet sites, because the site was failing to load anything but its error page, I couldn't connect to the database to inspect application error logs, and the Windows Azure Management portal wouldn't load the SQL Azure extension. The representative mentioned that they were also having problems updating the Service Dashboard, which shows service up/down time, so for now they are posting messages at http://account.windowsazure.com. Please note that this issue may only be affecting certain regions. Lastly, I may have misheard the representative, but he said that the outage was being categorized as a level 8 and, if I heard correctly, that level 8 was the worst level. I can't say for sure on this, though, because the phone connection to their support number was bad, with large amounts of white noise. Good luck!

    Update: It appears that this outage may also be affecting the following services: SQL Database, Service Bus, Datamarket, Windows Azure Marketplace, Shared Caching, Access Control 2.0, and SQL Reporting. The note on the account page says the South Central US region; however, I believe the representative I spoke to also mentioned North Central. As I said before, though, the connection was bad.

    Update 2: My site regained connectivity about an hour ago, and it appears that the service dashboard is back in operation with correct status and history. It does appear that I misheard on the phone regarding multiple regions, so chances are this only affected a percentage of the platform. If this WAS their worst level of problem, they really got it fixed and back up pretty fast. All in all, I understand that it is inherent for a complex system such as Azure to have ups and downs, but at the end of the day I am still happy to support Azure to its fullest!

    Read the article

  • How to Properly Make use of Codeigniter's HMVC

    - by Branden Stilgar Sueper
    I have been having problems wrapping my brain around how to properly utilize the modular extension for CodeIgniter. From what I understand, modules should be entirely independent of one another, so I can work on one module and not have to worry about what module my teammate is working on. I am building a frontend and a backend for my site, and am confused about how I should structure my applications. The first part of my question: should I use the app root controllers to run modules, or should users go directly to the modules by URLs? I.e., in my welcome.php:

        public function index() {
            $this->data['blog'] = Modules::run( 'blog' );
            $this->data['main'] = Modules::run( 'random_image' );
            $this->load->view('v_template', $this->data);
        }

        public function calendar() {
            $this->data['blog'] = Modules::run( 'blog' );
            $this->data['main'] = Modules::run( 'calendar' );
            $this->load->view('v_template', $this->data);
        }

    The second part of my question: should I create separate front/back end module folders?

        -config
        -controllers
            welcome.php
            -admin
                admin.php
        -core
        -helpers
        -hooks
        -language
        -libraries
        -models
        -modules-back
            -dashboard
            -logged_in
            -login
            -register
            -upload_images
            -delete_images
        -modules-front
            -blog
            -calendar
            -random_image
            -search
        -views
            v_template.php
            -admin
                av_template.php

    Any help would be greatly appreciated.
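
    As a footnote to the question above, a minimal sketch of a URL-addressable HMVC module controller; the MX_Controller base class comes from wiredesignz's Modular Extensions, but the module, view and data names here are assumptions, not the poster's code:

        <?php
        // application/modules/blog/controllers/blog.php (hypothetical module)
        class Blog extends MX_Controller
        {
            public function index()
            {
                // Fetch data however you like; a model call would go here.
                $data['posts'] = array();

                // Loading the view echoes it; Modules::run('blog') captures
                // that output in a buffer, so the same controller serves both
                // a direct /blog URL and an embedded call from welcome.php.
                $this->load->view('blog_view', $data);
            }
        }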

    Read the article

  • Printing problem in Silverlight 4.0 RC - loading images in code behind

    - by Jacek Ciereszko
    A few days ago I faced a problem with printing in the new Silverlight 4 RC. When you try to dynamically load an image (in code-behind) and print it, it doesn't work: the sheet of paper comes out blank.

    Problem

    XAML file:

        <Image x:Name="image" Stretch="None" />

    XAML.cs:

        image.Source = new BitmapImage(new Uri(imageUri, UriKind.RelativeOrAbsolute));

    Print:

        var pd = new PrintDocument();
        pd.PrintPage += (s, args) =>
        {
            args.PageVisual = image;
        };
        pd.Print();

    Result: blank paper.

    Solution

    What you need to do is force the Silverlight engine to load the image before printing starts. To accomplish that, I propose simply reading the PixelWidth value: because your code asks for the image's PixelWidth, the image has to be loaded first.

    XAML.cs:

        BitmapImage bImage = new BitmapImage(new Uri(imageUri, UriKind.RelativeOrAbsolute));
        image.Source = bImage;
        InitControl(imageUri, movieUri, isLeft);
        int w = bImage.PixelWidth;
        int h = bImage.PixelHeight;

    DONE!

    Jacek Ciereszko

    Read the article

  • ODI 12c - Getting up and running fast

    - by David Allan
    Here's a quick A-B-C to show you how to quickly get up and running with ODI 12c, from getting the software to creating a repository via wizard or the command line, then installing an agent for running load plans and the like.

    A. Get the software from OTN and install Studio. Check out this viewlet here for quickly doing this.

    B. Create a repository using the RCU; check out this viewlet here, which uses the FMW Repository Creation Utility. You can also silently create (and drop) a repository using the command line, which is really easy:

        .\rcu -silent -createRepository -connectString yourhost:1521:orcl.st-users.us.oracle.com -dbUser sys -dbRole sysdba -useSamePasswordForAllSchemaUsers true -schemaPrefix X -component ODI -component IAU -component IAU_APPEND -component IAU_VIEWER -component OPSS < passwords.txt

    where the passwords file contains info such as:

        sysdba_passwd
        newschema_passwd
        odi_user_passwd
        D
        workreposname
        workrepos_passwd

    You can find details about the silent use of RCU here in the FMW documentation.

    C. Quickly create an agent for executing load plans and the like; there is a great OBE for this, check it out here. If you are on your laptop and just want as minimal an agent as possible, then this link is a must.

    With these three steps you are ready to get to the fun stuff! Check out more OBEs here - keep on the lookout for more!

    Read the article

  • OSX Menu bar doesn't appear until opening an application

    - by gms8994
    When I boot my MBP, the menu bar doesn't appear. When I open Mail.app or Safari, the menu bar appears. I've searched for a bit, and nothing seems to talk about this. Is there a way to fix this?

    UPDATE

    From the Console logs:

        3/29/12 7:05:10.037 AM com.apple.launchctl.Aqua: load: option requires an argument -- D
        3/29/12 7:05:10.037 AM com.apple.launchctl.Aqua: usage: launchctl load [-wF] [-D <user|local|network|system|all>] paths...
        3/29/12 7:05:10.100 AM com.apple.launchd.peruser.501: (com.apple.launchctl.Aqua[153]) Exited with code: 1

    Read the article

  • Sar data not collected for 10 minutes

    - by Ichorus
    We have a RedHat server whose only job is to run a JBoss server. Monitors showed that memory usage spiked (we have the JVM limited to far less than the total memory on the system) and JBoss crashed. We restarted it and everything seems OK now. The odd thing is that the sar data for the 10 minutes leading up to the crash is simply not there. The load average got up to the 50s. I have seen severely busy systems (350+ load average) still collect sar data. Does anyone have any idea what could cause sar to stop collecting data?
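
    For context, on RedHat-family systems sar's history is gathered by the sysstat cron jobs, typically at 10-minute intervals; if cron itself cannot run during the incident (for example because the box is thrashing or out of memory), an interval simply never gets written. A typical default looks like the sketch below; the exact paths vary by release, so treat them as assumptions.

        # /etc/cron.d/sysstat (typical RHEL layout; paths vary by release)
        # Collect system activity data every 10 minutes
        */10 * * * * root /usr/lib64/sa/sa1 1 1
        # Summarize the day's data shortly before midnight
        53 23 * * * root /usr/lib64/sa/sa2 -A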

    Read the article

  • Using XML in a Flex Website to Improve SEO

    - by Laxmidi
    Hi, I've got a Flex 3 site called www.brainpinata.com that's a trivia game. Basically, everything in the site is pulled from a database: the questions, choices, and answers. So, unfortunately, Google doesn't index my content, and I'm trying to think of ways to improve the situation.

    A) If I took my database data and put it in an XML file in the website's root directory, would this work? Would it violate any Google policy? (The info would be the same as in the db, so nothing shady.) Would I have to "wire" the XML into my site, or would it be enough to just have the XML sitting in the root directory?

    B) Another idea is to use the noscript tag and load the XML content there. As I understand it, Google indexes the content that people who have Javascript turned off would see. I know Flex/ActionScript 3, but unfortunately I don't know how to load XML content with HTML. Does anyone know of an example where a Flex site uses XML for the noscript content?

    Thank you. -Laxmidi
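
    As a sketch of option B: plain HTML on its own cannot load XML, so some server-side step has to render the fallback markup. Assuming a PHP-capable host (the database name, table and columns below are all hypothetical), the wrapper page for the Flash embed could emit the same data inside a noscript block:

        <?php
        // Hypothetical sketch: crawlable fallback content for a Flex site.
        // Database, table and column names are assumptions.
        $pdo = new PDO('mysql:host=localhost;dbname=trivia', 'user', 'pass');
        ?>
        <noscript>
          <?php foreach ($pdo->query('SELECT question, answer FROM questions') as $row): ?>
            <p>
              <?php echo htmlspecialchars($row['question']); ?>:
              <?php echo htmlspecialchars($row['answer']); ?>
            </p>
          <?php endforeach; ?>
        </noscript>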

    Read the article

  • XNA 4.0: Problem with loading XML content files

    - by 200
    I'm new to XNA, so hopefully my question isn't too silly. I'm trying to load content data from XML. My XML file is placed in my Content project (the new XNA 4 thing) and is called "Event1.xml":

        <?xml version="1.0" encoding="utf-8" ?>
        <XnaContent>
          <Asset Type="MyNamespace.Event"> <!-- error on this line -->
            <name>1</name>
          </Asset>
        </XnaContent>

    My "Event" class is placed in my main project:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Xml.Linq;

        namespace MyNamespace
        {
            public class Event
            {
                public string name;
            }
        }

    The XML file is loaded by my main game class inside the LoadContent() method:

        Event temp = Content.Load<Event>("Event1");

    And this is the error I'm getting:

        There was an error while deserializing intermediate XML. Cannot find type "MyNamespace.Event"

    I think the problem is that, at the time the XML is being evaluated, the Event class has not been established, because one file is in my main project while the other is in my Content project (this being an XNA 4.0 project and such). I have also tried changing the build action of the XML file from Compile to Content; however, the program would then not be able to find the file and would give this other warning:

        Project item 'Event1.xml' was not built with the XNA Framework Content Pipeline. Set its Build Action property to Compile to build it.

    Any suggestions are appreciated. Thanks.

    Read the article

  • How much traffic can a Linux-based shaper handle?

    - by facha
    Hi, everyone. I have a Linux-based traffic shaper (iptables + tc htb policy). It works in bridge mode and shapes traffic based on IPs and ports (there are about 100 rules in the "mangle" chain of iptables). Right now its throughput is about 100 Mb/s (I don't remember the pps; there are about 800 users in the network). I was just wondering when I will hit the limit: how much traffic could a Linux-based shaper possibly get through it? If you have one under heavy load, could you please write what machine you use and what load it carries? If you have any other info about the subject, please write as well. Thanks in advance.
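
    For readers unfamiliar with the setup being described, a stripped-down sketch of the iptables-mark + tc htb pattern (interface, addresses, rates and class IDs are assumptions):

        # Mark one user's traffic in the mangle chain (values assumed)
        iptables -t mangle -A FORWARD -d 192.0.2.10 -j MARK --set-mark 10

        # HTB root qdisc plus a rate-limited class on the bridge interface
        tc qdisc add dev eth0 root handle 1: htb default 99
        tc class add dev eth0 parent 1: classid 1:10 htb rate 2mbit ceil 4mbit

        # Send marked packets into that class
        tc filter add dev eth0 parent 1: protocol ip handle 10 fw flowid 1:10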

    Read the article

  • BI Applications 7.9.6.3 and EBS 12.1.3 Vision: Integrated Demo Environments

    - by Mike.Hallett(at)Oracle-BI&EPM
    If you need a combined BI Applications + eBusiness Suite Applications demonstration environment, or one for proof-of-concept work with your customers, then these versions of images on Oracle VirtualBox are now available for partners to download and use. To get access to these images, partners must be OPN members, specialised in OBI or BI Apps.

    This is an integrated Demo / Test Drive / POC / Self-Enablement environment comprising two separate images (in English) representing the entire Oracle stack: Applications, Middleware, Database, Operating System and Virtual Machine.

    Minimum hardware requirements for each image to run separately: 4GB RAM. Minimum hardware requirements for both images to run concurrently: 8GB RAM, dual CPU, 64-bit OS.

    BI Applications 7.9.6.3 (Linux-based, running on Oracle VirtualBox and compatible with OVM). Image content:
    - BI Application Analytics demo data extracted from EBS 12.1.3 Vision for Financials and HR, using EBS 12.1.3 Vision (image supplied)
    - Built integration to the EBS 12.1.3 Vision image (provided)
    - Fully functional BI Applications 7.9.6.3 software install and configuration
    - The image can be connected to load any data from any other compatible source system
    - BI Apps demo data is based on OOTB EBS Vision 12.1.3
    - Configured to run BI Apps data loads for all other modules of EBS 12.1.3 Vision
    - Includes OBIEE sample demonstration content
    - Documented scripts for running presentations, demonstrations and Test Drives
    Image size: 34GB zipped, 84GB unzipped. Minimum hardware: 4GB RAM.

    EBS Vision 12.1.3 (Linux-based, running on Oracle VirtualBox and compatible with OVM). Image content:
    - eBusiness Suite (EBS) Applications Vision 12.1.3
    - Standard Vision instance with all given setups, configurations and data
    - Source system for BI Apps 7.9.6.3
    Image size: 76GB zipped, 300GB unzipped. Minimum hardware: 4GB RAM.

    Distribution: the VirtualBox images are posted on an external FTP server (BI Applications 7.9.6.3, EBS 12.1.3). To download, partners need to request the current password to access the images. To request the current ftp.oracle.com password and the password required to unzip the images, please email Marek Winiarski.

    Support contact: Marek Winiarski, Oracle Partner Solution Consultant.

    Read the article

  • Why Oracle Data Integrator for Big Data?

    - by Mala Narasimharajan
    Big Data is everywhere these days, but what exactly is it? It's data that comes from a multitude of sources: not only structured data, but unstructured data as well. The sheer volume is mind-boggling. A few examples of big data: climate information collected from sensors, social media information, digital pictures, log files, online video files, medical records and online transaction records. Embedded in big data is tremendous value, and being able to manipulate, load, transform and analyze big data is key to enhancing productivity and competitiveness. The value of big data lies in its propensity for greater in-depth analysis and data segmentation, in turn giving companies detailed information on product performance, customer preferences and inventory. Furthermore, by being able to store and create more data in digital form, "big data can unlock significant value by making information transparent and usable at much higher frequency." (McKinsey Global Institute, May 2011)

    Oracle's flagship product for bulk data movement and transformation, Oracle Data Integrator, is a critical component of Oracle's Big Data strategy. ODI provides automation, bulk loading, and validation and transformation capabilities for Big Data while minimizing the complexities of using Hadoop. Specifically, the advantages of ODI in a Big Data scenario are due to pre-built Knowledge Modules that drive processing in Hadoop. This leverages the graphical UI to load and unload data from Hadoop, perform data validations and create mapping expressions for transformations. The Knowledge Modules provide a key jump-start and eliminate a significant amount of Hadoop development. Using Oracle Data Integrator together with Oracle Big Data Connectors, you can simplify the complexities of mapping, accessing, and loading big data (via NoSQL or HDFS) while also correlating your enterprise data; this correlation may require integrating across heterogeneous and standards-based environments, connecting to Oracle Exadata, or sourcing via a big data platform such as Oracle Big Data Appliance. To learn more about Oracle Data Integration and Big Data, download our resource kit to see the latest in whitepapers, webinars, downloads, and more, or go to our website at www.oracle.com/bigdata

    Read the article

  • Guide to MySQL & NoSQL, Webinar Q&A

    - by Mat Keep
    Yesterday we ran a webinar discussing the demands of next-generation web services and how blending the best of relational and NoSQL technologies enables developers and architects to deliver the agility, performance and availability needed to be successful. Attendees posted a number of great questions to the MySQL developers, providing additional insights into areas like auto-sharding and cross-shard JOINs, replication, performance, client libraries, etc. So I thought it would be useful to post those below, for the benefit of those unable to attend the webinar.

    Before getting to the Q&A, there are a couple of other resources that may be useful to those looking at NoSQL capabilities within MySQL:
    - On-Demand webinar (coming soon!)
    - Slides used during the webinar
    - Guide to MySQL and NoSQL whitepaper
    - MySQL Cluster demo, including NoSQL interfaces, auto-sharding, high availability, etc.

    So here is the Q&A from the event.

    Q. Where does MySQL Cluster fit into the CAP theorem?
    A. MySQL Cluster is flexible. A single Cluster will prefer consistency over availability in the presence of network partitions. A pair of Clusters can be configured to prefer availability over consistency. A full explanation can be found in the MySQL Cluster & CAP Theorem blog post.

    Q. Can you configure the number of replicas? (The slide used a replication factor of 1.)
    A. Yes. A cluster is configured by an .ini file. The option NoOfReplicas sets the number of originals and replicas: 1 = no data redundancy, 2 = one copy, etc. Usually there's no benefit in setting it >2.

    Q. Interestingly, most (if not all) of the NoSQL databases recommend having 3 copies of data (the replication factor).
    A. Yes, with configurable quorum-based reads and writes. MySQL Cluster does not need a quorum of replicas online to provide service. Systems that require a quorum need >2 replicas to be able to tolerate a single failure. Additionally, many NoSQL systems take liberal inspiration from the original GFS paper, which described a 3-replica configuration. MySQL Cluster avoids the need for a quorum by using a lightweight arbitrator. You can configure more than 2 replicas, but this is a tradeoff between incrementally improved availability and linearly increased cost.

    Q. Can you have cross-node-group JOINs? Wouldn't that run the risk of flooding the network?
    A. MySQL Cluster 7.2 supports cross-node-group joins. A full cross-join can require a large amount of data transfer, which may bottleneck on network bandwidth. However, for more selective joins, typically seen with OLTP and light analytic applications, cross-node-group joins give a great performance boost and a network bandwidth saving over having the MySQL Server perform the join.

    Q. Are the details of the benchmark available anywhere? According to my calculations it results in approx. 350k ops/sec per processor, which is the largest number I've seen lately.
    A. The details are linked from Mikael Ronstrom's blog. The benchmark uses a benchmarking tool we call flexAsynch, which runs parallel asynchronous transactions. It involved 100-byte reads, of 25 columns each. Regarding the per-processor ops/s, MySQL Cluster is particularly efficient in terms of throughput per node. It uses lock-free, minimal-copy message passing internally, and maximizes ID cache reuse. Note also that these are in-memory tables; there is no need to read anything from disk.

    Q. Is access control (like table) planned to be supported for NoSQL access mode?
    A. Currently we have not seen much need for full SQL-like access control (which has always been overkill for web apps and telco apps). So we have no plans, though especially with memcached it is certainly possible to turn on connection-level access control. But table-level controls specifically are not planned.

    Q. How is the performance of the memcached API with MySQL compared to memcached + MySQL, or any other object cache like Ehcache with a MySQL DB?
    A. With the memcache API we generally see a memcached response in less than 1 ms, and a small cluster with one memcached server can handle tens of thousands of operations per second.

    Q. Can .NET access the memcached API?
    A. Yes, just use a .NET memcache client such as the enyim or BeIT memcache libraries.

    Q. Is row-level locking applicable when you update a column through the memcached API?
    A. An update that comes through memcached uses a row lock and then releases it immediately. Memcached operations like "INCREMENT" are actually pushed down to the data nodes. In most cases the locks are not even held long enough for a network round trip.

    Q. Has anyone published an example using something like PHP? I am assuming that you just use the PHP memcached extension to hook into the memcached API. Is that correct?
    A. Not that I'm aware of, but absolutely, you can use it with PHP or any of the other drivers.

    Q. For beginners we need more examples.
    A. Take a look here for a fully worked example.

    Q. Can I access MySQL using Cobol (Open Cobol) or C, and if so where can I find the coding libraries, etc.?
    A. There is a Cobol implementation that works well with MySQL, but I do not think it is Open Cobol. Also, there is a MySQL C client library that is a standard part of every MySQL distribution.

    Q. Is there a place to go to find help when testing and/or implementing the NoSQL access?
    A. If using Cluster, then you can use the [email protected] alias or post on the MySQL Cluster forum.

    Q. Are there any white papers on this?
    A. Yes, there is more detail in the MySQL Guide to NoSQL whitepaper.

    If you have further questions, please don't hesitate to use the comments below!
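
    To make the PHP answer above concrete, here is a minimal sketch using the standard PHP Memcached extension; the host, port and key names are assumptions, and pointing the client at the cluster's memcached server is what exposes the data:

        <?php
        // Minimal sketch: talk to a memcached endpoint from PHP.
        // Host/port and key names are assumptions.
        $mc = new Memcached();
        $mc->addServer('127.0.0.1', 11211);

        // Write and read a value through the memcache protocol.
        $mc->set('user:42:name', 'Alice');
        echo $mc->get('user:42:name'), "\n";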

    Read the article

  • Can AJAX in a CMS slow down your server

    - by Saif Bechan
    I am currently developing some plugins for WordPress, and I was wondering which route to take. Let's take an example: you want to display the last 3 tweets on your page.

    Option 1: You do things the normal way inside WordPress. Someone enters the website and, while generating the page, you fetch the tweets in PHP via the Twitter API and just display them where you want. The small problem with this is that you have to wait for the response from Twitter. This takes a few ms; no real problem, but this question is just out of curiosity.

    Option 2: Here you don't do anything in WordPress on the initial load, but you do have the API inside. You just generate the page, and as soon as the page is done on the client side, you make a small AJAX call back to the server via a WordPress plugin to fetch your latest tweets, i.e. asynchronously. The problem with this, IMO, is that you put much more stress on your server. For starters you have two HTTP requests instead of one. Secondly, the WordPress core has to load two times instead of one.

    Other options: Now I know there are a lot of other options: 1) getting the tweets directly via Javascript, with no stress on the server at all; 2) caching the tweets so they are fetched from the DB instead of using the API every time (see the sketch below); 3) getting the tweets from an AJAX call that is not a WordPress plugin; 4) many more.

    My question: comparing only options 1 and 2, which would be the better choice?
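
    As a side note on the caching idea in the list above, a minimal sketch using WordPress transients; the feed URL, key name and 5-minute lifetime are assumptions:

        <?php
        // Minimal sketch: cache fetched tweets with WordPress transients.
        // The endpoint URL, key and lifetime are assumptions.
        function get_cached_tweets() {
            $tweets = get_transient( 'my_latest_tweets' );
            if ( false === $tweets ) {
                $response = wp_remote_get( 'https://api.example.com/latest-tweets' );
                if ( is_wp_error( $response ) ) {
                    return array(); // fail soft; retry on the next request
                }
                $tweets = json_decode( wp_remote_retrieve_body( $response ), true );
                set_transient( 'my_latest_tweets', $tweets, 300 ); // 5 minutes
            }
            return $tweets;
        }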

    Read the article

  • How to migrate part of an SVN repository?

    - by dehmann
    How do you migrate part of an SVN repository into a new repository? To migrate the contents of a complete SVN repository into a new repository, one has to dump the old repository first:

        svnadmin dump /path/to/repository > repository-name.dmp

    and then load it into the new one using svnadmin load. But I'm not sure how to migrate just a part. Do I still have to dump the whole thing? Do I grep for the part that I want? To dump just myproject, I tried this, but it didn't work:

        svnadmin dump /path/to/repository/myproject

    Any ideas?
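
    For what it's worth, the standard tool for this job is svndumpfilter, which ships with Subversion: you still dump the whole repository, but filter the stream down to the paths you want before loading it. A sketch, with the repository paths assumed:

        # Dump everything, keeping only myproject in the stream
        svnadmin dump /path/to/repository | \
            svndumpfilter include myproject > myproject.dmp

        # Load the filtered dump into a fresh repository
        svnadmin create /path/to/new-repository
        svnadmin load /path/to/new-repository < myproject.dmp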

    Read the article

  • XP Computer won't start (Missing/Corrupt 'System' file) - recently added new hard drive

    - by qwerty2
    Hi all, pulling my hair out here. I recently replaced my D: 1TB drive (not a system drive) with a new 1.5TB drive. I loaded Windows XP, formatted the new drive, and it was showing as working fine alongside my C: Windows system drive. I restarted my machine and all of a sudden Windows doesn't load; instead I get:

        Windows could not start because the following file is missing or corrupt:
        \WINDOWS\SYSTEM32\CONFIG\SYSTEM

    I don't have the original XP installation CD, although I do have another copy of XP. When I try to boot to it, I get the blue 'STOP' screen after it attempts to load the setup utility for about a minute. Can someone please help? When I set up my new hard drive as a primary partition, did this somehow screw up my C: hard drive? Did it perhaps unmount it somehow? Any help would be fantastic. Thanks

    Read the article

  • VirtualBox lost files

    - by Paul Lloyd
    I have been running VirtualBox on a MacBook Pro for a year or more, with an old version of Windows XP running in the virtual machine. Recently my Mac's battery died completely while I had left VirtualBox / Windows running. When I restarted the Mac, VirtualBox would not load. I reinstalled a recent download of VirtualBox from a DMG file I had stored. Now VirtualBox starts, but when I try to start the virtual machine I get the following message:

        Failed to open a session for the virtual machine Windows XP.
        Failed to load VMMR0.r0 (VERR_SUPLIB_WRITE_NON_SYS_GROUP).
        Result Code: NS_ERROR_FAILURE (0x80004005)
        Component: Console
        Interface: IConsole {1968b7d3-e3bf-4ceb-99e0-cb7c913317bb}

    I have been backing up the Mac using Time Machine and have backups going back months. I really need to access the Windows files, as they include my accounts and other business-critical material. Any ideas, please? Or does anyone know where I can get support for this combination of hardware and software? Apologies, I am a novice in this area. Thanks in advance.

    Read the article

  • Resolution independence - resize on the fly or ship all sizes?

    - by RecursiveCall
    My game relies heavily on textures of various sizes, with some being full-screen, and is targeted at multiple resolutions. I found that resizing textures (downsizing) works quite well for this game's art style (it's not pixel art or anything like that). I asked my artist to ensure that all textures at the edges of the screen are created in such a way that they can safely "overflow" off screen; this means that aspect ratio is not an issue. So, with no aspect-ratio issues, I figured that I would simply ask my artist to create assets in very high resolution and then resize them down to the appropriate screen resolution. The question is, when and how do I do that? Do I pre-resize everything to common resolutions in Photoshop and package all assets in the final product (increasing the size of the download that the user has to deal with), then select the appropriate asset based on the detected resolution? Or do I ship with the largest set of textures, detect the resolution on load, set a render target, draw all downsized assets to it, and use that? Or, for the latter, do I use some sort of CPU-side algorithm to resize on game load?
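
    A minimal sketch of the render-target approach described above, assuming an XNA/MonoGame-style API (the Microsoft.Xna.Framework namespaces); the scale factor and the once-per-texture-at-load-time usage are assumptions:

        // Downscale a high-res source texture once at load time, then use
        // the render target as the in-game texture from then on.
        // Assumes using Microsoft.Xna.Framework and .Graphics.
        Texture2D DownscaleAtLoad(GraphicsDevice device, SpriteBatch batch,
                                  Texture2D source, float scale)
        {
            int w = (int)(source.Width * scale);
            int h = (int)(source.Height * scale);

            var target = new RenderTarget2D(device, w, h);
            device.SetRenderTarget(target);
            device.Clear(Color.Transparent);

            batch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
                        SamplerState.LinearClamp, null, null);
            batch.Draw(source, new Rectangle(0, 0, w, h), Color.White);
            batch.End();

            device.SetRenderTarget(null); // back to the backbuffer
            return target;                // RenderTarget2D derives from Texture2D
        }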

    Read the article

  • Windows Photo Viewer needs more RAM?

    - by Aren B
    OK, so I went to open a picture with the Windows Photo Viewer (default) application and it told me this:

        Windows Photo Viewer can't display this picture because there might not be enough memory available on your computer. Close some programs that you aren't using or free some hard disk space (if it's almost full), and then try again.

    So, looking at my 98% RAM usage (thank you, Visual Studio x8 + SQL Server), I rebooted my computer. [Screenshots of the post-reboot load and hard-disk usage appeared here in the original post.] So now I go to load up that image again. SAME MESSAGE. What the heck? So apparently 6GB isn't enough RAM to open a 29k image that loads perfectly fine in MSPaint, Paint.NET, and Photoshop. It's a .png and it's not corrupt. So my question is: what gives?

    Read the article

  • Cannot access a web page?

    - by ipkiss
    Hello all, recently I could not access the webpage bbc.co.uk anymore, while I can access other websites smoothly. At first, I thought there might be some problem with my laptop. However, if I use my laptop through my company network, I can load the page bbc.co.uk normally. Then I thought maybe my home ADSL blocks that web address. However, I tried another laptop on my home ADSL and it can load the page bbc.co.uk very fast. Now I do not know what the problem could be. Can anyone tell me, please? Thank you.

    Read the article

  • Cannot access a web page?

    - by user40748
    Hello all, recently I could not access the webpage bbc.co.uk anymore, while I can access other websites smoothly. At first, I thought there might be some problem with my laptop. However, if I use my laptop through my company network, I can load the page bbc.co.uk normally. Then I thought maybe my home ADSL blocks that web address. However, I tried another laptop on my home ADSL and it can load the page bbc.co.uk very fast. Now I do not know what the problem could be. Can anyone tell me what could cause the issue, please? Thank you.

    Read the article

  • Varnish does not recognize req.hash

    - by Yogesh
    I have Varnish 3.0.2 on RedHat, and service varnish start fails after I added a vcl_hash section. I ran varnishd and then loaded the VCL using vcl.load:

        vcl.load default default.vcl
        Message from VCC-compiler:
        Unknown variable 'req.hash'
        At: ('input' Line 24 Pos 9)
                set req.hash += req.url;
        --------########------------
        Running VCC-compiler failed, exit 1

    Here is the VCL (cat default.vcl):

        backend default {
            .host = "127.0.0.1";
            .port = "8080";
        }

        sub vcl_recv {
            if( req.url ~ "\.(css|js|jpg|jpeg|png|swf|ico|gif|jsp)$" ) {
                unset req.http.cookie;
            }
        }

        sub vcl_hash {
            set req.hash += req.url;
            set req.hash += req.http.host;
            if( req.http.Cookie == "JSESSIONID" ) {
                set req.http.X-Varnish-Hashed-On = regsub( req.http.Cookie, "^.*?JSESSIONID=([a-zA-z0-9]{32}\.[a-zA-Z0-9]+)([\s$\n])*.*?$", "\1" );
                set req.hash += req.http.X-Varnish-Hashed-On;
            }
            return(hash);
        }

    What could be wrong?
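
    For context, "set req.hash +=" is Varnish 2 syntax; in Varnish 3.x the vcl_hash section uses the hash_data() function instead, which is why the VCC compiler no longer recognizes the req.hash variable. A sketch of the same section in Varnish 3 syntax (the JSESSIONID handling is carried over from the question and is an assumption):

        sub vcl_hash {
            hash_data(req.url);
            hash_data(req.http.host);
            if (req.http.Cookie ~ "JSESSIONID") {
                set req.http.X-Varnish-Hashed-On = regsub( req.http.Cookie, "^.*?JSESSIONID=([a-zA-z0-9]{32}\.[a-zA-Z0-9]+)([\s$\n])*.*?$", "\1" );
                hash_data(req.http.X-Varnish-Hashed-On);
            }
            return (hash);
        }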

    Read the article

  • Ubuntu 9.04 Presario S4000NX Fan Speed

    - by Chris C
    I recently installed Ubuntu 9.04 on a Presario S4000NX, and the CPU fan is kept at max speed. With Windows XP installed, the fan speed would increase/decrease as required. I installed lm-sensors and ran sensors-detect. It recommended that I load the following modules, which I did:

        smsc47m192
        i2c-i801

    When running sensors-detect it gave me this strange message:

        Trying family SMSC
        Found SMSC LPC47M15x/192/997 Super IO Fan Sensors (but not activated)

    Running the sensors command gives me a list of voltages and the CPU temperature, but doesn't list any fans. After doing some internet research I then tried to load the smsc47m1 module, but I get the following error:

        FATAL: Error inserting smsc47m1 (/lib/modules/2.6.28-15-generic/kernel/drivers/hwmon/smsc47m1.ko): No such device

    The file smsc47m1.ko does exist in the listed folder. Any suggestions for getting the fan speed (and the noise) down in Ubuntu? Thanks. - Chris

    P.S. - I would have used better tags, but Server Fault wouldn't let me.

    Read the article

  • The Hot-Add Memory Hogs

    - by Andrew Clarke
    One of the more difficult tasks when virtualizing a server is to determine the amount of memory that the hypervisor should assign to the virtual machine. This requires accurate monitoring and, because of the consequences of setting the value too low, there is a great temptation to err on the side of over-provisioning. This results in fewer guest VMs; in fact, with more accurate memory provisioning, many virtual environments could support 30% more VMs.

    In order to achieve a better consolidation (aka VM density) ratio, Windows Server 2008 R2 SP1 has introduced what Microsoft calls 'Dynamic Memory'. This means that the start-up RAM assigned to guest virtual machines can be allowed to vary according to demand, changing dynamically while the VM is running, based on the workload of the applications running inside. If demand outstrips supply, then memory can be rationed according to the 'memory weight' assigned to the guest VM. By this mechanism, memory becomes a shared resource that can be reallocated automatically as demand patterns vary. Unlike VMware's Memory Overcommit technology, the sum of all the memory allocations to each virtual machine will not exceed the total memory of the host computer.

    This is fine for applications that are self-regulating in their demands for memory, releasing memory back into the 'pool' when not under peak load. Other applications, however, such as SQL Server Standard and Enterprise, are by nature memory hogs under high workload; they can grab hot-add memory whilst running under load and then never release it. This requires more careful setting-up, and the SQLOS team have provided some guidelines for configuring SQL Server in virtual environments.

    Whereas VMware's Memory Overcommit is well-proven in a number of different configurations, Hyper-V's 'Dynamic Memory' is new. So far, the indications are that it will improve the business case for virtualizing, and it is probably a far more intuitive technology for the average IT professional to grasp. It is certainly worth testing to see whether it works for you.

    Read the article
