Search Results


  • Is this the correct approach to an OOP design structure in php?

    - by Silver89
    I'm converting a procedural site to an OOP design to make the code more manageable in the future, and so far I have created the following structure:

        /classes
        /templates
        index.php

    with these classes:

        ConnectDB
        Games
        System
        User
        User -> Moderator
        User -> Administrator

    In index.php I have code that checks which $_GET values were passed, to determine which page content to build (it's early, so there's only one example and no default):

        function __autoload($className) {
            require "classes/" . strtolower($className) . ".class.php";
        }

        $db = new Connect;
        $db->connect();
        $user = new User();

        if (isset($_GET['gameId'])) {
            System::buildGame($_GET['gameId']);
        }

    This runs the buildGame() method in the System class, which looks like the following and then uses getters on the Game class to return values, such as $game->getTitle() in the template file templates/play.php:

        function buildGame($gameId) {
            $game = new Game($gameId);
            $game->setRatio(900, 600);
            require 'templates/play.php';
        }

    I also have .htaccess rewriting so that the actual game page URL works instead of passing the parameters to index.php. Are there any major errors in how I'm setting this up, or do I have the general idea of OOP correct?
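
    One detail worth flagging for anyone copying this pattern: __autoload() has been superseded by spl_autoload_register(), and raw $_GET values are best validated before they reach the domain classes. A minimal front-controller sketch along those lines, assuming the /classes layout above (the templates/home.php default route is a hypothetical addition):

    ```php
    <?php
    // Sketch only: spl_autoload_register() instead of the deprecated __autoload().
    spl_autoload_register(function ($className) {
        // e.g. "Game" -> classes/game.class.php
        require __DIR__ . '/classes/' . strtolower($className) . '.class.php';
    });

    // Validate the id before handing it to the System class.
    $gameId = filter_input(INPUT_GET, 'gameId', FILTER_VALIDATE_INT);

    if ($gameId !== null && $gameId !== false) {
        System::buildGame($gameId);
    } else {
        // Hypothetical default route so requests never fall through silently.
        require __DIR__ . '/templates/home.php';
    }
    ```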

    Read the article

  • Change XAMPP's htdocs web root folder to another one

    - by vitto
    I'm trying to change XAMPP's default web root /opt/lampp/htdocs to another directory, like /home/me/Dropbox/public_html, without success. I've edited the file /opt/lampp/etc/httpd.conf:

        # old line: DocumentRoot "/opt/lampp/htdocs"
        DocumentRoot "/home/me/Dropbox/public_html"
        # ...etc...
        # old line: <Directory "/opt/lampp/htdocs">
        <Directory "/home/me/Dropbox/Work/public_html">
        #
        # Possible values for the Options directive are "None", "All",
        # or any combination of:
        #   Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews
        # etc...

    I did this as described in this article: Using Ubuntu One to synchronise htdocs? Then I restarted Apache and got a 403 permission error on every page I requested with the web browser. So I changed the folder and file permissions to 755, as described in this article: What file permissions should I set on web root? The problem remains the same: I get the 403 error on every page I try to reach. I have the same problem on a Mac using XAMPP. Everything works fine if the web root remains the original /opt/lampp/htdocs. How can I change it correctly?

    Read the article

  • What could have caused a large traffic drop from Google in early May?

    - by Scott Schluer
    I have a website (www.equispot.com) that has been indexed for almost 2 years in Google. I managed to get myself on the first page (average position 6-8) on Google for my target keyword of "horses for sale" and held there pretty solidly for months. Suddenly, with no changes to the site, traffic from Google dropped like a rock in early May. I slowly fell in position until now I'm sitting at the bottom of page 4. I have never hired an SEO firm, have not used any "black hat" techniques that Google would have penalized me for in their May update, etc. I'm not familiar enough with SEO to know how to look at link profiles, etc. to tell if there's something wrong. I've run my site through a DNS checker and it came back with no errors. Google Webmaster Tools shows no messages or notices of any kind, just a drop in traffic. GWT also shows only 2 server errors and 1 404. Is there anyone who can tell me by quickly checking my domain if there's an obvious reason that my traffic would have fallen so far, something that I can fix?

    Read the article

  • Redirecting 2 or more domains to same hosting server

    - by mtk
    I have the domains A.com, A.co.in, and A.in, purchased from site X. I have a hosting space/account purchased from site Y, which provided me with 2 DNS name server entries to be set in the account at the registrar where I purchased the domains. I have successfully changed the name servers for A.com to these 2 entries, and I can see my index.html page when I hit A.com.

    Problem: Along the same lines, I changed the name servers for A.co.in and A.in to the same entries, but hitting those sites in a browser gives no response; I just see the browser's 'Site not found' page. Please let me know how to set this up so that when I hit any of the domains, the website is served from the hosting server. What am I doing wrong here?

    Note: It has been more than 3 days since I changed the DNS entries, so I don't think this is a DNS propagation issue, which is what some people suggested. Please give some detailed explanation, as I am very, very new to this. This is my first hosting ;) -Thanks

    Read the article

  • Calling a model from a controller from the 404 route [migrated]

    - by IrishRob
    I've got a problem here where I can't seem to load a method from a model after the page has been redirected after encountering a 404.

        Model name:  Category_Model
        Method name: get_category_menu()

    In my routes, I've updated the 404 override to:

        $route['404_override'] = 'whoops';

    I've also got my controller Whoops, which reads:

        <?php
        class Whoops extends CI_Controller {
            function index() {
                $this->load->model('Category_Model');
                $data['Categories'] = $this->Category_Model->get_category_menu();
                $data['main_content'] = $this->load->view('messages/whoops', null, true);
                $this->load->view('includes/template', $data);
            }
        }

    So when I navigate to a page that doesn't exist, I get the following error:

        Message: Undefined property: Whoops::$Category_Model
        Filename: controllers/whoops.php

    I've hard-coded the loading of the model into the controller here, even though I have it in my autoload, but no luck. Everything else with the site so far works, just this 404 problem. Any pointers would be great; I'm kinda new to CI, so go easy on me. Cheers.
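
    For reference, the usual suspects behind this error in CodeIgniter are a model file/class name mismatch (application/models/category_model.php holding class Category_Model) or a model constructor that doesn't call its parent. A minimal sketch of a model that loads cleanly from a controller like the one above — the actual query inside get_category_menu() is an illustrative assumption:

    ```php
    <?php
    // application/models/category_model.php -- lowercase file name,
    // class name matching exactly what the controller loads.
    class Category_Model extends CI_Model {

        public function __construct() {
            parent::__construct(); // without this, $this->db is never wired up
        }

        // Illustrative query; the real menu lookup will differ.
        public function get_category_menu() {
            return $this->db->get('categories')->result();
        }
    }
    ```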

    Read the article

  • Hobbyist transitioning to earning money from paid work?

    - by Chelonian
    I got into hobbyist Python programming some years ago on a whim, having never programmed before other than BASIC way back when, and little by little I have cobbled together what is, in my opinion, a nice little desktop application that I might try to get out there in some fashion someday. It's roughly 15,000 logical lines of code and uses Python, wxPython, SQLite, and a number of other libraries. It works on Windows and Linux (maybe Mac, untested), and I've gotten some good feedback about the application's virtues from non-programmer friends. I've also written a small application for collecting data in animal behavior experiments, plus an ad hoc tool to help generate a web page, and I've authored some tutorials. I consider my Python skills to be appreciably limited and my SQL skills very limited, but I'm not totally out to sea, either (e.g., I did FizzBuzz in a few minutes, wrote a "Monty Hall Dilemma" simulator in some minutes, etc.). I also put a strong premium on quality user experience; that is, the look and feel matters a lot to me, and I feel the software looks quite good. I know no other programming languages yet. I also know the basics of HTML/CSS (not considering them programming languages) and have created an artist's web page (described by a friend as "incredibly slick"... it's really not, though), and I have a scientific background. I'm curious: aside from directly selling my software, what's roughly possible, if anything, in terms of earning side money on gigs, or actually getting hired at some level in the software industry, with this general skill set?

    Read the article

  • Pull Request Conversations, Inline Diff Enhancements

    [Do you tweet? Follow us on Twitter @matthawley and @adacole_msft] We deployed a new version of the CodePlex website today. Pull Request Conversations Previously, the only way for project members and users who submitted pull requests to converse was via e-mail. This complicated the review process and made conversations isolated and difficult to track. For this release, we’ve added functionality that enables you to have those same conversations within the pull request page. When you view a pull request, you’ll now see “Comments” and “Changes” tabs, with current comments displayed. Inline Diff Enhancements We tweaked the inline diff experience to make it easier to traverse diff blocks. When you open up the inline diff experience, you’ll now see up and down arrows. To move between the diff blocks, you can use those arrows or utilize the available keyboard shortcuts. Lastly, we have also brought the inline diff experience to the source control changes page for project and fork changesets. You can see both enhancements live by viewing the associated pull request or changeset changes on WikiPlex. The CodePlex team values your feedback. We are frequently monitoring Twitter, our Discussions, and Issue Tracker. If you have not visited the Issue Tracker recently, please take a few minutes to suggest or vote on a feature you would like to see implemented.

    Read the article

  • jquery ondrag map load only what has not been viewed

    - by David
    When a person mouses down, moves the mouse, and mouses up, the system takes the difference between the mouse-down and mouse-up coordinates and loads in the new map items. However, the problem is that it loads them every time, so I want a way to track what has already been loaded without too much work spent checking or storing checks. The approaches I've considered:

    - Most promising (came up with this while typing): section the screen into 250x250 sectors and check whether each sector has been loaded.
    - Keep track of each corner of the screen and see if there is an area between them that has not loaded.
    - Keep a record of the corners of the screen, and when the mouse-up coords are beyond them, load the difference. Problem: if they are at coord 10,000, then it will load everything from -10k to +10k, and that is a lot of items to load.
    - Check every item on the page to see if it is loaded. If I do this, I might as well reload the whole page.

    If anyone has suggestions, feel free to pass them on.

    Read the article

  • How to set permalink of your blog post according to date and title of post?

    - by Amit
    I have the website http://www.finalyearondesk.com. My blog post links are currently set like this: http://www.finalyearondesk.com/index.php?id=28. I want them set like this instead: finalyearondesk.com/2011/09/22/how-to-recover-ubuntu-after-it-is-crashed/. I am using the following function to fetch these posts:

        function get_content($id = '') {
            if ($id != ""):
                $id = mysql_real_escape_string($id);
                $sql = "SELECT * from blog WHERE id = '$id'";
                $return = '<p><a href="http://www.finalyearondesk.com/">Go back to Home page</a></p>';
                echo $return;
            else:
                $sql = "select * from blog ORDER BY id DESC";
            endif;
            $res = mysql_query($sql) or die(mysql_error());
            if (mysql_num_rows($res) != 0):
                while ($row = mysql_fetch_assoc($res)) {
                    echo '<h1><a href="index.php?id=' . $row['id'] . '">' . $row['title'] . '</a></h1>';
                    echo '<p>' . "By: " . '<font color="orange">' . $row['author'] . '</font>' . ", Posted on: " . $row['date'] . '</p>';
                    echo '<p>' . $row['body'] . '</p><br />';
                }
            else:
                echo '<p>We are really very sorry, this page does not exist!</p>';
            endif;
        }

    Any suggestions on how to do this? And can it be done with .htaccess?
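
    The usual recipe has two halves: generate a date-and-slug permalink for each post, and rewrite those pretty URLs back to index.php (with mod_rewrite in .htaccess) so the post can be looked up again — which means storing the slug in the blog table. A sketch of the PHP half, where the helper name and sample row are illustrative assumptions:

    ```php
    <?php
    // Illustrative helper: turns a post title into a URL-safe slug.
    function make_slug($title) {
        $slug = strtolower(trim($title));
        $slug = preg_replace('/[^a-z0-9]+/', '-', $slug); // non-alphanumerics -> dashes
        return trim($slug, '-');
    }

    // Example row, standing in for one fetched from the blog table.
    $row = array('date' => '2011-09-22', 'title' => 'How to recover Ubuntu after it is crashed');

    $permalink = sprintf('/%s/%s/',
        date('Y/m/d', strtotime($row['date'])),
        make_slug($row['title'])
    );
    echo $permalink; // /2011/09/22/how-to-recover-ubuntu-after-it-is-crashed/
    ```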

    Read the article

  • Nginx + SSI doesn't work [migrated]

    - by boopidoopi
    I have a problem: Nginx doesn't process SSI. Nginx listens on port 80 (frontend), apache2 listens on port 81 (backend). This is my nginx configuration:

        server {
            listen 80;
            server_name test.dev www.test.dev;
            error_log /var/log/nginx/error.log debug;
            log_subrequest on;
            location / {
                ssi on;
                proxy_pass http://localhost:81;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                client_max_body_size 15m;
                client_body_buffer_size 128k;
            }
        }

    The SSI include in test.dev's index.php:

        <!--# include virtual="http:test.dev/test.html" -->

    When I open test.dev/index.php I see an empty page, and in the page source the directive appears unprocessed:

        <!--# include virtual="http:test.dev/test.html" -->

    So how do I enable SSI in nginx? Can you help me?

    Read the article

  • JavaScript and the User Experience

    5 sites I commonly visit at home:

    - Google.com
    - Gmail.com
    - Linkedin.com
    - Capella.edu
    - Codeplex.com

    All of the top 5 sites I visit at home use JavaScript, applied in various ways for various reasons. Gmail and Google make use of Ajax to retrieve information without the user having to call another page. In addition, all 5 of the websites use JavaScript to enhance the user's experience. Examples of this can be found in the content rotation on Capella's main site and the displaying and hiding of specific content sections from within our course room. Codeplex uses Ajax and JavaScript to show dynamic content on its homepage and to allow users to page through the data. I think their use of JavaScript is well placed and enhances the viewing experience, because it reduces the amount of interaction a user has to perform to obtain the information they are looking for. I have used JavaScript in various ways myself. One of the most memorable was enabling an HTML table to have its rows paged and sorted based on the values in each table row.

    Read the article

  • Massive 404 attack with non existent URLs. How to prevent this?

    - by tattvamasi
    The problem is a whole load of 404 errors, as reported by Google Webmaster Tools, for pages and queries that have never been there. One of them is viewtopic.php, and I've also noticed a scary number of attempts to check whether the site is a WordPress site (wp_admin) and for the cPanel login. I block TRACE already, and the server is equipped with some defense against scanning/hacking. However, this doesn't seem to stop it. The referrer is, according to Google Webmaster Tools, totally.me. I have looked for a solution to stop this, because it certainly isn't good for the poor real actual users, let alone the SEO concerns. I am using the Perishable Press mini blacklist (found here), a standard referrer blocker (for porn, herbal, and casino sites), and even some software to protect the site (XSS blocking, SQL injection, etc.). The server is using other measures as well, so one would assume that the site is safe (hopefully), but it isn't ending. Does anybody else have the same problem, or am I the only one seeing this? Is it what I think, i.e., some sort of attack? Is there a way to fix it, or better, prevent this useless resource waste?

    EDIT: I've never used the question itself to thank people for the answers, and I hope this can be done. Thank you all for your insightful replies, which helped me find my way out of this. I have followed everyone's suggestions and implemented the following:

    - a honeypot
    - a script that listens for suspect URLs on the 404 page and sends me an email with the user agent/IP, while returning a standard 404 header
    - a script that rewards legitimate users, on the same custom 404 page, in case they end up clicking on one of those URLs.

    In less than 24 hours I have been able to isolate some suspect IPs, all listed in Spamhaus. All the IPs logged so far belong to spam VPS hosting companies. Thank you all again; I would have accepted all answers if I could.
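
    For anyone wanting to replicate the second item, a minimal sketch of such a 404 logger for a custom error page — the log path, alert address, and probe patterns are illustrative assumptions:

    ```php
    <?php
    // Send the real 404 status first, then record who asked for what.
    http_response_code(404);

    $entry = sprintf("[%s] %s %s UA: %s\n",
        date('c'),
        isset($_SERVER['REMOTE_ADDR']) ? $_SERVER['REMOTE_ADDR'] : '-',
        isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : '-',
        isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '-'
    );

    // Keep the log outside the web root in a real setup.
    file_put_contents('/var/log/site/404.log', $entry, FILE_APPEND | LOCK_EX);

    // Mail an alert only for the classic probe URLs, not every stray 404.
    if (preg_match('#wp-login|wp-admin|cpanel|viewtopic#i', $entry)) {
        mail('admin@example.com', 'Suspicious 404', $entry);
    }
    ```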

    Read the article

  • Have You Been Missing the 'About This Record' Functionality on the Customer Form...?

    - by MargaretW
    Do you have fond memories of the 'Help -> About This Record' functionality that used to be available in the old Customer form, when it was a form and not a Java HTML screen? Back in Release 11i, we had the ability to identify when the customer record had last been updated and by whom. When some forms were replaced by Java HTML screens, you could identify some of this information via the 'About this Page' hyperlink at the bottom left-hand corner of the HTML page. You could enable this via the FND: Diagnostics profile option, but many customers found that this had an adverse effect on performance and additionally was not user-friendly. Our customers tell us that this feature was widely used to identify owner/update information in many business processes, including auditing, customer entry/update, research, and testing. There have been various efforts to restore this feature by customising Java pages, but these were not fully successful in some cases. Oracle Support is happy to announce that this functionality has now been included in the Customer screens in Release 12.2 onwards. You will be able to query the record history at customer level, at site level, at site address level, and for all tabs relating to the customer. Simply click on the 'Record History' icon, available in the Record History column on a summary screen, or via the same icon on the individual detail screen, to display the following information:

    - Last Updated Date
    - Last Updated By
    - Creation Date
    - Created By
    - Last Update Login

    Read the article

  • JDeveloper News &ndash; Did You Know

    - by shay.shmeltzer
    There have been a few issues lately with access to blogs.oracle.com to write new messages – and as a result I’m a little behind on reporting the latest and greatest in JDeveloper. (I’m also unable to approve comments :-( ) But just in case you missed it here are a few note worthy things you should know: ADF Mobile Client went production – this is a solution that let you use ADF to develop on-device applications for mobile devices. You develop once and run on various devices (right now Blackberry and Windows Mobile with other platforms coming soon). For more information check out the ADF Mobile page on OTN. ADF Developer Certification goes production – you can now take the official test and get an official certification that you can include in your resume.Should be a must for any consultant looking to get ADF related gigs. Learn more about the ADF Certification Exam. ADF Vision Stencils Released – If you are looking for a quick way to draw prototypes of ADF screens you probably can’t get any faster and more accurate results than this. This is a set of Vision components that you can drag over and design a page visually. You can also set properties for components and do other advance things. You do need a license for Visio to use this, but the ADF stencils are free. We’ve been using these internally at Oracle for some time now and we thought the ADF community would enjoy these too. Download the ADF Visio Stencils here (and watch a youtube demo of how they work).

    Read the article

  • Should I make up my own HTTP status codes? (a la Twitter 420: Enhance Your Calm)

    - by Max Bucknell
    I'm currently implementing an HTTP API, my first ever. I've been spending a lot of time looking at the Wikipedia page for HTTP status codes, because I'm determined to implement the right codes for the right situations. Listed on that page is code number 420, a custom code that Twitter used to use for rate limiting. There is already a code for rate limiting, though: 429. This led me to wonder why they would set a custom one when there is already a code for the use case. Is that just being cute? And if so, which circumstances would make it acceptable to return a different status code, and what problems, if any, might clients have with it? I read somewhere that Mozilla doesn't implement the joke 418: I'm a teapot response, which makes me think that clients choose which status codes they implement. If that's true, then I can imagine Twitter's funny little "enhance your calm" code being problematic. Unless I'm mistaken, and we can appropriate any code number to mean whatever we like, and only convention dictates that 404 means not found and 429 means take it easy.
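
    As an aside for implementers weighing the same choice: emitting the standard code is trivial in most stacks, which is one argument against inventing a custom one. A PHP sketch, with the Retry-After value as an illustrative assumption:

    ```php
    <?php
    // Answer a rate-limited client with the standard 429 rather than a
    // custom code, so generic clients and proxies know what it means.
    http_response_code(429);
    header('Retry-After: 60'); // seconds; illustrative value
    echo 'Rate limit exceeded. Try again later.';
    ```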

    Read the article

  • IIS 7 SSO stops working during high CPU load? [migrated]

    - by DanB
    On our IIS7 site (Windows 2008 Server), we have set up single sign-on (SSO). It seems to work fine most of the time, but when the CPU load becomes high, SSO authentication completely stops working. I did some research and tried this suggestion to increase the max number of worker processes in the default app pool, but the increase did not help. Some details: The site is a WordPress blog. The server has plenty of RAM (2 GB) and free disk space. SSO is achieved by putting a copy of the WordPress login page (wp-login.php) into a subfolder below the root that has anonymous authentication disabled, and then redirecting the browser to it. This was the recommendation of Microsoft given to our consultants. To increase CPU load for testing, I have three scripts hit the home page simultaneously, over and over. This drives CPU to 100%. When these scripts are running, SSO authentication simply doesn't happen. As soon as I stop the scripts, SSO works again. (I should mention that the SSO problem also happens when many users visit the site at once....) The WordPress database process (mysqld) is not stressed at all by the scripts. I would be happy to provide further diagnostics. Any help appreciated!

    Read the article

  • Spring Cleaning

    - by Tim Dexter
    I recently got a shiny new laptop; moving my shiz from old to new was not the nightmare it used to be. I have gotten into the habit of using a second hard drive in the media bay where the CDROM normally sits. That drive contains my life's work with BIP. I can pull it out and plug it into another machine very easily. I have been sorting through some old directories and files, archiving some, sharing others with colleagues. For instance, it's a little dated, but if you were looking for a list of Publisher reports available in EBS R12.1, here it is. I'm trying to track down a more recent R12 instance and will re-post the document. I also found another gem; it's a little out there in terms of usefulness, but I'm sharing it nonetheless. You can embed, or locally or remotely reference, SVG graphics (in XML format) and bring the images into BIP outputs. Template and sample data here. A nice set of templates showing page number control and page suppression - they will need some explanation, so I'll save them for another post. The list goes on, but I'll save the rest for later. Back to the clean up!

    Read the article

  • Advanced Experiments with JavaScript, CSS, HTML, JavaFX, and Java

    - by Geertjan
    Once you're embedding JavaScript, CSS, and HTML into your Java desktop application via the JavaFX browser, a whole range of new possibilities opens up to you. For example, here's an impressive page online; notice that you can drag items and drop them in new places: http://nettuts.s3.amazonaws.com/127_iNETTUTS/demo/index.html The source code of the above is provided too, so you can drop the various files directly into your NetBeans module and use the JavaFX WebEngine to load the HTML page into the JavaFX browser. Once the JavaFX browser is in a NetBeans TopComponent, you'll have the start of an off-line news composer, something like this:

        WebView view = new WebView();
        view.setMinSize(widthDouble, heightDouble);
        view.setPrefSize(widthDouble, heightDouble);
        webengine = view.getEngine();
        URL url = getClass().getResource("index.html");
        webengine.load(url.toExternalForm());
        webengine.getLoadWorker().stateProperty().addListener(
            new ChangeListener<State>() {
                @Override
                public void changed(ObservableValue<? extends State> ov, State oldState, State newState) {
                    if (newState == State.SUCCEEDED) {
                        Document document = (Document) webengine.executeScript("document");
                        NodeList list = document.getElementById("columns").getChildNodes();
                        for (int i = 0; i < list.getLength(); i++) {
                            EventTarget et = (EventTarget) list.item(i);
                            et.addEventListener("click", new EventListener() {
                                @Override
                                public void handleEvent(Event evt) {
                                    instanceContent.add(new Date());
                                }
                            }, true);
                        }
                    }
                }
            });

    The code above shows how, whenever a news item is clicked, the current date can be published into the Lookup. As you can see, I have a viewer component listening to the Lookup for dates.

    Read the article

  • Book Giveaway: We Have 10 Free Copies of the 4-Hour Chef (The Simple Path to Cooking Like a Pro, Learning Anything, and Living the Good Life)

    - by The Geek
    The 4-Hour Chef isn't just a cookbook. It's a choose-your-own-adventure guide to the world of rapid learning from the best-selling author of The 4-Hour Workweek, and we've got 10 free copies for How-To Geek readers. Want more information? Here's the description of the book, from the Amazon page. The 4-Hour Chef is a five-stop journey through the art and science of learning:

    1. META-LEARNING. Before you learn to cook, you must learn to learn. META charts the path to doubling your learning potential.
    2. THE DOMESTIC. DOM is where you learn the building blocks of cooking. These are the ABCs (techniques) that can take you from Dr. Seuss to Shakespeare.
    3. THE WILD. Becoming a master student requires self-sufficiency in all things. WILD teaches you to hunt, forage, and survive.
    4. THE SCIENTIST. SCI is the mad scientist and modernist painter wrapped into one. This is where you rediscover whimsy and wonder.
    5. THE PROFESSIONAL. Swaraj, a term usually associated with Mahatma Gandhi, can be translated as "self-rule." In PRO, we'll look at how the best in the world become the best in the world, and how you can chart your own path far beyond this book.

    Still not sold? There's more information and pictures over on the Amazon page for the book: The 4-Hour Chef: The Simple Path to Cooking Like a Pro, Learning Anything, and Living the Good Life.

    Read the article

  • How should I structure a site with content dependent on visitor type (not user)?

    - by Pedr
    I have a website that displays different content depending on two selections made by a visitor: whether they are a teacher or a student, and their learning level (from 4 options). Everything is public, and visitors don't need to authenticate to access the content. Depending on their selection, different content is displayed across the whole site, other than a contact and an about page. The tone of the language changes depending on whether the visitor is a student or a teacher, and the materials available on each page also change depending on the learning level; in all cases, however, the structure of the site is identical. Currently I'm using a cookie to store the visitor's selections and render different content appropriately, so I have a single set of URLs which display different content depending on the cookie, with one of the permutations as the default. I appreciate this is far from ideal, but what is the better option? Would I be better off using a distinguishing segment for each selection, for example:

        http://example.com/teacher/lv3/resources/activities
        http://example.com/teacher/lv4/resources/activities
        http://example.com/student/lv4/resources/activities
        etc.

    What is the most sensible way to handle this situation?
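
    If the segment scheme wins out, resolving it server-side is straightforward. A sketch in PHP, assuming requests are rewritten to a single entry point (the fallback defaults and the lv1-lv4 naming are illustrative assumptions):

    ```php
    <?php
    // Resolve /teacher/lv3/resources/activities into audience + level,
    // falling back to defaults when segments are absent.
    $path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
    $segments = array_values(array_filter(explode('/', $path)));

    $audience = (isset($segments[0]) && in_array($segments[0], array('teacher', 'student'), true))
        ? $segments[0] : 'student'; // illustrative default
    $level = (isset($segments[1]) && preg_match('/^lv[1-4]$/', $segments[1]))
        ? $segments[1] : 'lv1';     // illustrative default

    // Remaining segments identify the page itself.
    $page = implode('/', array_slice($segments, 2));
    if ($page === '') { $page = 'home'; }
    ```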

    Read the article

  • How can I fix the #c3284d# malvertising hack on my website?

    - by crm
    For the past couple of weeks, at semi-regular intervals, this website has had the #c3284d# malware code inserted into some of its .php files, and the .htaccess file has had its equivalent code inserted. I have, on many occasions, removed the malicious code, replaced files, changed the FTP password in my FTP client (which is CoreFTP), and changed the connection method to FTPS for more secure storage of the password (instead of plain text). I have also scanned my computer several times using AVG and Windows Defender, which found no malware on my computer that might have been capturing my FTP passwords. I used Sucuri SiteCheck to check my website, and it says the site is clean of malware, which is bizarre, because I attempted to click one of the links on the site a minute ago and it sent me to another one of these random stats.php sites — even though it appears I have gotten rid of the #c3284d# code again (it will no doubt be re-inserted somehow in an hour or so). Has anyone found an actual viable solution for this malware hack? I have done just about all of the things suggested here and here, and the problem still persists. Currently, when I click on a link within the site's navigation menu in Google Chrome, I get Google's malware warning page:

        Warning: Something's Not Right Here!
        oxsanasiberians.com contains malware. Your computer might catch a virus if you visit this site.
        Google has found that malicious software may be installed onto your computer if you proceed. If
        you've visited this site in the past or you trust this site, it's possible that it has just
        recently been compromised by a hacker. You should not proceed. Why not try again tomorrow or go
        somewhere else? We have already notified oxsanasiberians.com that we found malware on the site.
        For more about the problems found on oxsanasiberians.com, visit the Google Safe Browsing
        diagnostic page.

    I'm wondering whether it's possible that the Google Chrome browser I am using has itself been hacked? Does anyone else get redirected when clicking links on the website?
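
    A practical first step while fighting reinfection like this is a quick scan of the web root for the injected marker, so newly modified files surface immediately. A minimal sketch — the marker string comes from the question; the path is an assumption to adjust:

    ```php
    <?php
    // Scan all .php files under the web root for the injected marker.
    $root = '/path/to/webroot'; // adjust to the real document root
    $marker = '#c3284d#';

    $it = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($root));
    foreach ($it as $file) {
        if ($file->isFile() && strtolower($file->getExtension()) === 'php') {
            if (strpos(file_get_contents($file->getPathname()), $marker) !== false) {
                // Report the infected file and when it was last touched.
                echo $file->getPathname(), ' modified ', date('c', $file->getMTime()), PHP_EOL;
            }
        }
    }
    ```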

    Read the article

  • Issue updating domain name servers from BlueHost to AWS

    - by cowls
    I am trying to migrate my site hosting from BlueHost to an AWS cloud-based service. I have the site up and running on AWS with an Elastic IP configured, and it loads fine when I specify the IP address in the browser. I have gone into Route 53 in the AWS console and created a hosted zone for the domain. I then created a new record set of type "A" using the IP address as the value. I have a domain name registered with BlueHost. I've logged into the BlueHost account and updated the domain's name servers to point to those specified in Route 53 in the AWS console. When I hit the IP address directly, the site loads; however, it doesn't load when using the domain name (I get a Google Chrome "Oops" error page saying the page is not found). I've tried using this site to debug: http://dns.squish.net/ - but it seems to be giving me the correct results:

        fizaclegems.com 300 IN A 107.20.209.78

    where 107.20.209.78 matches the Elastic IP configured in the AWS console. This is the result it gives for all 4 name servers. Am I missing a step here? Does anyone know what else I should be doing or looking for?

    Read the article

  • Slightly off topic - How to Fix Sky Go Error [t6013-c1501] (and [t6000-c1501])

    - by bconlon
    Sky doesn't seem to understand what their own errors mean, so I cobbled together an understanding from some other posts and managed to get it working. When you see the error [t6013-c1501] instead of your TV programme in Sky Go, it seems to mean: 'You registered a device, but then changed the hardware, so now I'm confused!' In other words, the digital rights management (DRM) used between Sky Go and Silverlight stored an old fingerprint of your PC, but rather than recognising this and allowing you to remove the device, it just disappears from the 'Manage Devices' page.

    DISCLAIMER: Perform the following steps at your own risk. It worked for me, but I didn't care if it broke stuff. If you care... don't do it!

    So, to fix this I did the following:

    1. Log in to Sky Go and click 'Watch live TV' from the home page. It will attempt to show Sky News and fail with the error [t6013-c1501].
    2. Right-click on the error and you should see the menu option 'Silverlight'. Select this and a dialog should appear. Click the 'Application Storage' tab and delete any entry that relates to Sky Go. Click OK to close the dialog.
    3. Open Explorer and navigate to the folder C:\ProgramData\Microsoft\PlayReady
    4. Rename the file mspr.hds to mspr.hds.OLD
    5. Go back to the browser and press F5. You may need to log out and back in (not sure).

    Note: Don't rename/delete the folder C:\ProgramData\Microsoft\PlayReady or you will get the error [t6000-c1501]. The folder must exist in order for the new file to be created by Silverlight.

    Techie talk: whoever wrote the code that creates a new mspr.hds file didn't check that the folder exists, causing what I assume is a generic t6000 error, probably something like:

        catch (Exception ex) { WriteToLog("Oops, something broke!"); }

    Read the article

  • Help me with this logic (newbie) [migrated]

    - by Surendra
    I need to generate a half-pyramid number series from an entered starting number and a number of lines, in an HTML page, using JavaScript, and show the result in the page. I have done the JavaScript and markup; what I didn't get right at first was the logic. Here is my function, triggered on a button click, with the inner loop corrected so that line i prints i + 1 numbers counting up from the start value:

        function doFunction() {
            var start = parseInt(document.getElementById("start").value, 10);
            var lines = parseInt(document.getElementById("lines").value, 10);
            for (var i = 0; i < lines; i++) {
                // Line i prints i + 1 numbers, counting up from the start value.
                for (var j = start; j <= start + i; j++) {
                    document.write(j + "&nbsp;&nbsp;");
                }
                document.write("<br />");
            }
        }

    I need it to print the following series:

        1
        1 2
        1 2 3
        1 2 3 4
        1 2 3 4 5

    There is a condition: I will specify $start and $lines. If $start = 5 and $lines = 3, then the output should be:

        5
        5 6
        5 6 7

    I had used two for loops to generate the regular series starting at 1, but my original inner loop compared j against the line index (for (j = start; j <= i; j++)), which never runs when the given start number is higher than the number of lines. Comparing against start + i fixes that.

    Read the article

  • Google Indexing Issue after htaccess changes

    - by Klement
    I have a site called www.FuneralCoverFinder.co.za. I have about 30 pages on the site (excluding 15 blog posts, which are new) and usually have 29 of them indexed. I recently upgraded the entire site and made some redirection changes in my .htaccess file: I made my URLs more SEO-friendly (removing index.php/) and redirected dead pages to working pages. I have tons of unique content, all checked with Grammarly and Plagium to ensure there is no duplicate content. I have since resubmitted my sitemap to Google and now have only one page indexed. I usually see results almost immediately after submitting; now it's stuck at 1 page indexed. I assume I may have made errors in the .htaccess file, as this was my first attempt. The site runs perfectly and all the URLs redirect the way they should, but I'm scared I have some loop or other, although the website runs fine. I still see many of my old indexed pages in the SERPs; I'm just worried that the issue with the new sitemap could harm my rankings. My website is pretty well SEO-optimized on-site. I have about 1500 indexed backlinks and have been building them steadily for about half a year. I would really appreciate some clarity on this matter.

    Read the article
