Search Results

Search found 11086 results on 444 pages for 'asynchronous pages'.


  • Redirection & SEO-related stuff while moving to a new blog

    - by Karshim Kanwar
    I have a WordPress blog and recently I set up a new one; call the old blog "blog old" and the new one "blog new". I moved the content, photos, pictures and all 250 posts from blog old to blog new. Both blog names changed, as they point to different domain names! I read helpful things on this site itself, here. I will no longer use blog old; moreover, I am concerned about the SEO of blog new. Blog new is fairly new (just 24 hours old, and no pages have been indexed in Google). I have done the following: Deleted all the posts shared on the Facebook fan page, Twitter profile and Google+ page, and finally deleted the fan page, Twitter and Google+ page themselves. Edited the links back to the old blog within blog new. The questions I have are: How do I prevent duplicate content issues? Do I go straight away and delete all the posts on blog old? Should I start sharing the blog posts from blog new? Should I submit the new site to Webmaster Tools or wait a few weeks? Every comment here is appreciated! What issues can I face relating to SEO?
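
    On the duplicate-content point, the usual answer is not to delete the old posts but to 301-redirect each old URL to its new home, so search engines transfer whatever value the old pages had to blog new. A minimal sketch in PHP, assuming both blogs keep the same permalink structure and with blog-new.example.com standing in for the real new domain:

        <?php
        // Drop-in for the old blog: permanently redirect every request
        // to the same path on the new domain. The host name below is a
        // placeholder, not the asker's real domain.
        $newHost = 'http://blog-new.example.com';
        $path    = $_SERVER['REQUEST_URI']; // e.g. /2012/05/some-post/

        header('HTTP/1.1 301 Moved Permanently');
        header('Location: ' . $newHost . $path);
        exit;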

    Read the article

  • By what features and qualities are "free" and "premium" themes differentiated?

    - by Sinthia V
    I have a lot of time invested in creating WordPress templates. I want to release combinations of these templates, along with different styles and fancy front pages, as "Premium WordPress Themes". What I need to know is: what does "premium" mean? What do people expect of a GPL theme vs. a premium theme? Are there features that are considered required for a theme to be premium? Are there features that are in demand but considered "exceptional", i.e. not part of every premium theme? How can I tell the difference? I have heard tongue-in-cheek answers saying that any theme that makes money is premium, but I mean to ask what gives an outstanding theme its quality. Why is it worth more? I am technically able to do many things, but as a lone developer with a family to feed, I can't afford to spend time on features that no one cares about. I have to try to isolate the things that people want. This is serious food and rent to me. How can I get this kind of info so I can make my project successful?

    Read the article

  • Lots of Internet browsing issues, all browsers

    - by dario_ramos
    Before the upgrade, everything was working fine. Now, however, I can connect to the Internet but a lot of stuff fails, and the weirdest thing is that it happens in Firefox, Chromium and Opera. Some of the things that fail: I can't log in to Stack Overflow; after entering user/pass it loads for a long time in Firefox and throws Error 408 (browser request timed out) in Chromium and Opera. I can't log in to Hotmail, with similar symptoms. I can log in to Facebook, but when I try to write a comment, or just post something on my wall, it stays loading for a long time and then fails. The first two issues seem to be related to secure pages, and the Facebook one is another issue altogether, I believe. However, they all happen in all browsers, which is really weird. Talking about weird: I connect using a Huawei SmartAX MT 810 USB modem, which cost me blood and tears to get working under Ubuntu. I ordered an ethernet modem/router from my ISP, and I'm still waiting, but this issue intrigues me anyway. Has anyone experienced this kind of problem? I Googled around, but couldn't find a similar case.

    Read the article

  • I am confused between PHP and ASP.NET as a career choice in the Indian software development context

    - by Confused_Guy
    I need your help (especially from the software professionals of India). I completed my MCA in 2009. After that, instead of joining a software company, I took a teaching job near home. In the meantime I prepared myself for public sector (bank) jobs. I continued that job for one more year and left it in 2010. Now, in 2012, I feel that I should have taken the software job, so that I could have earned my bread and butter and prepared for the public sector job at the same time, because according to my qualification it would give me the best salary. Now I want to go back to the software industry, but everyone is asking for experience, and I don't have any. So which language should I learn? And what should I do, given that I have a two-year gap? Some of my friends suggested I go with PHP, as it's easier and quicker to get a job with it in India. But here the PHP guys get less salary compared to ASP.NET. I am planning to begin with PHP, but is it possible to switch to ASP.NET after two years of experience? Java: I know up to servlets & JSP, which is nothing in the current market. ASP.NET: I know the basics of ASP.NET up to database connection, i.e. GridView. PHP: only the basics. So what should I do now? Which is most in demand? Is PHP good? I feel it's more like JSP pages. Please guide me; all your suggestions are valuable to me.

    Read the article

  • Focusing and Selecting the Text in ASP.NET TextBox Controls

    When a browser displays the HTML sent from a web server it parses the received markup into a Document Object Model, or DOM, which models the markup as a hierarchical structure. Each element in the markup - the <form> element, <div> elements, <p> elements, <input> elements, and so on - is represented as a node in the DOM and can be programmatically accessed from client-side script. What's more, the nodes that make up the DOM have functions that can be called to perform certain behaviors; what functions are available depends on what type of element the node represents. One function common to almost all node types is focus, which gives keyboard focus to the corresponding element. The focus function is commonly used in data entry forms, search pages, and login screens to put the user's keyboard cursor in a particular textbox when the web page loads so that the user can start typing in his search query or username without having to first click the textbox with his mouse. Another useful function is select, which is available for <input> and <textarea> elements and selects the contents of the textbox. This article shows how to call an HTML element's focus and select functions. We'll look at calling these functions directly from client-side script as well as how to call these functions from server-side code. Read on to learn more! Read More >
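
    As a quick illustration of the two functions the article discusses (the element id here is made up for the example):

        <input type="text" id="searchBox" value="previous query" />
        <script type="text/javascript">
            // Hypothetical id; on page load, give the textbox keyboard
            // focus and highlight its existing text so typing replaces it.
            window.onload = function () {
                var box = document.getElementById('searchBox');
                box.focus();   // put the keyboard cursor in the textbox
                box.select();  // select (highlight) its current contents
            };
        </script>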

    Read the article

  • Lots of goodies

    - by wcoekaer
    We just issued a press release with a number of very good updates for everyone. There are a few things of importance:

    1) As of right now, Oracle Linux 6 with the Unbreakable Kernel is certified with a number of Oracle products such as Oracle Database 11gR2 and Oracle Fusion Middleware. The certification pages in the Oracle Support portal will be updated with the latest certification status for the various products. As always, we have gone through a long period of very comprehensive testing and validation to ensure that the whole stack works really well together, with very large database workloads, middleware application workloads, etc.

    2) Standard certification efforts for Oracle Linux 6 with the Red Hat Compatible Kernel are in progress, and we expect them to be completed in the next few months. Because of the compatibility between OL6 and RHEL6, we can then also state certification for RHEL6.

    3) Oracle Linux binaries (and of course source code) have been free for download -and- use (including production, not just trial periods) since day one. You can freely redistribute the binaries, unlike with many other Linux vendors, where you need to pay a support subscription to even get access to the binaries. We offered both the base distribution release DVDs (OL4, OL5, OL6) and the update releases, such as 5.1, 5.2, etc. this way. Today, in this announcement, we also started to make available the bugfix and security updates released in between these update releases. So the errata streams (both binary and source code) for OL4, 5 and 6 are now free for download and use from http://public-yum.oracle.com. This includes uek and uek2. The nice thing is: if you want a complete, up-to-date system without support, use this; if you then need support, get a support subscription. Simple, convenient, effective. We have great SLAs in producing our update streams, consistency in release timing, and testing of all the components. Have at it!
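
    For reference, picking up that free errata stream is just a matter of installing the repo definition Oracle publishes; a sketch for an OL6 box, assuming the repo file name public-yum documented at the time:

        # As root on Oracle Linux 6: fetch the repo definition from the
        # public yum server, then update from the free errata stream.
        cd /etc/yum.repos.d
        wget http://public-yum.oracle.com/public-yum-ol6.repo
        yum update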

    Read the article

  • Location-Based redirection and duplication in sub-directories affecting SEO

    - by Joshua
    I currently own the website www.xyz.com. The website has a sub-directory for each of the 3 target countries: .../en-US/ (United States), .../es-MX/ (Mexico), and .../es-DO/ (Dominican Republic). I have two main questions about this setup: Currently, the main domain/root (xyz.com) contains a blank index.php file, but I would like a user to be redirected to one of the sub-directories based on their regional location. What is the best way to accomplish this? I have looked at using browser-language-based redirection, but how would I know whether to direct a user to the MX or DO site if the browser language is set to Spanish? Is there a way to detect a user's geographic location? Also, the 3 websites are practically identical, except that they have 3 unique color schemes and the US site is in English while the MX and DO sites are in Spanish. My problem is that I believe Googlebot is penalizing/banning my site because the Spanish text on the MX and DO pages is nearly identical and is thus marked as duplicate/spam. Is there a way to avoid this?
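
    On the first question, browser language alone cannot separate MX from DO; IP-based geolocation can. A minimal sketch for index.php, assuming the PECL geoip extension is installed (a GeoIP web service or database library would slot in the same way):

        <?php
        // Look up the visitor's country from their IP and send them to
        // the matching sub-directory; fall back to the US site.
        $country = @geoip_country_code_by_name($_SERVER['REMOTE_ADDR']);

        switch ($country) {
            case 'MX': $target = '/es-MX/'; break;
            case 'DO': $target = '/es-DO/'; break;
            default:   $target = '/en-US/'; break;
        }

        // 302 rather than 301: the destination depends on who is asking.
        header('Location: http://www.xyz.com' . $target, true, 302);
        exit;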

    Read the article

  • Directing multiple ccTLDs to one gTLD with country-specific subdirectories?

    - by Pascal Van Opzeeland
    We have multiple ccTLDs (country-code domains) and are thinking about how best to combine them into one. We want to do this to focus our link-building efforts. We run a website through which we offer software-as-a-service, so we could potentially sell to any country in the world; however, Germany is our most important market. We currently have .com, .de, .nl and .pl domains. All these domains have a high number of unique content pages. What we are planning is to move everything to .com with language-based subdirectories, so .com/en/, .com/de/, etc. I have two questions concerning this: 1) How much of an advantage does a ccTLD have over a gTLD with country-specific subdirectories in search rankings? So, let's say, .de versus .com/de/? 2) How can we best redirect the visitors of our old ccTLDs to the gTLD's subdirectories? We would like to lose as few search engine rankings as possible. Thank you for your help.
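
    For question 2, the ranking-preserving approach is a per-URL 301 from each ccTLD to the matching path under the .com, rather than pointing everything at the subdirectory's front page. A sketch in PHP with illustrative host names:

        <?php
        // Map each old ccTLD host to its language directory on the .com
        // and preserve the requested path in the redirect.
        $map = array(
            'www.example.de' => '/de',
            'www.example.nl' => '/nl',
            'www.example.pl' => '/pl',
        );

        $host = $_SERVER['HTTP_HOST'];
        if (isset($map[$host])) {
            header('HTTP/1.1 301 Moved Permanently');
            header('Location: http://www.example.com' . $map[$host]
                 . $_SERVER['REQUEST_URI']);
            exit;
        }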

    Read the article

  • Tips, guidelines and points to remember for delivering professional code?

    - by ronnieaka
    I'm talking about giving clients professional-looking code - the whole nine yards, everything you hardcore, highly experienced programmers here probably do when coding freelance or for the company you work for. I'm fresh out of college and going into freelancing. I just want to be sure that my first few projects leave a good aftertaste of professionalism imprinted on the clients' minds. When I Googled what I'm asking here, I was given pages that showed various websites and tools that let you make flashy websites and templates, the $N package and such stuff. I can't recall the word experts use for it - standard, framework (I know that's not it). English isn't my first language, so I'm sorry I don't really know the exact phrase for it: that abstract way of writing code so that you don't come across as a sloppy programmer. That above-mentioned way of programming websites and desktop software (in Python/C/C++/Java). EDIT: I can work on accruing vast knowledge, know-how, logic building, etc. What I'm asking for are the programming standards/guidelines you follow so that the client, on seeing the code, feels it's a professional solution - like comment blocks, a particular indentation style, something like that. Is there any book on it, or a specific list of points for enterprise-type coding? Especially, in my case, for building websites (PHP for now...) and desktop software (C/C++/Java/Python).

    Read the article

  • Making WatiN Wait for JQuery document.Ready() Functions to Complete

    - by Steve Wilkes
    WatiN's DomContainer.WaitForComplete() method pauses test execution until the DOM has finished loading, but if your page has functions registered with JQuery's ready() function, you'll probably want to wait for those to finish executing before testing it. Here's a WatiN extension method which pauses test execution until that happens. JQuery (as far as I can see) doesn't provide an event or other way of being notified of when it's finished running your ready() functions, so you have to get around it another way. Luckily, because ready() executes the functions it's given in the order they're registered, you can simply register another one to add a 'marker' div to the page, and tell WatiN to wait for that div to exist. Here's the code; I added the extension method to Browser rather than DomContainer (Browser derives from DomContainer) because it's the sort of thing you only execute once for each of the pages your test loads, so Browser seemed like a good place to put it.

        public static void WaitForJQueryDocumentReadyFunctionsToComplete(this Browser browser)
        {
            // Don't try this if JQuery isn't defined on the page:
            if (bool.Parse(browser.Eval("typeof $ == 'function'")))
            {
                const string jqueryCompleteId = "jquery-document-ready-functions-complete";

                // Register a ready() function which adds a marker div to the body:
                browser.Eval(
                    @"$(document).ready(function() { " +
                    @"$('body').append('<div id=""" + jqueryCompleteId + @""" />'); " +
                    "});");

                // Wait for the marker div to exist or make the test fail:
                browser.Div(Find.ById(jqueryCompleteId))
                    .WaitUntilExistsOrFail(10, "JQuery document ready functions did not complete.");
            }
        }

    The code uses the Eval() method to send JavaScript to the browser to be executed; first to check that JQuery actually exists on the page, then to add the new ready() method. WaitUntilExistsOrFail() is another WatiN extension method I've written (I've ended up writing really quite a lot of them) which waits for the element on which it is invoked to exist, and uses Assert.Fail() to fail the test with the given message if it doesn't exist within the specified number of seconds. Here it is:

        public static void WaitUntilExistsOrFail(this Element element, int timeoutInSeconds, string failureMessage)
        {
            try
            {
                element.WaitUntilExists(timeoutInSeconds);
            }
            catch (WatinTimeoutException)
            {
                Assert.Fail(failureMessage);
            }
        }

    Read the article

  • 301 redirect from HTTP to HTTPS - how to be sure Google is fetching the correct information?

    - by user33692
    I'm hoping somebody might be able to provide a bit of advice on an issue I am having. I have one site where we implemented a 301 redirect on the homepage from HTTP to HTTPS. We have links on the homepage to other parts of the site that are not under SSL (in fact there is only one other page under SSL). When I go to our Webmaster Tools account I notice that we are not being provided with any webmaster information (e.g., search queries, backlinks, etc.) related to our homepage under SSL. I performed a Fetch as Google on the homepage and the information it returned is:

        HTTP/1.1 301 Moved Permanently
        Date: Fri, 08 Nov 2013 17:26:24 GMT
        Server: Apache/2.2.16 (Debian)
        Location: https://mysite.com/
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 242
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=iso-8859-1

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>301 Moved Permanently</title>
        </head><body>
        <h1>Moved Permanently</h1>
        <p>The document has moved <a href="https://mysite.com/">here</a>.</p>
        <hr>
        <address>Apache/2.2.16 (Debian) Server at mysite.com</address>
        </body></html>

    I am worried by the fact that the Google fetch is not getting the correct title tags and meta information from our homepage and that this is hurting our search results. Additionally, I am worried that we need to do something specific with the sitemap to ensure that Google is correctly indexing all our pages and is able to flow from the HTTPS to the HTTP without issues. Does anybody have any advice on how we can correctly set this up or be sure that Google is fetching the correct information?

    Read the article

  • When trying to install Wine on 12.10, the 'sudo' command will not let me type in a password

    - by Nocturnus
    As the title explains, I have been attempting to install Wine on my laptop, which is running 12.10. When I accessed the command terminal and entered "sudo add-apt-repository ppa:ubuntu-wine/ppa" I was of course met by a password prompt. When I attempted to enter my password, it flat out wouldn't let me type anything; the only key that got a response from the terminal was Enter, which was met by "incorrect password". To bypass this issue I backed out and used the gksudo command, and this new dialogue box seemed to give me access to sudo commands. I then entered "sudo apt-get update" and "sudo apt-get install wine1.5". Up until the installation everything went fine, but after entering the final command (still using gksudo) the terminal read "the following packages have unmet dependencies" and proceeded to list a bunch of "recommends". So my guess is that Wine hasn't been updated to run on 12.10... Is this true, and is there any other way to open .exe's? Also, what was with that funky password mishap? I'm totally new to Ubuntu, so I've just been using support pages and tutorials; sorry if I'm a bit naive in these matters...

    Read the article

  • Which programming language should I choose? I want to build this website... [closed]

    - by Goma
    Assume that I will start with just a photo-sharing website. Every user can add comments to any photo. After that, the site will carry news (general news); the admin and the moderators can add news, while the users can add comments on it. The website will also provide a photo uploader, so every user will have up to 20 MB to upload any photos they want. Other users can see these photos or not, depending on the option the main user chose (whether he wants to publish his photos or not). The site should have a small forum, which provides the ability for the admin to add categories and for users to add topics and replies to each topic in these categories. These are the things that I can think of now, but the website will add other features and services later on. Can you tell me which programming language can help me do all that? I need a programming language that provides the following: 1- fast page loads for the site; 2- easy to add more functions quickly and easy to edit code for any reason; 3- secure; 4- fast at displaying information from the database.

    Read the article

  • Massive 404 attack with non existent URLs. How to prevent this?

    - by tattvamasi
    The problem is a whole load of 404 errors, as reported by Google Webmaster Tools, with pages and queries that have never been there. One of them is viewtopic.php, and I've also noticed a scary number of attempts to check whether the site is a WordPress site (wp_admin) and for the cPanel login. I block TRACE already, and the server is equipped with some defense against scanning/hacking. However, this doesn't seem to stop it. The referrer is, according to Google Webmaster, totally.me. I have looked for a solution to stop this, because it certainly isn't good for the poor real actual users, let alone the SEO concerns. I am using the Perishable Press mini black list (found here), a standard referrer blocker (for porn, herbal, casino sites), and even some software to protect the site (XSS blocking, SQL injection, etc.). The server is using other measures as well, so one would assume that the site is safe (hopefully), but it isn't ending. Does anybody else have the same problem, or am I the only one seeing this? Is it what I think, i.e., some sort of attack? Is there a way to fix it, or better, prevent this useless resource waste? EDIT: I've never used the question to give thanks for the answers, and hope this can be done. Thank you all for your insightful replies, which helped me find my way out of this. I have followed everyone's suggestions and implemented the following: a honeypot; a script that listens for suspect URLs on the 404 page and sends me an email with the user agent/IP, while returning a standard 404 header; and a script that rewards legitimate users, on the same custom 404 page, in case they end up clicking one of those URLs. In less than 24 hours I have been able to isolate some suspect IPs, all listed in Spamhaus. All the IPs logged so far belong to spam VPS hosting companies. Thank you all again; I would have accepted all answers if I could.
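
    A sketch of the kind of 404 listener described in the edit, in PHP and with a placeholder pattern list and address: serve a genuine 404, and quietly report hits on known probe URLs:

        <?php
        // Custom 404 page: always return a real 404 status first.
        header('HTTP/1.1 404 Not Found');

        // URL fragments that only scanners ask for (illustrative list).
        $suspect = array('viewtopic.php', 'wp-admin', 'wp-login.php', 'cpanel');
        $uri     = $_SERVER['REQUEST_URI'];

        foreach ($suspect as $probe) {
            if (stripos($uri, $probe) !== false) {
                $ua     = isset($_SERVER['HTTP_USER_AGENT'])
                        ? $_SERVER['HTTP_USER_AGENT'] : 'no user agent';
                $report = date('c') . '  ' . $_SERVER['REMOTE_ADDR']
                        . '  ' . $ua . '  ' . $uri;
                mail('admin@example.com', 'Suspect 404 hit', $report);
                break;
            }
        }

        echo 'Page not found.';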

    Read the article

  • What guidelines should be followed when implementing third-party tracking pixels?

    - by Strozykowski
    Background: I work on a website that gets a fair amount of traffic, and as such, we have implemented different tracking pixels and techniques across the site for various specific reasons. Because there are many agencies who are sending traffic our way through email campaigns, print ads and SEM, we have agreements with a variety of different outside agencies for tracking these page hits. Consequently, we have tracking pixels which span the entire site, as well as some that are on specific pages only. We have worked to reduce the total number of pixels available on any one page, but occasionally the site is rendered close to unusable when one of these third-party tracking pixels fails to load. This is a huge difficulty on parts of the site where JavaScript is needed for functionality built into the page but cannot initialize until a 404 is returned on the external tracking pixel (sometimes up to 30 seconds later). I have spent some time attempting to research how other firms deal with this sort of instability in third-party components, but have come up a bit short. The plan currently is to implement our own stop-gap method to deal with these external outages, but rather than reinventing the wheel, we wanted to find out how this is dealt with on other sites.

    Question: Is there a good set of guidelines that should be followed when implementing third-party tracking pixels? I would love to see some white papers or other written documents about how other people have dealt with this issue.
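
    One shape such a stop-gap can take, sketched in PHP with made-up host names: probe the pixel host with a short timeout, cache the verdict briefly, and drop the tag from the page while the tracker is down rather than letting it stall your own scripts. (The other common guideline is to load third-party tags asynchronously, after your own JavaScript.)

        <?php
        // Returns true if the tracking host answered recently; the result
        // is cached in a file so at most one slow probe happens per $ttl.
        function pixel_host_is_up($url, $cacheFile = '/tmp/pixel_status', $ttl = 300) {
            if (file_exists($cacheFile) && time() - filemtime($cacheFile) < $ttl) {
                return trim(file_get_contents($cacheFile)) === 'up';
            }
            $ch = curl_init($url);
            curl_setopt_array($ch, array(
                CURLOPT_NOBODY         => true, // a HEAD request is enough
                CURLOPT_TIMEOUT        => 2,    // fail fast, not after 30s
                CURLOPT_RETURNTRANSFER => true,
            ));
            $up = curl_exec($ch) !== false;
            curl_close($ch);
            file_put_contents($cacheFile, $up ? 'up' : 'down');
            return $up;
        }

        if (pixel_host_is_up('http://tracker.example.com/pixel.gif')) {
            echo '<img src="http://tracker.example.com/pixel.gif" width="1" height="1" alt="" />';
        }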

    Read the article

  • Is "send us a page with code" a typical interview requirement?

    - by acm
    Recently I was asked to show "a page with code" for a job interview. Being mainly a back-end programmer - and that's the position I applied for - I first said to the person I was talking to exactly that: PHP is executed at the server and is therefore not visible by just giving a "page". However, following their wishes, I sent links to the pages I've worked on before. Obviously they couldn't see anything except the HTML, CSS, JS... They said it was not enough; they could not see the PHP. Understanding that they probably just wanted to know my skills and/or interest, I sent them my Stack Overflow profile. Among all my questions and answers, most of them with code, the PHP is certainly there. But it seems this is not what they wanted. Well, I don't have any code put together that I can simply publish for someone to see, and I would never do that with the code I have deployed, obviously. So my questions are: What does "send us a page with code" mean? What should I send? Is this a typical interview requirement?

    Read the article

  • Subdomain takes the position of main site in Google search result

    - by user3578586
    We have one domain and one sub-domain. Until last week both of them appeared on the first page of Google search results for a very important keyword. Unfortunately, Google has dropped our main domain from the search results. Our main site had been on the first page for 5 years! About one year ago we built this sub-domain; it simply redirected to one of the pages of the main domain. To solve the problem we uploaded an independent site for the sub-domain, because we guessed that Google thought it was the main page of our site, but the problem was not solved. What should we do? Our main site offers our main services and we want it to be on the first page. Shut down the sub-domain? Redirect it to the main site? Put a link to our main site on the sub-domain? (About one year ago we put a link to this sub-domain on our main site; Google indexed it and continuously brings it to the top.) Change something in robots.txt?

    Read the article

  • Release Notes for 6/14/2012

    Here are the notes for this week's release:

    Diffs in Pull Requests and Commits

    We altered the way we display diffs across commits and pull requests to maximize the amount of vertical real estate devoted to the diff. Before, the viewport for diffs was always snapped to the height of the browser, which meant that on lower resolutions, the amount of space for viewing diffs could become very tiny. Now, the majority of the browser's vertical space is devoted to viewing the diffs. Let us know what you think!

    Bug Fixes

    - Fixed an issue where returning to the list of files changed from a diff would sometimes not show the list of files.
    - Fixed the dialogs for approving and denying requests to join projects.
    - Fixed various issues around validation of project details when publishing a project.
    - Fixed an issue that caused the formatting of our tabs in pull requests to not display properly.
    - Fixed an issue where users browsing Unicode files in a Git project would see error pages.
    - Fixed various issues where the option to subscribe to notifications would not appear properly.

    Have ideas on how to improve CodePlex? Visit our ideas page! Vote for your favorite ideas or submit a new one. Got Twitter? Follow us and keep apprised of the latest releases and service status at @codeplex.

    Read the article

  • Why not XHTML5?

    - by eegg
    So, HTML5 is the Big Step Forward, I'm told. The last step forward we took, as far as I'm aware, was the introduction of XHTML. The advantages were obvious: simplicity, strictness, the ability to use standard XML parsers and generators to work with web pages, and so on. How strange and frustrating, then, that HTML5 rolls all that back: once again we're working with a non-standard syntax; once again we have to deal with historical baggage and parsing complexity; once again we can't use our standard XML libraries, parsers, generators, or transformers; and all the advantages introduced by XML (extensibility, namespaces, standardization, and so on), which the W3C spent a decade pushing for good reasons, are lost. Fine, we have XHTML5, but it seems it has not gained popularity the way the HTML5 encoding has. See this SO question, for example. Even the HTML5 specification says that HTML5, not XHTML5, "is the format suggested for most authors." Do I have my facts wrong? If not, why am I the only one who feels this way? Why are people choosing HTML5 over XHTML5?

    Read the article

  • Unity no longer loads in 13.04 for main user

    - by user152973
    When Ubuntu starts up, Unity fails to load (I can only see my desktop, with no Unity sidebar and no system bar in the top right). I tried the advice of "Unity does not start in Ubuntu 13.04", which recommended the following commands:

        dconf reset -f /org/compiz/
        unity --reset-icons &disown

    I ran the commands without errors and restarted the computer, but the problem persists. I am currently running GNOME. I have looked at other pages from the Google search "ubuntu unity failed to load 13.04", but the advice was similar to the above and seems to be concerned with a system upgrade from April 18, 2013. I suspect my issue is something far more recent. Please give me advice on how to restore Unity on my account, or at least figure out what the problem is. Thank you. Some information that might be relevant:

    - Unity has worked fine on 13.04 for the 6 months that I've had it, until today (November 10, 2013).
    - I have set up the update tool to update automatically when updates are available. It is very possible that the system applied some updates without my knowledge.
    - Interestingly, Unity works fine on the Guest account.
    - I have made it so the system automatically logs me in at start-up.
    - This is a personal laptop. No one else has access to it.
    - I was not doing anything with the system settings or the terminal and have not installed any new software for the past 3 days.
    - I am running the System76 native Linux laptop Ultra Lemur. I have not contacted their support yet because it seemed unlikely that this is a System76-specific error.

    Read the article

  • Multiple domains for different products?

    - by alexandertr
    I have a website with software applications. Is it good for SEO to choose one keyword-rich domain name for each of our software products, or should we stick to a single domain? From a user's perspective, I think it would be easier to remember a keyword-rich domain, as the user instantly knows what the product is for. But I have read articles saying that the latest trend in SEO is to stick to one domain for all of your products and invest in this single-domain website. Is that true? What do you advise? Should I register a separate domain for each of our products, or should I use only one single domain? Should I do a 301 redirect with .htaccess to a single domain? And what about the sitemaps? Should I register all sites in Google Webmaster Tools and post a separate sitemap for each one of them? Should my main site's sitemap include all pages, or should separate domains have their own sitemaps?

    Read the article

  • What are the solutions and tradeoffs for maintaining search result consistency in a web application?

    - by iammichael
    Consider a web application with a custom search function that must display the results in a paged manner (twenty per page with up to hundreds of thousands of total results) and the ability to drill down to individual results that maintain next/previous links to navigate through the results. Re-executing the search on each page request to get the appropriate results for that page of data can be too expensive (up to 15s per search). Also, since the underlying data can change frequently (e.g. addition of new results), re-executing could cause the next/previous functionality to result in inconsistent behavior (e.g. the same results reappearing on a later page after having been viewed on an earlier page). What options exist to ensure the search results can be viewed across multiple pages in a consistent manner, and what tradeoffs does each option have in terms of network, CPU, memory, and storage requirements? EDIT: I thought caching the query search results was an obvious necessity. The question is really asking about where to cache the result set and what tradeoffs might exist to each. For example, storing the ids of the entities in the result set on the client, or storing the IDs of the entities themselves in the users session on the web server, or in a temporary table in the database. I'm not looking specifically for a single solution as different scenarios may result in different approaches (and such a question would be more suited for stackoverflow.com rather than here), but more of a design comparison between the possible approaches.
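
    One concrete version of the "store the IDs server-side" option, sketched with PHP sessions; run_search() is a stand-in for the real 15-second query:

        <?php
        session_start();

        $query = isset($_GET['q']) ? $_GET['q'] : '';
        $page  = isset($_GET['page']) ? max(1, (int) $_GET['page']) : 1;

        // Execute the expensive search once per session and keep only the
        // matching IDs; paging then reads from this frozen snapshot, so
        // next/previous links stay consistent even as new results appear.
        if (!isset($_SESSION['results'][$query])) {
            $_SESSION['results'][$query] = run_search($query); // array of IDs
        }

        $ids     = $_SESSION['results'][$query];
        $pageIds = array_slice($ids, ($page - 1) * 20, 20); // twenty per page
        // ...fetch and render just these twenty entities...

    The cost is memory per active search (potentially hundreds of thousands of IDs in the session) and a snapshot that goes stale until the user searches again; a temporary table keyed by session ID shifts that cost to the database instead.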

    Read the article

  • IE 9: Release

    - by xamlnotes
    Yippie: IE 9's coming out March 14! http://windowsteamblog.com/ie/b/ie/archive/2011/03/09/a-more-beautiful-web-launches-on-march-14th.aspx For you guys that love other browsers, that's OK. Personally, I love IE for many reasons, such as ease of use and stability. I am cranked up to see what IE 9 does, as it was retooled from the start, so this one should be big. Also, it's bringing HTML5 support now, so we can have much richer applications. It's about time that HTML was revved to move from the old text-like stuff to a better model. More info: http://windowsteamblog.com/ie/b/ie/archive/tags/ie9/ Some glimpses here: http://windows.microsoft.com/en-US/internet-explorer/products/ie-9/features and http://www.beautyoftheweb.com/#/highlights/all-around-fast Looks like it will be much faster (with hardware support now) in many areas. Better startup and install times are high on my list of favorites too. Plus, they retooled the UI in many places; it looks a lot cleaner now: http://windows.microsoft.com/en-US/internet-explorer/products/ie-9/features/focused-on-your-websites Plus there's tons more, like changes in tab pages, a notification bar, pinned sites and so forth, plus cool integration with Windows 7.

    Read the article

  • How to downgrade Razor 3 and fix the issue that CSHTML does not work in VS 10/12

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/11/04/how-to-downgrade-razor-3-and-fix-the-issue-that.aspx A few days ago I migrated a project to MVC 4, and suddenly I saw that the MVC project's cshtml files no longer worked. The problem happened because my project was now based on the Razor 3 RC, and VS12 doesn't even support it yet (remember that the VS team will ship support in VS update 4). My migration updated it to Razor 3 (which is not tied to MVC 4; MVC 4 used the older Razor 2). So how do we fix the problem? Since VS update 4 is still in development and MVC 3 support exists in both old versions of VS (10, 12), it is better to migrate our Razor back to the old version so we can use our project in VS 10 or 12. If your project has Razor 3 and it seems that syntax highlighting doesn't work for you, I suggest you try this NuGet package: https://www.nuget.org/packages/UpgradeMvc3ToMvc4 Remember that this will not always succeed. What you need to do is delete the packages folder in your project, then open packages.config and remove all the package entries. Now run this command: PM> Install-Package UpgradeMvc3ToMvc4 If it fails, see what caused the error in the console; simply remove the offending reference and try again. Now run the project and you'll see it works. After running this you will see that the WebGrease DLL has a version number issue. Simply update it to version 1.5.2 and your project is ready to run on .NET 4. If you do bin deployment, you don't need MVC 4 installed on the server either. Remember that MVC 5 is based on .NET 4.5, which simply means you can't run it in VS10. Until VS12 update 4, MVC 5 cshtml pages will work as simple HTML pages (no syntax highlighting or IntelliSense). Thanks for reading my post.

    Read the article

  • Setting up a network between a host and guest virtual machine

    - by anonymous
    (I'm running Ubuntu Server 12.04 on VirtualBox.) I'm trying to transfer a file (scp) from my laptop to one of the directories of a virtual machine. I tried sharing folders, but that failed. I'm a bit of a networking newbie. I've looked at maybe 20-30 pages. Here's one: http://www.howtoforge.com/moving-files-between-linux-systems-with-scp I followed those steps exactly. My problem is that when I try using scp, it just hangs. I'm also not sure which network interface to configure (eth0, eth1?) in the guest OS. Another (significant?) detail is that the inet address of eth0 is 10.0.2.15 instead of something like 192.168.x.y. I've enabled the bridged adapter and the host-only adapter. Both the laptop and the guest VM have openssh-server installed. I'm not sure what to do at this point. Is there a better place to ask about this?
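
    For what it's worth, 10.0.2.15 is the address VirtualBox's NAT adapter always assigns, and the host cannot reach it directly; the host-only adapter (typically eth1 in the guest, on the 192.168.56.x network) is the one to use for scp. A sketch with placeholder names and addresses:

        # In the guest: bring up the host-only interface (eth1 here is an
        # assumption - check 'ifconfig -a') and note its 192.168.56.x address
        sudo dhclient eth1
        ifconfig eth1

        # On the laptop: copy the file to the guest's host-only address
        scp myfile.txt user@192.168.56.101:/home/user/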

    Read the article
