Search Results

Search found 18450 results on 738 pages for 'website attacks'.

Page 222/738 | < Previous Page | 218 219 220 221 222 223 224 225 226 227 228 229  | Next Page >

  • All About Search Engine Position Optimization

    Search engine optimization (SEO) is a method of improving how high your website ranks in search engine results, which in turn increases the amount of traffic, or hits, it receives. These results are produced whenever an individual types a keyword or a set of keywords into a search engine such as Yahoo! or Google. Ranking high in the list of search results matters a lot because it makes you more visible to the general public, especially to your target market. It also differentiates you from competitors who may rank low in the search results, or may not appear in the result lists at all.

    Read the article

  • MySQL in ASP.NET: Mono using VB.NET

    In a previous tutorial, titled ASP.NET Web Development and Hosting and published October 25th, you learned how to develop ASP.NET websites using the Mono Project and deploy them to your existing Linux/Apache hosting account. The example ASP.NET Mono website, http://www.dotnetdevelopment.net, did not use a database at the time that tutorial was written. In this part you will learn how to connect to and use a MySQL database from your ASP.NET Mono project website.

    Read the article

  • How to examine the speed of your code results?

    - by Goma
    Hi. Whatever your choice, be it PHP, ASP.NET, Ruby on Rails or even JSP, you know that you can develop a website to produce a specific result or perform some task in many different ways. I mean you can change your code to make it shorter (or for any other reason) while still getting the same result. In that case, how do you test which version of the code executes faster, so you can choose it and make your website faster? Do you have any tools or ideas for measuring the execution time of your code and comparing it with the execution time after you make an edit? (See the timing sketch below.)

    Read the article
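
    One straightforward way to compare two implementations is to time each with microtime(). Below is a minimal PHP sketch; the functions buildPageV1() and buildPageV2() are hypothetical placeholders for the two code variants being compared, and the iteration count is arbitrary.

        <?php
        // Average the runtime of a callable over many iterations to smooth out noise.
        function benchmark(callable $fn, int $iterations = 1000): float
        {
            $start = microtime(true);                     // high-resolution timestamp, in seconds
            for ($i = 0; $i < $iterations; $i++) {
                $fn();
            }
            return (microtime(true) - $start) / $iterations;  // average seconds per call
        }

        $v1 = benchmark('buildPageV1');   // hypothetical: original version of the code
        $v2 = benchmark('buildPageV2');   // hypothetical: shortened/edited version
        printf("v1: %.6f s/call  v2: %.6f s/call\n", $v1, $v2);

    For whole-page timing under real conditions, a profiler such as Xdebug or the browser's network timings gives a fuller picture than micro-benchmarks alone.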

  • Better control on code updates

    - by yes123
    I will briefly explain my situation. I have a website in PHP; it is powered by a custom framework plus some plug-ins made ad hoc for it. I am the only developer. Until now I have just tested any changes locally and then uploaded the PHP files via FTP. I don't feel comfortable with this anymore. The code base has grown quite a lot and I need some sort of system that helps me keep track of changes (line by line) and lets me restore an old version easily if something goes wrong. Are there any good solutions for this? Note: I have never used something like version control or Subversion because I think they are too much for this situation (I am the only developer and I just need basic features). Note 2: something with a nice web interface would be perfect, and I can pay for a good service too. So far I have found: http://beanstalkapp.com/ http://github.com/ http://www.codespaces.com/ http://codesion.com/ https://bitbucket.org/

    Read the article

  • Switching hosting from Google Apps to GoDaddy

    - by Sahir M.
    I am in a pickle here. I just bought a hosting service from GoDaddy, and I already have a domain registered with Google Apps. So I went into the domain control center in Google Apps, changed the nameservers to point to GoDaddy's, deleted all the A records, and also removed the www and main CNAME records. Then I created an A record with host "@" pointing to the IP address I got from GoDaddy (this is the server address I see under the Server category in the GoDaddy Hosting Control Center). I also see that in this domain center, domain forwarding is set to forward to my address "www.domainsite.com". Right now when I go to my website I see the error "This website is temporarily unavailable, please try again later." Does anyone know how to set this up properly, or what problem I could be having? Thanks!

    Read the article

  • What is the current legal status of magnet links?

    - by Moonwalker
    Prelude: I am developing a cloud service which could be described as Dropbox meets torrents, and as a side effect it enables distribution of arbitrary content via magnet links. A certain number of magnet links will be displayed on the main website (I will be able to remove them one by one or ban users, but no more than that). I cannot avoid magnet links without a complete rework of the overall project architecture, and either way it would hurt performance badly, probably making the service meaningless. So my question is: what should I do to avoid legal problems if my site, in a nutshell, is just a collection of magnet links? Privacy is achieved via end-user encryption, so there are almost no access restrictions on the website. Will any of that help me? Will hosting in a particular country help me?

    Read the article

  • System that splits passwords across two servers

    - by Burning the Codeigniter
    I stumbled upon this news article on the BBC, "RSA splits passwords in two to foil hackers' attacks". tl;dr: a (randomized) password is split in half and stored across two separate servers, to foil hackers who gain access to either server in a security breach. Now the main question is, how would this kind of system be built? Speaking in code terms, for PHP, which I commonly use for my web applications, the database password is normally stored in a configuration file, e.g. config.php, alongside the username; in that case it is understandable that the password can be stolen if security is compromised. However, when splitting the password and sending the other half to a second server, how would the communication with that second server work (keeping PHP in mind), since the second server's credentials would also be stored in a configuration file, wouldn't they? If the point of keeping the other half away from the main server is security, how exactly would the main server communicate without exposing any other password apart from the first server's? This certainly makes me think... (a sketch of the splitting idea follows below)

    Read the article
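
    To picture the splitting idea itself, here is a minimal PHP sketch using an XOR split: the secret is combined with a random pad, each server stores one share, and neither share alone reveals anything. This is only an illustration of the concept, not a description of RSA's actual product or of how the two servers should communicate.

        <?php
        // Split a secret into two shares: a random pad and the secret XORed with it.
        function splitSecret(string $secret): array
        {
            $pad    = random_bytes(strlen($secret)); // share to store on server A
            $masked = $secret ^ $pad;                // share to store on server B
            return [$pad, $masked];
        }

        // Recombining the two shares recovers the original secret.
        function joinSecret(string $pad, string $masked): string
        {
            return $pad ^ $masked;
        }

        [$shareA, $shareB] = splitSecret('s3cr3t-db-password');
        var_dump(joinSecret($shareA, $shareB) === 's3cr3t-db-password'); // bool(true)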

  • Tab navigation and double content

    - by Guisasso
    I have a website in which I use tabs to navigate between pages. For example, page A displays A as the active tab, with B and C as background tabs. If the visitor gets to the website via page B, I would also like to display page D, but not A and C. Question: I know I can just create, for example, an index2 for B, so that when the visitor gets to B from A I display A, B, C, and an index1 when the visitor gets to B from D. Is that bad practice? I know duplicate content isn't good, but how else can or should I approach this problem? The tab navigation I designed uses <li> elements and an id attribute to mark the active tab, defined in the <body> tag.

    Read the article

  • What is a good design model for my new class?

    - by user66662
    I am a beginning programmer who, after trying to manage over 2000 lines of procedural PHP code, has now discovered the value of OOP. I have read a few books to get up to speed on the basic theory, but would like some advice on practical application.

    So, for example, let's say there are two types of content objects: an ad and a calendar event. What my application does is scan different websites (a predefined list), and when it finds an ad or an event, it extracts the data and saves it to a database. All of my objects will share a $title and a $description. However, the Ad object will have a $price and the Event object will have a $startDate. Should I have two separate classes, one for each object? Or should I have a 'superclass' with the $title and $description, with two other Ad and Event classes holding their own properties? The latter is at least the direction I am on now (a sketch of this shape follows below).

    My second question about this design is how to handle the logic that extracts the data for $title, $description, $price, and $startDate. For each website in my predefined list, there is a specific regex that returns the desired value for each property. Currently, I have an extremely large switch statement in my constructor which determines what website I am on, sets the regex variables accordingly, and continues on. Not only that, but now I have to repeat the logic that determines what site I am on in the constructor of each class. This doesn't feel right. Should I create another class, Algorithms, and store the logic there for each site? Should the functions that handle that logic be in this class, or specific to the classes whose properties they set?

    I want to take two things into account in my design: 1) I will add different content objects in the future that share $title and $description but have their own properties, so I want to be able to easily grow these as needed. 2) I will add more websites constantly (each with its own algorithms for data extraction), so I would like to plan now for managing and working with these efficiently. I thought about extending the Ad or Event class with a 'websiteX' class and storing its functions there, but this didn't feel right either, as now I have to manage hundreds of little website-specific class files.

    Note: I didn't know if this was the correct site or whether Stack Overflow was the better choice. If so, let me know and I'll post there.

    Read the article
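
    One common shape for this is an abstract base class for the shared fields, small subclasses for the type-specific ones, and one extractor object per website so the regex logic lives outside the content classes. Below is a minimal PHP sketch; all class, property, and method names are illustrative, not taken from the question.

        <?php
        // Shared fields live in the base class; each content type adds its own.
        abstract class ContentItem
        {
            public function __construct(
                public string $title,
                public string $description
            ) {}
        }

        class Ad extends ContentItem
        {
            public function __construct(string $title, string $description, public float $price)
            {
                parent::__construct($title, $description);
            }
        }

        class Event extends ContentItem
        {
            public function __construct(string $title, string $description, public string $startDate)
            {
                parent::__construct($title, $description);
            }
        }

        // One small extractor per website keeps the site-specific regexes out of Ad/Event
        // and out of any giant switch statement.
        interface SiteExtractor
        {
            public function supports(string $url): bool;      // does this extractor handle the given site?
            public function extract(string $html): ContentItem;
        }

    Under this arrangement, adding a new website means adding one SiteExtractor implementation, and adding a new content type means adding one ContentItem subclass; neither change touches the existing classes.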

  • Is it safe to Block These URLs with Robots.txt?

    - by Edgar Quintero
    I have a website where all URLs have been optimized and 301 redirected from nasty URLs to clean ones. However, the unclean URLs are still linked everywhere throughout the site: in menus, content, products, etc. Google currently has all the clean URLs indexed, along with a few unclean URLs too. So the old URLs are still linked everywhere (ideally this wouldn't be the case, but this is how it is at the moment). I would like to block the unclean URLs with robots.txt (see the example below). The question: if I block these unclean URLs with robots.txt while the entire website still links to them (but they all redirect to the clean version), will this affect the indexing status at all?

    Read the article
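
    For reference, "blocking the unclean URLs" would mean Disallow rules roughly like the following; the URL patterns here are made up for illustration, since the actual unclean URLs are not shown in the question.

        User-agent: *
        Disallow: /index.php?page=
        Disallow: /product.php?id=

    One caveat worth keeping in mind: a URL disallowed in robots.txt is not crawled, so its 301 redirect will not be followed; an unclean URL that is already indexed can therefore linger in the index instead of being consolidated into the clean one.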

  • Getting SSL certificate for a sub-domain

    - by Hemant
    Our company owns a domain, say www.mycompany.com. I understand that it is trivial to get an SSL certificate for the above domain, since we do have a website running at that address. We want a certificate for a subdomain, say sub.mycompany.com. We intend to use this subdomain only on our organisation's network and have no plans to publish a public website on it. So the question is: is it necessary to have a DNS entry for the subdomain, resolving to our IP address, and to host some page at that address? I hope that by proving the main domain is under our control, we can get an SSL certificate for the subdomain as well. Is that possible?

    Read the article

  • Is there some application to download files from popular file hosting websites?

    - by Tim
    I was wondering if there are applications for downloading files from popular file hosting websites, automating the procedure of waiting, fetching links and downloading the files once we give the application the links. Examples of such websites are Rapidshare, Uploading, Megaupload, Filesonic, Fileserver, Hotfiles, Depositefiles and iFile, though an application need not support all of them. Thanks and regards! ADDED: I just tried slimrat. It failed to download files from Rapidshare. Could that be because the Rapidshare website has changed recently and slimrat's parsing functionality for it is not up to date yet?

    Read the article

  • Google Webmaster Tools Index dropped to Zero [closed]

    - by Brian Anderson
    Earlier this year I rebuilt my website using ZenCart. Immediately I saw a drop in index status from 59 to 0. I then signed up for Google Webmaster Tools and noticed the index status took a dramatic drop and has never recovered. I have worked to add content, and I know I am not done, but I have not seen any recovery of this index since. What confuses me is that when I look at the sitemap status under Optimization, it shows 1239 pages submitted and 1127 pages indexed. Most of my pages have fallen off page one for relevant search terms, and some are as far back as page 7 or 8 where they used to be on the first page. I have made some changes to robots.txt and sitemap.xml in the past week, but have not seen any improvement. Can anyone tell me what might be going on here? My website is andersonpens.net. Thanks! Brian

    Read the article

  • High Traffic Web Host Solution? [duplicate]

    - by Calsy
    Possible Duplicate: How to find web hosting that meets my requirements? I'm currently shopping around for a web host for a website we are hoping to release in the near future. This is my first real step into this area, so I'm just wondering what I should be looking for. It is an ASP.NET MVC website with an MS SQL Server backend. I need to know that the server will not buckle if traffic booms. Currently I'm looking at a managed dedicated server from SingleHop. Does anyone know of anything better, or have any advice?

    Read the article

  • Broken Links in different browsers

    - by kdorival
    Hi, I'm having problems with our website, http://www.accessiblehomehealthcare.com, which runs WordPress 2.7. All of a sudden our RSS links broke on the right side, which has happened before, and I fixed it within 5 minutes. Now, when I fix it, it doesn't look right in different versions of IE or Firefox. I have IE 8 and Firefox 3.6.15 and it looks good for the most part, but there are a few places where the links are broken. In one browser the links look fine, but go to another page and the links or logos are broken. Certain parts of the website should be static (identical) across the pages of the site, yet a link that is broken on one page is perfect on another. I was wondering whether there is a secret trick to keep a WordPress site compatible with all browser versions, or is there a bigger issue? Any help or suggestions would be appreciated.

    Read the article

  • Is this safe? <a href=http://javascript:...>

    - by KajMagnus
    I wonder if href and src attributes on <a> and <img> tags are always safe with respect to XSS attacks if they start with http:// or https://. For example, is it possible to append javascript: ... to the href or src attribute in some manner, to execute code? This is disregarding whether the destination page is, say, a phishing site, or whether the <img src=...> triggers a terribly troublesome HTTP GET request. Background: I'm processing text with Markdown and then sanitizing the resulting HTML (using Google Caja's JsHtmlSanitizer). Some sample code in Google Caja assumes all hrefs and srcs that start with http:// or https:// are safe; I wonder if it's safe to use that sample code (see the sketch below). Kind regards, Kaj-Magnus

    Read the article
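
    The kind of check this boils down to is a scheme whitelist: only http and https URLs are allowed through, so a javascript: (or data:, vbscript:) URL never reaches the attribute. Here is a minimal PHP sketch of the idea, not Google Caja's actual implementation.

        <?php
        // Allow only http/https hrefs; everything else is rejected.
        function isSafeHref(string $url): bool
        {
            $scheme = parse_url(trim($url), PHP_URL_SCHEME);
            return in_array(strtolower((string) $scheme), ['http', 'https'], true);
        }

        var_dump(isSafeHref('https://example.com/page'));  // bool(true)
        var_dump(isSafeHref('javascript:alert(1)'));        // bool(false)

    A value that literally begins with http:// or https:// has its scheme fixed by that prefix, so javascript: cannot be smuggled in as the scheme; the remaining concern is attribute escaping, i.e. the value still has to be HTML-encoded when it is written into the href attribute.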

  • How can I edit (in MS Expression Web ) FrontPage Site Parameters (Substitutions)?

    - by Clay Nichols
    Or, asked another way: where are the values for MS FrontPage substitutions (site parameters) stored, so that I can edit them in Expression Web? Background: I'm ashamed to admit that I've been maintaining our company's website in MS FrontPage for over 9 years. I'm moving it to Expression Web, which will display the substitutions (stored as site parameters), but I can't figure out where to edit them. I tried searching the website's source folders (on my development PC) for the name of the parameter (s-Variable=hoursOfOperation) but did not find it, other than in the files it was actually used in.

    Read the article

  • wordpress hosting uk help [closed]

    - by Neenee Kale
    Hi, I am planning to develop a website (a student information system) for my final year project. I am going to use WordPress, and I am a beginner, so I just found out I have to purchase hosting if I am not going to use wordpress.com as my host, which I don't want to do, as there are lots of limitations if I want to build a website that way. I therefore want to purchase my own hosting, something cheap paid for by the year; the most I will pay is 50 pounds. Could someone please recommend a very good UK-based WordPress host which will allow me to build an information system where people will be able to log in and enter their details, etc.? I have researched many hosts, but I need someone to tell me which features are important for building an information system like this. I am a beginner with WordPress, so I don't know much about hosting.

    Read the article

  • Beta Opportunity Soon to Close on "Oracle Database 12c: Installation and Administration" (1Z0-062)

    - by Brandye Barrington
    This is your last opportunity to take the "Oracle Database 12c: Installation and Administration" (1Z1-062) beta exam at the greatly discounted rate of $50 USD, as well as be one of the first to hold this new OCA certification. Earning the Oracle Database 12c Administrator Certified Associate credential gives you the foundational knowledge and skills needed to administer the Oracle Database and sets the stage for your future progression to Oracle Database 12c Administrator Certified Professional. You can get all preparation details, including exam objectives, number of questions, time allotments, and pricing on the Oracle Certification website. Visit pearsonvue.com/oracle and register for exam 1Z1-062.

    QUICK LINKS:
    Certification Track: Oracle Database 12c Administrator Certified Associate
    Certification Exam: "Oracle Database 12c: Installation and Administration" (1Z1-062)
    Recommended Training: Oracle Database 12c: Admin, Install and Upgrade Accelerated; Oracle Database 12c: Install and Upgrade Workshop; or Oracle Database 12c: Administration Workshop
    Certification Website: About Beta Exams
    Register Now: Pearson VUE

    Read the article

  • Web stalker has purchased a domain name that uses my personal name, web page is defamatory [closed]

    - by Deborah Morse-Kahn
    We have been unsuccessful in persuading a stalker's website host to release the domain name he purchased, which is my own personal name, e.g. PERSONALNAME.com. You will find my name below in the signature area. Look for yourself. The one page that this domain name leads to contains dreadful and defamatory material. No attorney has felt it worth their time to chase this issue down, and we cannot afford to go to a national or international dispute resolution group to bring this issue to WHOIS. Worse, the stalker is amoral and a psychopath: he would just love the attention. We've even considered trying to find someone to illegally hack into the webpage, to at least redirect the domain pointers to my own professional website. This issue has continued for two years now and is affecting my professional reputation as potential clients look for me online. Is there any remedy? Your help and advice would be greatly welcomed.

    Read the article

  • Legality of copying information from a Wikia? [closed]

    - by Sergio Tapia
    I'm making a website that will act as a compendium for a video game. I noticed that a page on Wikia has a ton of useful content, such as background stories and trivia for many of the "entities" in the game. According to their site: "The text on Wikia sites is licensed under the Creative Commons Attribution-Share Alike License 3.0 (Unported) (CC-BY-SA)." I'm not a lawyer, so what is the legality of copying and pasting the information written there collaboratively by the hivemind into my website? If needed, I would not be opposed to pasting the content and providing a source link at the end of each paragraph. Can I paste the content into my site? Thanks for the help!

    Read the article

  • Ecommerce item deleted by user: 301 redirect to HOME PAGE or 404 Not Found?

    - by Marco Demaio
    I know this question is somewhat similar to this one, where they recommend using a 404, but after reading this other one, where they suggest using a 301 when changing site URLs (in that specific case due to a redesign/refactoring), I got a bit confused, and I hope someone can clarify this specific example:

    1. Let's say I have an ecommerce site.
    2. The final user inserted some interesting items into the site, and the ecommerce webapp created the item pages at the URLs http://...?id=20, http://...?id=30, etc.
    3. Some of these items got many external links pointing at them from other sites, because people found them very interesting and linked to them.
    4. After some years the final user deletes those items, so obviously the pages/URLs http://...?id=20, http://...?id=30, etc. no longer exist, but many pages on the web still link to them.

    What should the ecommerce site do now, just show a 404 page for those items? But, I'm confused: wouldn't this lose all the Google PR passed by the external links to the item pages? So isn't it better to use a 301 redirect to the HOME PAGE, which at least passes the PR to the HOME PAGE? Thanks. (A sketch of the 301/410 handling follows below.)

    EDIT: Well, according to the answers, the best thing to do so far is a 404/410. To make this question more complete, I would like to talk about a special case, just to make sure I understood properly. Let's say the user creates those items again (the ones he previously deleted at point 4). Maybe he changes their names and descriptions a bit, but they are basically the same items. The webapp has no way to know these newly added items are the old items, so it obviously creates them as new items with new URLs, http://...?id=100, http://...?id=101. Does it make sense at this point to 301 redirect the old URLs to the new ones?

    MORE EDIT (it would be VERY IMPORTANT TO UNDERSTAND): Well, according to the clever answers received so far, it seems that for the special case explained in my last EDIT I could use a 301, since it is not deceptive: the new page is basically a replacement for the old page in terms of content. This is done to keep the PR passed from external links and also for a better user experience. But besides the user experience, which is debatable (*1), in order to preserve PR from external broken links why not just always use a 301? In my understanding Google dislikes duplicated content, but are we sure that a 301 redirect to the HOME PAGE is seen as duplicated content by Google? Google itself suggests 301 redirecting index.html to the document root, so if they considered a 301 to be duplicated content, wouldn't that be duplicated content too? Why do they suggest it? Let me provoke you: why not just add a 301 to the HOME PAGE for every not-found page?

    (*1) As a user, when I follow a broken URL from some external link to a website's page, I would stick around on that website more if I got redirected to the HOME PAGE rather than seeing a 404 page, where I would think the website doesn't even exist anymore and might not even try to go to its HOME PAGE.

    Read the article
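
    If the takeaway is to serve 410 Gone (or 404) for items that are gone for good, and a 301 only when a genuine replacement page exists, the handler could look roughly like the following PHP sketch. The functions findItemById(), findReplacementUrl() and the render helpers are hypothetical placeholders for whatever the webapp actually uses.

        <?php
        // $id comes from the request, e.g. ?id=20
        $id   = (int) ($_GET['id'] ?? 0);
        $item = findItemById($id);                      // placeholder: look up the item

        if ($item !== null) {
            renderItemPage($item);                      // item still exists: serve it normally
        } elseif (($newUrl = findReplacementUrl($id)) !== null) {
            // A genuine successor page exists: permanently redirect and pass link equity to it.
            header('Location: ' . $newUrl, true, 301);
            exit;
        } else {
            // Item is gone for good: say so explicitly instead of redirecting to the home page.
            http_response_code(410);
            renderGonePage();                           // placeholder: a helpful "item removed" page
        }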

  • Why is it better to use external JavaScript or libraries, and is it preferred to use jQuery, meaning more security?

    - by shareef
    I read the article Unobtrusive JavaScript with jQuery and noticed these points on slide 11:

    1. some companies strip JavaScript at the firewall
    2. some run the NoScript Firefox extension to protect themselves from common XSS and CSRF attacks
    3. many mobile devices ignore JavaScript entirely
    4. screen readers do execute JavaScript, but accessibility issues mean you may not want them to

    I did not understand the fourth point. What does it mean? I need your comments and responses on these points. Is not using JavaScript, and switching to libraries like jQuery, worth it? UPDATE 1: What is the meaning of "Unobtrusive JavaScript with jQuery"? And yes, the article does not say we should use libraries, but that we should keep our JavaScript in external files; that is the reason I asked my question.

    Read the article
