Search Results

Search found 14037 results on 562 pages for 'alter index'.

  • Google Analytics - bad experiences? (esp. adult content)

    - by Litso
    Hello all, I work for a rather large adult website, and we're currently not using Google Analytics. There is an internal debate about whether we should start using Analytics, but there is hesitation from certain parties. The main argument is that they fear Google would gain too much insight into our website and might even drop us from its index because of our adult content. Has anyone here had such an experience, or heard stories of Google Analytics backfiring in this way? I personally think it would only improve our website if we were able to use Analytics, but the dev team was asked to look into possible negative effects. Any help would be appreciated.

  • Software Architecture Analysis Method (SAAM)

    Software Architecture Analysis Method (SAAM) is a methodology used to determine how specific application quality attributes were achieved, and how possible future changes will affect those attributes, based on hypothetical case studies. Common quality attributes evaluated with this methodology include modifiability, robustness, portability, and extensibility.

    Quality Attribute: Application Modifiability
    The modifiability quality attribute refers to how easy changing the system in the future will be. To me this is a very open-ended attribute, because a business could decide to transform a Point of Sale (POS) system into a Lead Tracking system overnight. (Yes, this did actually happen to me.) For SAAM to be properly applied to this attribute, specific hypothetical case studies need to be created and reviewed for modifiability, because different scenarios return different results depending on the amount of change involved. In the POS case, swapping out a payment gateway or adding an additional payment method would have scored very highly compared with changing the system over to a lead management system. I would personally evaluate this quality attribute against the S.O.L.I.D. principles of software design; I have found from my experience that the use of S.O.L.I.D. in software design allows a system to absorb changes.

    Quality Attribute: Application Robustness
    The robustness quality attribute refers to how an application handles the unexpected, where "the unexpected" includes, but is not limited to, anything not anticipated in the original design of the system: bad data, limited or no network connectivity, invalid permissions, or any unexpected application exception. I would personally evaluate this quality attribute based on how the system handles exceptions.
    Robustness considerations:
    - Did the system stop, or did it handle the unexpected error?
    - Did the system log the unexpected error for future debugging?
    - What message did the user receive about the error?

    Quality Attribute: Application Portability
    The portability quality attribute refers to the ease of porting an application to run on a new operating system or device. For example, it is much easier to alter an ASP.NET website to be accessible from a PC, Mac, iPhone, Android phone, mini PC, or tablet than to port a desktop application written in VB.NET, because far more work is involved before the desktop app is viable across those environments and devices. I would personally evaluate this quality attribute against each new environment that the hypothetical case study identifies, paying particular attention to the following items.
    Portability considerations:
    - Hardware dependencies
    - Operating system dependencies
    - Data source dependencies
    - Network dependencies and availability

    Quality Attribute: Application Extensibility
    The extensibility quality attribute refers to the ease of adding new features to an existing application without impacting existing functionality. I would personally evaluate this quality attribute for each new environment against the following.
    Extensibility considerations:
    - Hard-coded variables versus configurable variables
    - Application documentation (external documents and codebase documentation)
    - The use of S.O.L.I.D. design principles

  • How to cleanly add after-the-fact commits from the same feature into git tree

    - by Dennis
    I am one of two developers on a system, and I make most of the commits at this time. My current git workflow is as follows: there is a master branch only (no develop/release); I make a new branch when I want to do a feature, do lots of commits, and then when I'm done I merge that branch back into master and usually push it to the remote. ...except I am usually not done. I often come back to alter one thing or another, and every time I think it is done, but it can be 3-4 commits before I am really done and move on to something else.

    Problem
    My feature branch tree is merged and pushed into master and remote master, and then I realize I am not really done with that feature: I have finishing touches I want to add, where the finishing touches may be cosmetic only or may be significant, but they still belong to the one feature I just worked on.

    What I do now
    Currently, when I have extra after-the-fact commits like this, I solve the problem by rolling back my merge and re-merging my feature branch into master with the new commits, so that the git tree looks clean: one clean feature branch branched out of master and merged back into it. I then push --force my changes to origin (a sketch of this rollback-and-remerge is below). My origin doesn't see much traffic at the moment, so I can almost count on things being safe, and I can talk to the other dev if I have to coordinate. But I know this is not a good way to do it in general, as it rewrites history that others may have already pulled, causing potential issues. That has already happened even with my one fellow dev, where git had to do an extra odd merge when our trees diverged.

    Other ways to solve this, which I deem not so great
    The next best way is to just make those extra commits on the master branch directly, be it a fast-forward merge or not. It doesn't make the tree look as pretty as my current approach, but it doesn't rewrite history either. Yet another way is to wait, maybe 24 hours, before pushing to origin; that way I can rewrite things as I see fit, at the cost of time wasted waiting when people may be waiting for a fix now. Yet another way is to make a "new" feature branch every time I realize I need to fix something extra; I may end up with names like feature-branch, feature-branch-html-fix, feature-branch-checkbox-fix, and so on, polluting the git tree somewhat.

    Is there a way to manage what I am trying to do without the drawbacks I described? I'm going for clean-looking history here, but maybe I need to drop that goal if it is technically not possible.
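
    For reference, a minimal sketch of that rollback-and-remerge, assuming the merge commit is still the tip of master and using hypothetical branch names (--force-with-lease needs git 1.8.5 or newer; plain --force otherwise):

        # drop the premature merge commit from the tip of master
        git checkout master
        git reset --hard HEAD~1

        # add the finishing touches on the feature branch
        git checkout feature-branch
        git commit -am "finishing touches"

        # re-merge and push; --force-with-lease refuses to clobber
        # commits someone else pushed in the meantime
        git checkout master
        git merge --no-ff feature-branch
        git push --force-with-lease origin master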

  • 11.10 - Update Manager Not working

    - by Mattlinux1
    W:Failed to fetch cdrom://Ubuntu 11.10 Oneiric Ocelot - Release i386 (20111012)/dists/oneiric/main/binary-i386/Packages Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
    W:Failed to fetch cdrom://Ubuntu 11.10 Oneiric Ocelot - Release i386 (20111012)/dists/oneiric/restricted/binary-i386/Packages Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
    E:Some index files failed to download. They have been ignored, or old ones used instead.
    This happens when I hit the Check button, and the updates were working before. (A common fix is sketched below.)
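
    A hedged sketch of the usual fix, assuming the CD-ROM entries are simply left over from installation and no longer wanted: comment them out of the APT sources and refresh.

        # comment out every cdrom: line in the APT sources, then refresh
        sudo sed -i '/^deb cdrom:/s/^/# /' /etc/apt/sources.list
        sudo apt-get update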

  • Will a rel=canonical link pointing to a 301 redirect pass less pagerank than one without a 301?

    - by tobek
    On this official Google page about canonical links it says: Can rel="canonical" be a redirect? Yes, you can specify a URL that redirects as a canonical URL. Google will then process the redirect as usual and try to index it. There is no mention that this might dilute the impact of the canonical link. However, Google has made clear elsewhere that 301 redirects do dilute PageRank - roughly as much as a link dilutes PageRank. Is that relevant here? I'm assuming the answer is "no" but I wanted to confirm. Relevant but not duplicate: Does Rel=Canonical Pass PR from Links or Just Fix Dup Content.

  • Failed to download repository information

    - by Bob Van Elst
    When I click Check, this error message comes up; it does not appear when Update Manager starts automatically, only when I open Update Manager myself. Any ideas on how to fix it? Details:
    W:GPG error: http://ppa.launchpad.net precise Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY FC8CA6FE7B1FEC7C
    W:Failed to fetch http://ppa.launchpad.net/jonabeck/ppa/ubuntu/dists/precise/main/binary-amd64/Packages 404 Not Found
    W:Failed to fetch http://ppa.launchpad.net/jonabeck/ppa/ubuntu/dists/precise/main/binary-i386/Packages 404 Not Found
    W:Failed to fetch http://ppa.launchpad.net/jonabeck/ppa/ubuntu/dists/precise/main/source/Sources 404 Not Found
    E:Some index files failed to download. They have been ignored, or old ones used instead.
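
    A hedged sketch of the usual two-part fix: import the missing signing key named in the GPG warning, and remove the PPA whose package lists now return 404 (ppa:jonabeck/ppa is inferred from the URLs above; add-apt-repository --remove needs Ubuntu 12.04 or newer).

        # import the public key the warning names
        sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys FC8CA6FE7B1FEC7C
        # drop the dead PPA, then refresh the package lists
        sudo add-apt-repository --remove ppa:jonabeck/ppa
        sudo apt-get update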

  • What is the proper way to create a cross-fade effect? [closed]

    - by Starx
    When creating an image slider, a cross-fade is one of the more popular effects, and different sliders use different techniques to achieve it. Two techniques I've found so far are:
    - Use an overlay and an underlay <div> and fade each one's visibility in and out.
    - Create a <div> matching the exact size of the slider during initialization, play with its z-index property, and fade between the two.
    Is there a better way to create this effect?

  • Redirect packages directed to port 5000 to another port

    - by tdc
    I'm trying to use eboard to connect to the FICS servers (http://www.freechess.org), but it fails because port 5000 is blocked (company firewall). However, I can connect to the server through the telnet port (23):
    telnet freechess.org 23 (succeeds)
    telnet freechess.org 5000 (fails)
    Unfortunately the port number is hardcoded (see here: http://ubuntuforums.org/archive/index.php/t-1613075.html), and I'd rather not hack the source code as the author of that thread ended up doing. Can I just forward the port on my local machine using iptables? I tried:
    sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 5000 -j REDIRECT --to-port 23
    and
    sudo iptables -t nat -I OUTPUT --src 0/0 -p tcp --dport 5000 -j REDIRECT --to-ports 23
    but neither worked. Note that $ sudo iptables -t nat -L shows:
    Chain PREROUTING (policy ACCEPT)
    target     prot opt source     destination
    REDIRECT   tcp  --  anywhere   anywhere     tcp dpt:5000 redir ports 23
    Chain INPUT (policy ACCEPT)
    target     prot opt source     destination
    Chain OUTPUT (policy ACCEPT)
    target     prot opt source     destination
    REDIRECT   tcp  --  anywhere   anywhere     tcp dpt:5000 redir ports 23
    Chain POSTROUTING (policy ACCEPT)
    target     prot opt source     destination
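
    One possible reason both rules miss: locally generated traffic never traverses PREROUTING, and REDIRECT points the connection at port 23 on the local machine rather than on freechess.org. A hedged sketch using DNAT in the OUTPUT chain instead, which rewrites only the destination port while keeping the remote address (the address lookup shown is illustrative; any equivalent works):

        # find the server's address, then rewrite only the port for local traffic
        FICS_IP=$(host -t a freechess.org | awk '/has address/ {print $4; exit}')
        sudo iptables -t nat -A OUTPUT -p tcp -d "$FICS_IP" --dport 5000 -j DNAT --to-destination :23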

  • Capture a Query Executed By An Application Or User Against a SQL Server Database in Less Than a Minute

    - by Compudicted
    At times a database administrator, or even a developer, is required to wear a spy's hat. This necessity is often dictated by the need to take a glimpse into a black-box application, for reasons varying from a performance issue to unauthorized access to data or resources or, as in my most recent case, a closed-source custom application abandoned by a departed contractor, without source code. It may not be news to most IT people that SQL Server has always provided a means of back-door access to everything connecting to its database. This indispensable tool is SQL Server Profiler. This "gem" sits quietly in the Start – Programs – SQL Server <product version> – Performance Tools folder (yes, it is mostly for performance analysis, but not limited to it), ready to help you.

    So, to the action: let's start it up. Once it is ready, click File – New Trace, or press Ctrl-N. The standard connection dialog you have seen in SSMS comes up, and you connect the standard way. One side note here: you will be able to connect only if your account belongs to the sysadmin fixed server role or holds the ALTER TRACE permission. Upon a successful connection you will see the initial trace-properties dialog, which offers a wide variety of predefined templates; to shorten your time to results, opt for the TSQL_Grouped template.

    Now you need to set it up. In some cases you will know in advance the principal's login name (account) that needs to be monitored, and in some (like mine) you will not. Either way, it is VERY helpful to monitor just one particular account, to minimize the amount of results returned. If you know it, go to the Events Selection tab and click the Column Filters button, which brings up a dialog where you key in the account being monitored, without any mask (or wildcard). If you do not know the principal name, poke around for things like a config file where (typically!) the connection string is fully exposed. That was the case in my situation: the application had an app.config (XML) file with the connection string in it, unencrypted, which made my endeavor very easy. So after I entered the account to monitor, I clicked the Run button and started my black-box application. Voilà: in under a minute I had the SQL statement captured.

  • Kooboo CMS 2.1.1.0 released

    New features:
    - New API RssUrl to generate an RSS link; this is an extension to UrlHelper.
    - Attachment content can now be indexed and searched with the Lucene full-text search engine; some attachment types require an IFilter component from Microsoft. Supported file attachments include: .docx, .docm, .pptx, .pptm, .xlsx, .xlsm, .xlsb, .zip, .one, .vdx, .vsd, .vss, .vst, .vsx, and .vtx. Download and install the IFilter from: http://www.microsoft.com/downloads/details.aspx?FamilyId=60C92A37-719C-4077-B5C6-CAC34F4227CC&displaylang=en

  • What measures can be taken to make sure Google is aware of the existence of a newly created page?

    - by knorv
    Consider a website with a large number of pages, where new pages are published regularly. When publishing a new page, the website operator wants the newly created page indexed in Google as soon as possible, minimizing the time between publication and indexing. Consider the site http://www.example.com/ with hundreds of thousands of pages. The page http://www.example.com/something/important-page.html is created at, say, 12:00. I want important-page.html indexed as soon as possible after 12:00, ideally within seconds or minutes. What options are available to get Google to index a specific newly created page as soon as possible?
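
    One commonly used lever, sketched with a hypothetical sitemap URL: list the new page in an XML sitemap (ideally with an accurate lastmod) and ping Google the moment it is published, using the sitemap ping endpoint Google documented at the time:

        # tell Google the sitemap listing important-page.html has changed
        curl "http://www.google.com/ping?sitemap=http://www.example.com/sitemap.xml"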

  • T-SQL Tuesday: What kind of Bookmark are you using?

    - by Kalen Delaney
    I’m glad there is no minimum length requirement for T-SQL Tuesday blog posts, because this one will be short. I was in the classroom for almost 11 hours today, and I need to be back tomorrow morning at 7:30. Way back in SQL 2000 (or was it earlier?), when a query indicated that SQL Server was going to use a nonclustered index to get row pointers and then look up those rows in the underlying table, the plan just had a very linear look to it. The operator that indicated going from the nonclustered...(read more)

  • Join Us at Oracle OpenWorld Latin America (Dec 4-6)

    - by Zeynep Koch
    Hello to all Latin Americans! Oracle OpenWorld Latin America starts tomorrow. Oracle Linux will be showcased in several sessions and in the exhibition area. Here are some of the links and details for our sessions.
    Session schedules: http://www.oracle.com/openworld/lad-en/session-schedule/index.html
    Oracle Linux sessions:
    - New Features in Oracle Linux: A Technical Deep Dive, Dec 4, 13:30–14:30, Mezzanine Room 7
    - Oracle Linux Strategy and Roadmap, Dec 4, 17:15–18:15, Mezzanine Room 5
    Oracle OpenWorld Latin America exhibition hall hours:
    - Tuesday, December 4: 12:00–19:30; 18:15–19:30 (Dedicated Hours)
    - Wednesday, December 5: 11:00–19:30; 18:30–19:30 (Dedicated Hours)
    - Thursday, December 6: 11:00–19:00; 17:45–19:00 (Dedicated Hours)
    We will also hand out the following at our booth, so don't forget to visit us:
    - Oracle Linux and Oracle VM DVD kit
    - Server Virtualization for Dummies
    See you there :)

  • not getting updates

    - by gknarayana
    When I check for updates, the message is:
    W:Failed to fetch cdrom://Ubuntu 12.04 LTS _Precise Pangolin_ - Release i386 (20120423)/dists/precise/main/binary-i386/Packages Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
    W:Failed to fetch cdrom://Ubuntu 12.04 LTS _Precise Pangolin_ - Release i386 (20120423)/dists/precise/restricted/binary-i386/Packages Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
    W:Failed to fetch cdrom://Ubuntu 11.10 _Oneiric Ocelot_ - Release i386 (20111012)/dists/oneiric/main/binary-i386/Packages Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
    W:Failed to fetch cdrom://Ubuntu 11.10 _Oneiric Ocelot_ - Release i386 (20111012)/dists/oneiric/restricted/binary-i386/Packages Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
    E:Some index files failed to download. They have been ignored, or old ones used instead.
    Please suggest what I should do.

  • "Unable to connect" to getdeb.net, how do I fix it?

    - by Nirmik
    I want to know what this error means and how to fix it. The following is the output on the terminal:
    W: Failed to fetch http://archive.getdeb.net/ubuntu/dists/precise-getdeb/Release.gpg Unable to connect to archive.getdeb.net:http:
    W: Failed to fetch http://archive.getdeb.net/ubuntu/dists/precise-getdeb/apps/binary-i386/Packages Unable to connect to archive.getdeb.net:http:
    W: Failed to fetch http://archive.getdeb.net/ubuntu/dists/precise-getdeb/apps/i18n/Translation-en_IN Unable to connect to archive.getdeb.net:http:
    W: Failed to fetch http://archive.getdeb.net/ubuntu/dists/precise-getdeb/apps/i18n/Translation-en Unable to connect to archive.getdeb.net:http:
    E: Some index files failed to download. They have been ignored, or old ones used instead.
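
    Since archive.getdeb.net is simply unreachable, a hedged workaround is to disable that repository until the mirror is back. The file name under sources.list.d is an assumption; check which file actually carries the getdeb lines:

        # comment out every getdeb line, then refresh the package lists
        sudo sed -i '/getdeb/ s/^deb/# deb/' /etc/apt/sources.list.d/getdeb.list
        sudo apt-get update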

  • is it ok to have 2 sitemaps on 1 website?

    - by user615041
    Do I have to have a sitemap link on my index page for bots to read it, or can I just have it anywhere on my server? I have a phpBB/WordPress integration and I need two sitemap mods, one for each (or I need them somehow integrated together into one XML sitemap). Is this possible? What's my best option? I would have the phpBB one at something like http://www.example.com/phpbb/sitemap.html and the WordPress one at something like http://www.example.com/wordpress/sitemap.html, and then I would submit both, but not put the links in my footer where they would confuse anyone; the sitemaps would be strictly for search engines (a robots.txt sketch for this is below). Is this a good idea? What are your thoughts?
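
    A minimal sketch, assuming the usual XML variants of each sitemap: the Sitemap directive in robots.txt may appear more than once and accepts files anywhere on the host, so neither sitemap needs a visible link on the site.

        # append two Sitemap directives to the site's robots.txt (paths assumed)
        printf '%s\n' 'Sitemap: http://www.example.com/phpbb/sitemap.xml' \
                      'Sitemap: http://www.example.com/wordpress/sitemap.xml' >> robots.txt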

  • Hosting a magnet link site which could possibly infringe copyrighted material?

    - by Griff
    I have spent the last 3 months building a crawler, an indexer, and a lot of other things for what started out as a home project for indexing magnet links on the internet. As the project grew, I have thought about releasing my collected data (which at the minute sits on a public domain, but with no access) to the public. Whatever the crawler sucks in goes in, and whatever the indexer decides to index gets indexed; it is a fully automated process. My question is as follows: considering that most of the data collected by what I have built points to illegal copyrighted material (as most magnet links do), where would it be best to host such a site? I notice all of the already-public torrent sites are hosted in India; is this because their laws on copyright infringement are less strict? Have any of you hosted such a site, and if so, what problems have you run into? And, as always, any advice on being a webmaster for this type of website?

  • Git doesn't sync files until committed, even if checked out in a different branch

    - by DertWaiter
    Okay, I have git 1.7.11.1 on Windows, and I have a local test repository with two branches. One is master, with index.php and help.php. I then create another branch called slave :) From git bash I run rm help.php and it disappears from the folder, but I don't stage anything. I then check out the master branch, which is supposed to restore help.php because it is not modified in the master branch, isn't it? But it does not. When I go back to the slave branch, commit, and then check out master, help.php appears. Is that the way it is supposed to work? Why?
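
    That is the expected behavior: the deletion was never staged or committed, and git deliberately carries uncommitted working-tree changes across a branch switch (the file is identical in both branches, so the checkout succeeds and leaves your change alone). A minimal sketch of bringing the file back on master:

        git checkout master
        git status                 # help.php shows as deleted, but unstaged
        git checkout -- help.php   # restore it from the current HEAD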

  • How to direct a Network Solutions domain name to an html website hosted on Google Drive? [on hold]

    - by Air Conditioner
    To begin with, I wanted to take advantage of HTML, CSS, and so on to build a website that looks and works just as I'd like it to. I looked around for ways to make that work, and soon found a Lifehacker article showing that it's possible to host website files on Google Drive. I made sure that the folder containing the files was shared publicly on the web, and I now have a working Google-Drive-hosted address for the website. However, I wanted a custom domain, so I registered one with Network Solutions. Now I'm curious how I should direct my Network Solutions domain to the index.html I'm hosting on Google Drive. Does anyone have an idea?

  • How to recommend that Google indexes some keywords?

    - by Werewolf
    I've read many articles about SEO and have tried to implement my knowledge on a site, but I haven't gotten good results in 6 months. For example, I've used Google Webmaster Tools, sitemaps, title tags, keywords in paragraphs, etc. My Alexa rank is growing, but Google has picked up on keywords that aren't my goal :-(. Is there a good way to focus on a keyword for search engines? How can I steer Google toward indexing the keywords I want? (They are present in my pages.)

  • Flash site loads slowly

    - by bogdanvursu
    I have a simple HTML page that embeds an SWF, which in turn downloads other XML, SWF, and image files; the total count of requests reaches about 90. I am aware that it should take a while until the content is available, and I am OK with that. All the needed files are hosted by two different providers in the US: flashxml.net/monochrome-demo.html and u1.flashcomponents.net/samples/8751/index.html. From two different countries in Europe, the content shows up much later (almost twice as late) from flashxml than from flashcomponents. I've run mtr tests: the ping difference is about 40 ms, and the flashxml server load is below 1. Do you have any other suggestions as to what I should look at?
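
    For comparing the two providers further, a hedged sketch of a repeatable measurement (option names as in current Linux builds of mtr):

        # 20-cycle summaries to each host; compare average latency and loss
        mtr --report --report-cycles 20 flashxml.net
        mtr --report --report-cycles 20 u1.flashcomponents.net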

  • Update information outdated, "Failed to fetch cdrom"

    - by user285603
    I have a warning triangle at the top of my screen. When I click on it, it says that my update information is outdated. When I type sudo apt-get update && sudo apt-get upgrade into a terminal, I get this message:
    W: Failed to fetch cdrom://Ubuntu 14.04 LTS _Trusty Tahr_ - Release i386 (20140417)/dists/trusty/main/binary-i386/Packages Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
    W: Failed to fetch cdrom://Ubuntu 14.04 LTS _Trusty Tahr_ - Release i386 (20140417)/dists/trusty/restricted/binary-i386/Packages Please use apt-cdrom to make this CD-ROM recognized by APT. apt-get update cannot be used to add new CD-ROMs
    E: Some index files failed to download. They have been ignored, or old ones used instead.
    Any ideas?

  • Wordpress Multisite Network installation and dev questions

    - by Daitya
    Please go easy on me; I'm a klutzy dinosaur. I currently have a large, unwieldy website hand-coded in HTML/CSS with PHP includes, with a single WordPress installation in a subdirectory. The plan is to reorganize: I want to use WordPress as the CMS and incorporate three WordPress blogs for three subdomains. Ideally, I would like to create a WordPress multisite network to allow for further expansion and to save admin trouble. I just want to confirm: if I install WordPress in the root directory and create three blogs (in subdomains), does this mean my website's home page is the mother blog's index.php? Essentially, I will have created four blogs, the mother at the root and three children in subdomains? And how do I set this up on my Mac (OS X 10.5.8) running MAMP for development, and then migrate to the server without breaking anything?
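
    One MAMP-specific detail, sketched under the assumption of a subdomain-based network with hypothetical names: the Mac itself needs hosts entries so the development subdomains resolve to the local server.

        # map the development domains to the local MAMP web server
        sudo sh -c 'echo "127.0.0.1 example.test blog1.example.test blog2.example.test" >> /etc/hosts'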

  • install and update issue

    - by Newben
    I get some error messages as soon as I try to install or update packages:
    ...
    W: Failed to fetch http://fr.archive.ubuntu.com/ubuntu/dists/precise-backports/universe/i18n/Translation-en_US Something wicked happened resolving 'fr.archive.ubuntu.com:http' (-5 - No address associated with hostname)
    W: Failed to fetch http://fr.archive.ubuntu.com/ubuntu/dists/precise-backports/universe/i18n/Translation-en Something wicked happened resolving 'fr.archive.ubuntu.com:http' (-5 - No address associated with hostname)
    E: Some index files failed to download. They have been ignored, or old ones used instead.
    ...
    I tried googling but didn't find a satisfactory answer. Does anybody have an idea?
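
    "Something wicked happened resolving" is APT reporting a DNS failure, so a first hedged step is to confirm that the mirror's hostname resolves at all; if it does not, the machine's resolver configuration is the place to look:

        # does the mirror's name resolve?
        nslookup fr.archive.ubuntu.com
        # if not, inspect the configured resolvers
        cat /etc/resolv.conf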
