Search Results

Search found 29495 results on 1180 pages for 'cross site scripting'.


  • how to install 13.04 on a partitioned hard drive

    - by Denny
    First, I'm not a computer-literate person, not even a novice, so please use small words. I recently made the switch to Ubuntu; it came preloaded on my new laptop that I ordered from a big tech dot-com site. The version on it is 12.04 (I think), 64-bit. This system has a lot that I like, but it is quirky for me, to say the least. Apparently I have held broken packages and have no way of knowing how to find them. I discovered this when trying to download VLC (from the Software Center) so that I could watch some movies I had on an external hard drive. Unmet-dependencies errors and held-broken-package errors abound while trying to fix the problem. I've scoured this site and others and followed almost all the suggestions to a T, but still I am unable to fix anything. My computer is partitioned (but I don't even know how to get to the other side, so to speak). I would like to know: can I put the newer 13.04 OS on one side of the partition and then delete the older version on the other side? Or can I install 13.04 over the existing 12.04? What would I need to do this? An obstacle that I have is this: I am currently serving in Afghanistan, so going someplace to buy something or running down to a computer store for service support is out of the question. I very much appreciate your help, because right now this computer is nothing more than a word processor, which would be fine if all I wanted was a word processor. Thanks in advance.
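
    For reference, the usual first aid for held broken packages, and the supported in-place upgrade path, are run from a terminal. This is a sketch of the standard commands, not something specific to this machine:

        sudo apt-get update          # refresh the package lists
        sudo apt-get -f install      # attempt to repair unmet dependencies
        sudo dpkg --configure -a     # finish any half-configured packages
        sudo do-release-upgrade      # then upgrade in place to the next release

    Note that an LTS install such as 12.04 may need Prompt=normal set in /etc/update-manager/release-upgrades before it will offer a non-LTS release like 13.04; the other route is reinstalling 13.04 from a downloaded image over the existing partition.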


  • Google Mini Search Does Not Return Results

    - by James Lawruk
    I pointed the Google Mini at a small intranet site (8 simple HTML pages). It crawls all 8 pages just fine, but a simple search for a word clearly contained within the body of the pages returns 0 results. If I enter a search like "site:mysite.net", it returns all 8 pages. If I enter a search for a word contained in a URL, it will return that URL. In the search results, only the URL is returned in the list items; there is no page title or blurb text like you normally see in Google search results. It's as if it is only indexing the URL and skipping the page title, body, etc. I have an old software version, 3.4.14, but I wanted to try to get this thing working without an upgrade. It worked before for another domain, so why would it not work now? Any ideas?


  • Transfer page from internal to external

    - by Theo Gulland
    Afternoon all! Currently I have a website with a list of audio products (essentially a search engine for audio deals): http://www.soundplaza.co.uk. Once you go to the details page, you can press the 'view deal' button to go to the provider's site, e.g. http://www.soundplaza.co.uk/all-deals/113/bookshelf-speakers/acoustic-energy-1. This jump between two sites is a bit harsh, and I would like to show a transition page to simply ease visitors into the other site and not scare them off. Within this transition page I will have a simple loading GIF and some graphics showing that you're being transferred. QUESTION: What is the best way to send the details (link, product name, etc.) to this transfer page, then wait 5 seconds, then move on to the desired link? This can in NO WAY damage my SEO; if anything, rel="nofollow" would be great if possible. Currently I have seen that you can submit a form to the transition page, then use PHP's sleep() and then header() to transfer; however, I am not sure whether a header() redirect will transfer SEO value to the provider? Any opinions would be great! Thanks
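
    For what it's worth, one common shape for such an interstitial is sketched below in PHP. The file name, the deal parameter, and the whitelisted host are illustrative, not taken from the site above; the noindex/nofollow header keeps the interstitial itself out of the index, and the host whitelist stops it from being abused as an open redirect:

        <?php
        // transfer.php -- hypothetical interstitial page; names are placeholders
        $allowed = array('www.example-provider.com');  // assumed provider hosts
        $url  = isset($_GET['deal']) ? $_GET['deal'] : '';
        $host = parse_url($url, PHP_URL_HOST);
        if (!in_array($host, $allowed, true)) {
            header('Location: /');                     // reject anything unexpected
            exit;
        }
        header('X-Robots-Tag: noindex, nofollow');     // keep this page unindexed
        header('Refresh: 5; url=' . $url);             // hand off after 5 seconds
        ?>
        <html>
          <head><title>Taking you to the deal...</title></head>
          <body>
            <p>Transferring you to the provider in 5 seconds...</p>
            <img src="loading.gif" alt="Loading">
          </body>
        </html>

    A 301 is what passes link equity on; a refresh-style interstitial marked noindex/nofollow generally does not, which matches the intent here.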


  • Installing both lxml 3.1.2 and libxml2 on Ubuntu 12.04

    - by wgw
    I asked this on SO: http://stackoverflow.com/questions/19852911/lxml-3-1-2-and-lxml2-both-on-ubuntu/19856674#19856674. But it is perhaps more appropriate for Ask Ubuntu, so here it is again, reformulated. On the lxml site they suggest that it is possible to have both the libxml2 Python bindings and the newest version of lxml on Ubuntu: "Using lxml with python-libxml2: If you want to use lxml together with the official libxml2 Python bindings (maybe because one of your dependencies uses it), you must build lxml statically. Otherwise, the two packages will interfere in places where the libxml2 library requires global configuration, which can have any kind of effect from disappearing functionality to crashes in either of the two. To get a static build, either pass the --static-deps option to the setup.py script, or run pip with the STATIC_DEPS or STATICBUILD environment variable set to true, i.e. STATIC_DEPS=true pip install lxml. The STATICBUILD environment variable is handled equivalently to the STATIC_DEPS variable, but is used by some other extension packages, too." I am generally confused about how pip packages and Ubuntu packages get along, so I hesitate to run STATIC_DEPS=true pip install lxml. Will it damage/confuse my installed python-libxml2 package? The suggestion on SO was to install the new lxml in a virtualenv. That looks like the best way to go, but the lxml site is suggesting that a dual installation will work also. In general: what happens if I use pip (to get a newer install) for a package that is already installed by apt-get?
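
    For what it's worth, the virtualenv route suggested on SO keeps the pip-built lxml completely separate from the apt-installed python-libxml2, so the two cannot interfere. A sketch, with package names as they are on Ubuntu 12.04:

        sudo apt-get install python-virtualenv build-essential python-dev zlib1g-dev
        virtualenv ~/lxml-env                # isolated Python environment
        . ~/lxml-env/bin/activate            # use it for this shell session
        STATIC_DEPS=true pip install lxml    # static build, per the lxml docs

    Outside the virtualenv, apt-managed packages stay untouched; it is a system-wide pip install shadowing apt-managed files that causes the kind of confusion described above.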


  • Super slow website - show me what's been downloaded so far.

    - by Mick
    Every now and then a website becomes super-slow (but not broken) because too many people are looking at it at the same time. When I try to view such a site, say with Firefox, I can see that it is downloading all sorts of components of the site because of the progress information printed at the bottom of the window, and I'm sitting there thinking, "If only the browser would show me what it's got so far. I don't care if it's a jumbled mess, I just want to see what you've got." Does any browser offer such an option?


  • MySQL service stops under Windows 2008 R2 x64

    - by volody
    I have installed MySQL 5.5 Server under Windows 2008 R2 x64. I can see that the MySQL service stops even though it is configured to start automatically. What can I do to find out why this is happening? The MySQL database is used as the backend of an ASP.NET web site. Is it possible that the web site was not active for a while and the system stopped the MySQL service? Update: It was mysql-5.5.7-rc-winx64. It could be an issue with this version (release candidate). Now I am trying to install mysql-5.5.8-winx64, and I have an issue with configuring MySQL to work using named pipes: I unchecked the TCP/IP protocol option and the configuration wizard just hangs. Update: I have found a workaround. You have to configure MySQL to use TCP/IP first, then reconfigure it to use named pipes. It looks like this link also has some information about the possible problems: How should I diagnose ERROR 1045 during MySQL installation?
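
    As a first diagnostic step, the service's own error log usually records why mysqld stopped, and the named-pipe settings can be put in my.ini directly rather than through the wizard. A sketch, where the log path and pipe name are assumptions, not taken from this install:

        [mysqld]
        log-error="C:/ProgramData/MySQL/mysql-error.log"  # read this after a stop
        enable-named-pipe                                 # accept pipe connections
        socket=MySQL                                      # pipe name clients use

    The Windows event log (Event Viewer, Application log) is the other place where unexpected service stops are reported.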


  • IIS URL Rewrite: capturing the query string and escaping characters

    - by LiamB
    We are adding some redirects from an old site to a new one in IIS7 using the URL Rewrite 'plugin'. The old site's URLs are all based on the query string. We'd usually do explicit rewrites like the one below, but this won't work in the case of the query string:

        <rule name="Redirect-1" patternSyntax="Wildcard" stopProcessing="true">
          <match url="index.php?option=m_content&view=article&id=15&Itemid=16" />
          <action type="Redirect" url="http://newurl/some-page" />
        </rule>

    So, using the two URLs above, how can we do a 301 redirect?
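
    The wildcard <match url> pattern only ever sees the path portion of the request, so the query string has to be tested in a condition instead. A sketch of the same redirect done that way; note the &amp;-escaping inside the XML attribute, appendQueryString="false" so the old parameters are dropped, and redirectType="Permanent" for the 301:

        <rule name="Redirect-1" patternSyntax="Wildcard" stopProcessing="true">
          <match url="index.php" />
          <conditions>
            <add input="{QUERY_STRING}" pattern="option=m_content&amp;view=article&amp;id=15&amp;Itemid=16" />
          </conditions>
          <action type="Redirect" url="http://newurl/some-page" appendQueryString="false" redirectType="Permanent" />
        </rule>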


  • Branded Application Pages (layouts pages) in SharePoint 2010

    - by Sahil Malik
    Application pages are now branded by default in SharePoint 2010. WOOHOO!!! The DynamicMasterPageFile attribute in SharePoint 2010 master pages allows application pages to start using the site's master page instead of the application master page. If you want backwards compatibility with SharePoint 2007, i.e. you want unbranded application pages, here is what you can do: a) change the MasterPageReferenceEnabled property to false on your SPWebApplication object, or b) go to Central Administration\Application Management\Manage Web Application, select your web app, go to the ribbon, look for General Settings\General Settings, and detach application pages from the site's master page. I don't see why you'd ever wanna do that, but hey, if you want to, go for it. Safeguarded application pages: now for the fine print. There is something called "safeguarded application pages" in SP2010. These are pages that, in case your custom master page screws up, will automatically revert to a master page in the _layouts folder that is guaranteed to work. Now that's nice! It means that if you screw up, you always have a way to fix things. Here is a list of such safeguarded application pages: AccessDenied.aspx, MngSiteAdmin.aspx, People.aspx, RecycleBin.aspx, ReGhost.aspx, ReqAcc.aspx, Settings.aspx, UserDisp.aspx, ViewLsts.aspx. Have fun!
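
    For option (a), a short PowerShell sketch (the web application URL is a placeholder):

        $wa = Get-SPWebApplication "http://yourwebapp"   # your web application URL
        $wa.MasterPageReferenceEnabled = $false          # back to 2007-style pages
        $wa.Update()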


  • Why does everybody hate SharePoint?

    - by Ryan Michela
    Reading this topic about the most over-hyped technologies, I noticed that SharePoint is almost universally reviled. My experience with SharePoint (especially the most recent versions) is that it accomplishes its core competencies smartly, namely:

    - Centralized document repository - get all those Office documents out of email (with versioning)
    - User-editable content creation for internal information dissemination - look, an HR site with current phone numbers and the vacation policy
    - Project collaboration - a couple of clicks creates a site with a project's documents, task list, simple schedule, threaded discussion, and possibly a list of all project-related emails
    - Very basic business automation - when you fill out the vacation form, an email is sent to HR

    My experience is that SharePoint only gets really ugly when an organization tries to push it in a direction it isn't designed for. SharePoint is not a CRM, an ERP, a bug database, or an external website. It is flexible enough to serve in a pinch, but it is no replacement for a dedicated tool. (Microsoft is just as guilty of pushing SharePoint into domains where it doesn't belong.) If you use SharePoint for what it's designed for, it really does work. Thoughts?


  • Windows XP runs New Hardware Wizard for usb keyboard and mouse, can't find drivers

    - by Randy Orrison
    I have a PC that up until a couple of days ago was working fine. I moved it from one site to another, and now when I plug in the USB mouse or keyboard (the same ones that were working previously), XP brings up the New Hardware Wizard. Going through it, the correct keyboard and mouse are identified, but XP can't find the drivers. I've tried manually searching for the driver (using the Have Disk option): the first file it looks for is in the c:\i386 directory, but that installs a generic HID mouse device, and the system then runs the hardware wizard again for a new "unknown" device. The system was SP2; I have installed SP3 in hopes that would help, and I've also downloaded and installed the mouse drivers from Dell's site (there are no specific drivers for the keyboard), with no change. Before I completely reinstall XP, is there anything else I should try?


  • GitHub: Are there external tools for managing issues list vs. project backlog

    - by DXM
    Recently I posted one of my projects(1) on GitHub, and as I was exploring the capabilities of the site, I noticed they have a rather decent issue-tracking section. I want to use that section so that a) other people can report bugs if they'd like, and b) other people can see which bugs I'm aware of. However, as others have noted, the issues list cannot be prioritized in order to create a project backlog. For now my backlog has been a text file, but I'd like to have it integrated so the same information isn't maintained in different places. Having a fully ordered list, which is something we also practice at work, has been very useful, as I can open one file, start with line 1, and fire off 2 or 3 items in one sitting without having to go back to a full issues/stories bucket. GitHub doesn't offer this. What GitHub does offer is a very nice and clean API, so issues can easily be exported into anything else. I've searched to see if there are other websites (like Trello) that integrate with GitHub issues, but did not find anything. Does anyone know of such a product, service, or offline tool? Those of you who use GitHub, what is your experience in managing a backlog? I kinda hate the idea of manually managing two disconnected lists, as some people seem to be doing with wiki project pages. (1) Are shameless plugs allowed on this site? I searched but didn't find a definite answer; if it's bad practice, STOP and don't read further. As a developer I got sick and tired of navigating to the same set of folders 30 times a day, so I wrote a little auto-collapsible utility that sticks to the desktop and allows easy access to the folders you constantly use.
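
    Absent a ready-made tool, the export half really is a few lines against GitHub's v3 API. A sketch in Python 2 (the repository path is a placeholder) that dumps open issues into the kind of flat backlog file described above; ordering them is then a text-editor job:

        import json
        import urllib2

        url = 'https://api.github.com/repos/youruser/yourrepo/issues?state=open'
        issues = json.load(urllib2.urlopen(url))   # public repos need no auth
        with open('backlog.txt', 'w') as f:
            for issue in issues:
                line = u'#%d %s\n' % (issue['number'], issue['title'])
                f.write(line.encode('utf-8'))      # titles may contain unicode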


  • Internet Explorer will not open Office files

    - by geekrutherford
    An issue was brought to my attention today at work where certain users were unable to open Office files (specifically Excel) from Internet Explorer 7. The user would click a button which simply generated an inline JS call to open a pop-up pointing to the .xlsx file on the server. IE would open the pop-up, and shortly thereafter the pop-up would disappear without the file ever opening. I tweaked the security settings in the user's browser: added the site to the list of trusted sites and lowered the security settings to Medium-Low. This allowed IE to at least prompt with the Save or Open message, but clicking either resulted in "Internet Explorer could not open the site...". Perturbed, I retreated to Geek Central (aka my desk) and modified my application so that instead of simply pointing the browser at the file, it used Response.TransmitFile() to stream the file to the browser instead. I thought to myself, "This is perfect, it has to work!!!" Alas, no luck. Bewildered and confused, I returned to the lone user's computer and started looking around the various IE options. I stumbled upon "Clear SSL State" under the "Content" tab, which appears to clear out all SSL certificates on the client, forcing it to refresh. Doing this in concert with resetting the security levels for all zones back to their defaults seemed to do the trick.
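
    For anyone hitting the same wall: IE cannot hand an Office document to Excel over SSL if the response forbids caching, so the server-side half of the fix usually lives in the headers around TransmitFile. A hedged C# sketch; the path, file name, and content type are illustrative:

        // Serving the workbook: over SSL, IE must be allowed to cache the
        // file locally or the hand-off to Excel fails with
        // "Internet Explorer could not open the site".
        Response.Clear();
        Response.ContentType =
            "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
        Response.AppendHeader("Content-Disposition",
            "attachment; filename=report.xlsx");
        Response.Cache.SetCacheability(HttpCacheability.Private); // not NoCache
        Response.TransmitFile(Server.MapPath("~/files/report.xlsx"));
        Response.End();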


  • Error while starting a web application

    - by Lalit
    When you right-click a web site in the Microsoft Internet Information Services (IIS) Microsoft Management Console (MMC) snap-in and then click Start, the web site does not start, and you receive the following error message: "The process cannot access the file because it is being used by another process." What do I have to do? To resolve this issue I found this solution at http://support.microsoft.com/kb/890015: "You must use the Netstat.exe utility at the command line to see if another process is using port 80 or port 443." But how do I check whether these ports are in use? In terms of status, what should I look for? The second suggested solution is the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HTTP\Parameters\ListenOnlyList, but this key is not found.
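
    Concretely, the check the KB article describes looks like this at a command prompt. A line in state LISTENING on :80 or :443 means the port is taken; the last column is the owning process ID:

        rem list listeners on ports 80/443 along with their owning PIDs
        netstat -ano | findstr "LISTENING" | findstr ":80 :443"
        rem map a PID from the last column to a process name (1234 is a placeholder)
        tasklist /FI "PID eq 1234"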


  • Double vs Single Quotes in Chrome

    - by Rodrigo
    So when you want to embed Google Docs on a site, you are given this chunk of code:

        <iframe width='500' height='300' frameborder='0' src='https://docs.google.com/spreadsheet/pub?hl=en_US&hl=en_US&key=0AiV6Vq32hBZIdHZRN3EwWERLZHVUT25ST01LTGxubWc&output=html&widget=true'></iframe>

    This works fine on my site. If you edit the page, we run the new content through some filters to escape things and make sure it is valid HTML. After that process, the snippet above gets converted to this:

        <iframe frameborder="0" height="300" src="https://docs.google.com/spreadsheet/pub?hl=en_US&amp;hl=en_US&amp;key=0AiV6Vq32hBZIdHZRN3EwWERLZHVUT25ST01LTGxubWc&amp;output=html&amp;widget=true" width="500"></iframe>

    This works in every browser except Chrome. Chrome thinks I am running JS in the src. I narrowed it down to the combination of double quotes and escaped '&' symbols: if I revert either back to its original state, the iframe works. I work in Ruby, where ' and " have different behaviors. Is Chrome doing the same thing? Is there a way to turn that off?


  • Identifying the IP addresses of folders on the Bluehost server

    - by Aruna
    Hi, we have hosted our site on a Bluehost server, and we have two websites running on it. In our Bluehost file manager we have two separate folders, abc and xyz, which point to the sites abc.com and xyz.com. I don't know how to find the IP addresses of those folders. Note: we faced some problems with abc.com, so we have redirected abc.com to xyz.com. I am trying to find the IP addresses of abc.com and xyz.com. How do I find them on the Bluehost server?


  • Nagios3 check_httpname gives 503 response; from command line I get a 200 response

    - by Michael T. Smith
    We're using Nagios to monitor our site (and a bunch of other stuff). For some odd reason, when I test the command

        /usr/lib/nagios/plugins/check_http -H 'domainname.com'

    the response that comes back is HTTP/1.1 200 OK, but when I set up the service to do it:

        # Check that domain is running
        define service {
            hostgroup_name          hostgroup
            service_description     host site
            check_command           check_httpname!domainname.com
            use                     generic-service
            notification_interval   1    ; set > 0 if you want to be renotified
        }

    the response that comes back is HTTP/1.1 503 Service Unavailable. Does anyone know why this would be happening?
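
    One thing worth checking is what check_httpname actually expands to: if the command definition passes the host's address with -I while the site is a name-based virtual host, the backend answers 503 even though the plain -H test from the shell succeeds. A sketch of a definition that sends both; this is a guess at the layout, with $ARG1$ being the domain:

        define command {
            command_name    check_httpname
            command_line    /usr/lib/nagios/plugins/check_http -I $HOSTADDRESS$ -H $ARG1$
        }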


  • SBS 2011 on different subnet than domain computers

    - by Ravi
    The setup is as follows: SBS 2011 in a datacentre on subnet A; domain PCs at another location on subnet B. There is a site-to-site VPN. The domain PCs have joined the domain and have the SBS as their primary DNS server. The domain PCs can ping the DC, but the problem is that the DC cannot ping any host on the remote subnet (subnet B).

        SBS -- Switch -- Router A ------------------- Router B -- Switch -- Domain PCs

    What is strange is that Router A can ping any host on subnet B. Another host on subnet A can also ping any host on subnet B. It's only the DC that cannot ping anything on that specific remote subnet B. I did a tracert from the SBS to Router B; the packet reaches Router A from the SBS, but then it fails. Am I missing some specific settings that need to be configured when SBS is on a different subnet than its member PCs?


  • Squeezing as much SEO out of a URL as possible

    - by John Isaacks
    I am working on an e-commerce site. I told our SEO consultant that I plan to make the URL scheme /products/<id>/<name>. This is similar to Stack Overflow's URLs, which are /questions/<id>/<title>. He asked me if I could change the URL scheme to /p/<id>/<name> instead. I know why he wants this change: the word "products" isn't needed to find the correct product and doesn't offer any SEO, so shortening it to just p would make the relevant keywords in the <name> weigh more. His main priority is maximizing SEO, but the part I don't think he is considering is how this affects the semantics of the site. Having the word "products" looks like it has meaning and a reason for being there; just having a p looks chaotic and ugly to me. I also don't think it makes that much of a difference, does it? Stack Overflow doesn't use /q/<id>/<title> and they do just fine, though I realize there are many factors at play here, not just the URL. So I want some outside opinions on which is the better way, and why.


  • Wordnik Accelerator

    - by prabhpreet
    Wow, creating IE Accelerators is superbly easy. If you want to learn how to create one, go here (some MSDN blog) and see the MSDN documentation (clearly written). I was fed up with dictionary.com bringing up all those popups and with the stupid definitions from Google's dictionary, so I decided to scratch my own itch. I randomly stumbled on a site called Wordnik, which provides examples, definitions, and lots more for words, and it's popup-free (as far as I know). So I decided to write an accelerator. Here is the source code (yes, this is it):

        <?xml version="1.0" encoding="utf-8"?>
        <os:openServiceDescription xmlns:os="http://www.microsoft.com/schemas/openservicedescription/1.0">
          <os:homepageUrl>http://www.wordnik.com</os:homepageUrl>
          <os:display>
            <os:name>View on Wordnik</os:name>
            <os:description>Looking up words on an awesome word site called Wordnik</os:description>
            <os:icon>http://www.wordnik.com/favicon.ico</os:icon>
          </os:display>
          <os:activity category="Define">
            <os:activityAction context="selection">
              <os:execute method="get" action="http://www.wordnik.com/words/{selection}"></os:execute>
            </os:activityAction>
          </os:activity>
        </os:openServiceDescription>

    That's it. To get it, go here. Enjoy!


  • IE and Google Chrome timeout on an IIS6 hosted SSL page that Firefox handles well

    - by Thomas
    Ok, here's the scenario: up until a few weeks ago, none of us noticed anything wrong with the corporate website. People were using it without complaint. Then a client complained that a specific page on the site was timing out for him, and only when he performed a POST on a form filled with data. I checked it out, and it timed out for me, too, but only in Google Chrome and IE, not in Firefox. Additionally, the same page, on the same server, but served from a different domain name (one not under the protection of SSL, either) does not time out in any browser. To clarify: https://www.mysite.com/changes.php times out on POST, but the same page over http works fine. That distinction (SSL vs. non-SSL) seems to be important, as nothing else has changed. Our certificate is valid, and Firefox detects no errors thrown by the page. I've looked at the request and response headers from the page, and they all follow the correct formats. Then, after wandering through the site, I noticed a few other things: both IE and Chrome frequently time out on any page that is PHP-based, but they never time out on static images or HTML files. I've looked at the site from a variety of different servers, my home and work workstations, and my netbook. Because of that, I've discounted a viral infection, as I highly doubt a virus would hit every machine I have access to in exactly the same manner. My setup: Server: Win2k3, IIS6, PHP 5.2.9-1. Clients: IE7, IE8, Chrome (regular and dev channel) - frequent timeouts on PHP pages; Firefox 2 and Firefox 3 - no timeouts. Firebug shows no errors or even lengthy periods serving the pages. I've spent 2 days searching for any tech knowledge I can find, and my search parameters are all too general: everyone has problems loading SSL pages in IE and Chrome, for a wide variety of reasons. The infrequent nature of the timeouts and the fact that no errors are reported anywhere are starting to drive me insane. Does anyone have any insight on a problem like this?


  • Difference between two kinds of Bing URL Referers

    - by joshuahedlund
    Most of the referral URLs that I get from Bing have the following syntax: http://www.bing.com/search?q=keywords+keywords&[some other variables]. However, I just noticed that maybe 10-20% of them come in like this: http://www.bing.com/url?source=search&[some other variables]&url=http%3A%2F%2Fwww.example.com/user-landing-page-on-my-site&yrktarget=_top&q=keywords+keywords&[some other variables]. The first syntax gives me the keywords the user typed in, but the second gives me both the keywords the user typed and their landing page on my site. I was originally unaware of this second kind altogether, because I have a customized referral report that filters out URLs containing my domain. But now that I've noticed them, I want to know why they occur, to see if I can get more of them, because the second syntax contains more valuable information. If I go to one of the first URLs, it gives me a typical Bing query page; the second URLs seem to just redirect me to the Bing home page. I'm not sure if it has to do with the kind of search being performed (I also get a few http://www.bing.com/shopping/search?q= referrers) or some other metric. Does anyone know what causes some referral URLs from Bing to have the /search?q syntax and others the /url?source syntax? P.S. I have verified that I am getting both kinds of URLs from non-advertising clicks. P.P.S. I am not talking about data in Google Analytics or similar software, but the raw $_SERVER['HTTP_REFERER'] value coming from the client's original request.


  • Retexturing a model via API on the web

    - by AndyMcKenna
    I'm looking at creating a site where a user could see our product, configure its options or look, and see an image that represents that. The way I'm doing it now: if you have Piece A selected and then choose Texture X, I have an image on the filesystem that is A with X applied to it, and I just swap out my default image with that specific one. One product has 8 areas, with 10-70 pieces per area, and about 200 textures per piece. Programming the site was pretty simple, but we are getting bogged down in rendering all these piece/texture combinations and entering them into the system. What I would love to do is build a model, apply the textures via an API, and render the result to the browser. I would even settle for exporting a flat image and pulling that into the browser. Is that possible with something like SolidWorks, 3ds Max, or something else? If that rendering is too time-intensive, it would still help to batch-create all my images and use them the way I do currently.
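
    As one concrete "something else": Blender can be driven headless from Python, which turns the piece-times-texture grid into a batch job. A rough sketch against the 2.6x-era API, where the material, texture paths, and output folder are all placeholders:

        # run as: blender -b product.blend -P batch_render.py
        import bpy

        textures = ['textures/oak.png', 'textures/walnut.png']  # placeholder list
        for path in textures:
            img = bpy.data.images.load(path)
            mat = bpy.data.materials['PieceMaterial']           # placeholder name
            mat.texture_slots[0].texture.image = img            # swap the texture
            bpy.context.scene.render.filepath = '//renders/' + img.name
            bpy.ops.render.render(write_still=True)             # write the frame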


  • Nginx Rate Limiting by Referrer?

    - by SteveEdson
    I've successfully set up rate limiting on IP addresses like so:

        limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;

    But I was wondering if it's possible to do the same on referrers? For example, if a site gets placed in an iframe on a third-party site, which generates too much traffic to handle. I can't find any nginx variables for the referrer anywhere. Is this possible? Or can the solution be achieved in a different way? Thanks.
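
    nginx does expose the header as $http_referer, and keying a zone on it via a map is one way to do this; requests whose key maps to an empty string are not rate-limited, so only the matched embedder gets throttled. A sketch for the http block, with the embedding site's domain as a placeholder:

        map $http_referer $referer_key {
            default                            "";             # empty key: no limit
            ~^https?://heavy-embedder\.com     $http_referer;  # throttle this one
        }
        limit_req_zone $referer_key zone=per_referer:10m rate=1r/s;

        server {
            location / {
                limit_req zone=per_referer burst=5;
            }
        }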


  • What's wrong with this turn-to-face algorithm?

    - by Chan
    I'm implementing a torpedo object that chases a rotating planet. Specifically, it turns toward the planet each update. Initially my implementation was:

        void move() {
            vector3<float> to_target = target - get_position();
            to_target.normalize();
            position += (to_target * speed);
        }

    which works perfectly for a torpedo that is a solid sphere. Now my torpedo is actually a model, which has a forward vector, so using this method looks odd because the torpedo doesn't actually turn toward the target but jumps toward it. So I revised it a bit to get:

        double get_rotation_angle(vector3<float> u, vector3<float> v) const {
            u.normalize();
            v.normalize();
            double cosine_theta = u.dot(v);
            // domain of arccosine is [-1, 1]
            if (cosine_theta > 1) {
                cosine_theta = 1;
            }
            if (cosine_theta < -1) {
                cosine_theta = -1;
            }
            return math3d::to_degree(acos(cosine_theta));
        }

        vector3<float> get_rotation_axis(vector3<float> u, vector3<float> v) const {
            u.normalize();
            v.normalize();
            // fix linear case
            if (u == v || u == -v) {
                v[0] += 0.05;
                v[1] += 0.0;
                v[2] += 0.05;
                v.normalize();
            }
            vector3<float> axis = u.cross(v);
            return axis.normal();
        }

        void turn_to_face() {
            vector3<float> to_target = (target - position);
            vector3<float> axis = get_rotation_axis(get_forward(), to_target);
            double angle = get_rotation_angle(get_forward(), to_target);
            double distance = math3d::distance(position, target);
            gl_matrix_mode(GL_MODELVIEW);
            gl_push_matrix();
            {
                gl_load_identity();
                gl_translate_f(position.get_x(), position.get_y(), position.get_z());
                gl_rotate_f(angle, axis.get_x(), axis.get_y(), axis.get_z());
                gl_get_float_v(GL_MODELVIEW_MATRIX, OM);
            }
            gl_pop_matrix();
            move();
        }

        void move() {
            vector3<float> to_target = target - get_position();
            to_target.normalize();
            position += (get_forward() * speed);
        }

    The logic is simple: I find the rotation axis with a cross product and the angle to rotate with a dot product, then turn toward the target position each update. Unfortunately, it looks extremely odd: the rotation happens so fast that the torpedo keeps turning back and forth. The forward vector for the torpedo comes from the ModelView matrix, the third column A:

        MODELVIEW MATRIX
        --------------------------------------------------
        R    U    A    T
        --------------------------------------------------
        1    0    0    0
        0    1    0    0
        0    0    1    0
        0    0    0    1
        --------------------------------------------------

    Any suggestion or idea would be greatly appreciated.
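
    The back-and-forth is what rotating through the full angle every frame looks like; the usual cure is to cap the per-update turn so the nose sweeps gradually onto the target, and to accumulate the orientation instead of rebuilding it from identity. A sketch reusing the helpers above, where max_turn_degrees is an added tuning constant, gl_load_matrix_f is assumed to be a wrapper over glLoadMatrixf, and OM here holds rotation only (translation would be applied separately when rendering):

        void turn_to_face() {
            vector3<float> to_target = target - position;
            vector3<float> axis = get_rotation_axis(get_forward(), to_target);
            double angle = get_rotation_angle(get_forward(), to_target);

            const double max_turn_degrees = 2.0;  // assumed tuning constant
            if (angle > max_turn_degrees) {
                angle = max_turn_degrees;         // cap the turn for this frame
            }

            gl_matrix_mode(GL_MODELVIEW);
            gl_push_matrix();
            {
                gl_load_matrix_f(OM);  // start from the current orientation
                gl_rotate_f(angle, axis.get_x(), axis.get_y(), axis.get_z());
                gl_get_float_v(GL_MODELVIEW_MATRIX, OM);
            }
            gl_pop_matrix();
            move();
        }

    The clamped angle shrinks to zero as the nose lines up, so the turn settles instead of oscillating.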

