Search Results

Search found 29938 results on 1198 pages for 'version hunter'.


  • Is it legal to have different SOAP namespaces/versions between the request and response?

    - by Lord Torgamus
    THIRD EDIT: I now believe that this problem is due to a SOAP version mismatch (1.1 request, 1.2 response) masquerading as a namespace problem. Is it illegal to mix versions, or just bad style? Am I completely out of luck if I can't change my SOAP version or the service's?
    SECOND EDIT: Clarified error message, and tried to reduce "tl;dr"-ness.
    EDIT: [Link deleted, not related]
    Using soapUI, I'm sending a request that starts with:

        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" ...

    and getting a response that starts with:

        <soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope" ...

    I know the service is getting the info, because processes down the line are working. However, my soapUI teststep fails. It has two active assertions: "SOAP Response" and "Not SOAP Fault." The failure marker is next to "SOAP Response," with the following message:

        line -1: Element Envelope@http://www.w3.org/2003/05/soap-envelope is not a valid Envelope@http://schemas.xmlsoap.org/soap/envelope/ document or a valid substitution.

    I have tried mixing and matching the namespace prefixes and schema URLs. Changing prefixes seems to have no effect; changing URLs causes a VersionMismatch error. I have also tried to use a substitution group, but that doesn't seem to be legal. (A short sketch of telling the two versions apart follows this entry.)

    Read the article
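
    The two namespace URIs quoted above are themselves the version markers: http://schemas.xmlsoap.org/soap/envelope/ is the SOAP 1.1 envelope namespace, while http://www.w3.org/2003/05/soap-envelope is SOAP 1.2. As a minimal illustration (not part of the original post; Python is used here purely for brevity), one can read the version straight off a raw envelope:

        import xml.etree.ElementTree as ET

        # Envelope namespace URI -> SOAP version. These two URIs are exactly the
        # ones appearing in the request and response quoted in the entry above.
        SOAP_VERSIONS = {
            "http://schemas.xmlsoap.org/soap/envelope/": "1.1",
            "http://www.w3.org/2003/05/soap-envelope": "1.2",
        }

        def soap_version(envelope_xml):
            # ElementTree reports the root tag as "{namespace-uri}Envelope",
            # so the URI alone identifies which SOAP version is in play.
            uri = ET.fromstring(envelope_xml).tag.split("}")[0].lstrip("{")
            return SOAP_VERSIONS.get(uri, "unknown")

        request = ('<soapenv:Envelope xmlns:soapenv='
                   '"http://schemas.xmlsoap.org/soap/envelope/"></soapenv:Envelope>')
        print(soap_version(request))   # prints: 1.1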

  • Reasonable expectation to support new Operating Systems?

    - by Neil N
    My company has a desktop app originally developed for Windows XP. The original programmer has since been fired (fired with extreme prejudice, I might add). I have fixed the app various times but overall try to avoid it; it is a mess, and the only real way to fix it is to completely rewrite it, which could take a year.

    We have been trying to "forget" about this app and instead steer clients towards our web version, which is more up to date, easier to maintain, easier to extend, and WAY easier to support. Most clients agree: the web version is just better all around. However, we have one client that insists on using the desktop app.

    The app required a little duct tape to get working on Vista, but now completely breaks on Windows 7. I'm not even sure WHAT all the fixes are to get it working on Win7 (the current time estimate stands at "miracle"), but after both installing the RELEASE build and running the DEBUG build from Visual Studio, the app has errors on nearly every user action, and from what I can see from a high-level test run, none of them are related.

    Since Windows 7 did not exist when this app was developed, is my company really expected to make all the required changes to make it function as "smoothly" as it did on XP?

    Read the article

  • How to suppress/control logging of Wagon-FTP Maven extension?

    - by Vincenzo
    I'm deploying a Maven site by FTP, using Wagon-FTP. It works fine, but the output is full of FTP connection/authentication details, which effectively exposes logins and passwords to everybody (especially if the project is open source and its CI protocols are publicly accessible):

        [...]
        [INFO]
        [INFO] --- maven-site-plugin:3.0-beta-3:deploy (default-deploy) @ rempl ---
        Reply received: 220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
        220-You are user number 1 of 50 allowed.
        220-Local time is now 09:08. Server port: 21.
        220 You will be disconnected after 15 minutes of inactivity.
        Command sent: USER ****
        Reply received: 331 User **** OK. Password required
        Command sent: PASS ********
        Reply received: 230-User **** has group access to: ***
        230 OK. Current restricted directory is /
        [...]

    Is it possible to suppress this logging? Or configure it... This is the section of my pom.xml where Wagon-FTP is used:

        [...]
        <build>
          <extensions>
            <extension>
              <groupId>org.apache.maven.wagon</groupId>
              <artifactId>wagon-ftp</artifactId>
              <version>1.0-beta-7</version>
            </extension>
          </extensions>
          [...]
        </build>
        [...]

    Read the article

  • Deterministic and non uniform long string generation from seed

    - by Limonup
    I had this weird idea for an encryption scheme that I wanted to try out. It may be bad, and it may have been done before, but I'm just doing it for fun.

    The short version of the question is: is it possible to generate a long, deterministic and non-uniformly distributed string/sequence of numbers from a small seed?

    Long(er) version: I was thinking to encrypt a text by changing its encoding. The new encoding would be generated via the Huffman algorithm. To work well, the Huffman algorithm needs a fairly long text with a non-uniform distribution. Then characters can have different bit-lengths, which would be the primary strength of this encryption. The problem is that it's impractical to enter in/remember a long text each time you want to decrypt the text. So I was wondering if it is possible to generate a text from a password seed? It doesn't matter what the text is, as long as it has a non-uniform distribution of characters and the exact same sequence can be recreated each time you give it the same seed. Preferably, are there any functions/extensions in Python that can do this? (A Python sketch follows this entry.)

    EDIT: To expand on the "strength" of varying bit length: if I have the string "test", the ASCII values are 116, 101, 115, 116, which gives the bit values

        1110100 1100101 1110011 1110100

    Then, say my Huffman algorithm generates an encoding like

        t = 101
        e = 1100111
        s = 10001

    The final string is 101 1100111 10001 101; if we encode this back to ASCII, we get 1011100 1111000 1101000, which is 3 entirely different characters. Obviously it's impossible to perform any kind of frequency analysis or something like that on this.

    Read the article
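
    As a minimal sketch of the idea asked about above (an assumption-laden demo, not part of the original post): Python's random module can do this, because seeding the generator makes its output reproducible and explicit weights make the character distribution non-uniform.

        import random
        import string

        def pseudo_text(seed, length=100000):
            """Deterministically generate a long, non-uniformly distributed string.

            The same seed always yields the same text, so it can be rebuilt from a
            password, and the skewed weights give a Huffman coder something to use.
            """
            rng = random.Random(seed)   # private generator seeded by the password
            alphabet = string.ascii_lowercase + " "
            # Arbitrary, deliberately non-uniform weights (chosen just for the demo).
            weights = [len(alphabet) - i for i in range(len(alphabet))]
            return "".join(rng.choices(alphabet, weights=weights, k=length))

        assert pseudo_text("my password") == pseudo_text("my password")   # reproducible
        print(pseudo_text("my password")[:40])

    The same idea works with any seedable pseudo-random source; the only requirements in the question are determinism and a skewed distribution, and both come directly from the explicit seed and the weights.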

  • Is there a significant mechanical difference between these faux simulations of default parameters?

    - by ccomet
    C# 4.0 introduced a very fancy and useful thing by allowing default parameters in methods. But C# 3.0 doesn't. So if I want to simulate "default parameters", I have to create two of that method: one with those arguments and one without those arguments. There are two ways I could do this.

    Version A - Call the other method:

        public string CutBetween(string str, string left, string right, bool inclusive)
        {
            return str.CutAfter(left, inclusive).CutBefore(right, inclusive);
        }

        public string CutBetween(string str, string left, string right)
        {
            return CutBetween(str, left, right, false);
        }

    Version B - Copy the method body:

        public string CutBetween(string str, string left, string right, bool inclusive)
        {
            return str.CutAfter(left, inclusive).CutBefore(right, inclusive);
        }

        public string CutBetween(string str, string left, string right)
        {
            return str.CutAfter(left, false).CutBefore(right, false);
        }

    Is there any real difference between these? This isn't a question about optimization or resource usage or anything (though part of it is my general goal of remaining consistent). I don't even think there is any significant effect in picking one method or the other, but I find it wiser to ask about these things than perchance faultily assume.

    Read the article

  • How do I include the proper XML / HTML definitions in a file generated by XSL?

    - by Colen
    As I understand it, you need to include the following code at the top of your HTML files to make sure they're parsed properly:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
        ...

    I'm generating an HTML file by transforming an XML file using an XSL file. This is going to be done using the MSXML tool, which produces a standard HTML file as output. If I just do this:

        <xsl:template match="/">
          <html>
          ...

    everything is fine. But if I do this:

        <xsl:template match="/">
          <?xml version="1.0" encoding="UTF-8"?>
          <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
              "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
          <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
          ...

    I get the error "XML Parsing Error: XML or text declaration not at start of entity" in Firefox, or "Cannot have a DOCTYPE declaration outside of a prolog." in IE. Presumably this is because the parser is finding two declarations. How do I make the browser a) understand that I am using proper strict HTML, and b) make sure those declarations are put into the HTML output file that MSXML generates?

    Read the article

  • NSFetchedResultsController is driving me crazy

    - by user267980
    Hi everyone. I've been building an app for a month using NSFetchedResultsController, and I was testing it on the 3.1.2 SDK. The problem is that I've been using NSFetchedResultsController everywhere in my app and it was working on the 3.1.2 version of the SDK; now my client says that I should make it compatible with the 3.0 version, and the deadline is almost here. But it crashes every time I change an object handled by the controller, with very weird errors. The problem occurs when removing the last object in a section, and when a change makes an object move to another section. I've been using sample code from "More iPhone 3 Development: Tackling iPhone SDK 3" by Dave Mark and Jeff LaMarche. I've also included some changes from link text.

    Here is a sample of the console output when the application crashes:

        *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Invalid update: invalid number of sections. The number of sections contained in the table view after the update (1) must be equal to the number of sections contained in the table view before the update (2), plus or minus the number of sections inserted or deleted (2 inserted, 0 deleted).'
        2010-03-14 16:23:29.758 Instaproofs[5879:207] Stack: (
            807902715, 7364425, 807986683, 811271572, 815059090, 815007323,
            211023, 4363331, 810589786, 807635429, 810579728, 3620573, 3620227,
            3614682, 3609719, 27337, 810595174, 807686849, 807683624, 839142449,
            839142646, 814752238
        )

    If I had known that NSFetchedResultsController was so buggy, I would never have used it. So basically I need my NSFetchedResultsControllerDelegate to work fine on the 3.0 and above SDKs. It would be a life saver if someone could help me figure out what I'm doing wrong.

    Read the article

  • Difference in F# and Clojure when calling redefined functions

    - by Michiel Borkent
    In F#:

        > let f x = x + 2;;
        val f : int -> int
        > let g x = f x;;
        val g : int -> int
        > g 10;;
        val it : int = 12
        > let f x = x + 3;;
        val f : int -> int
        > g 10;;
        val it : int = 12

    In Clojure:

        1:1 user=> (defn f [x] (+ x 2))
        #'user/f
        1:2 user=> (defn g [x] (f x))
        #'user/g
        1:3 user=> (g 10)
        12
        1:4 user=> (defn f [x] (+ x 3))
        #'user/f
        1:5 user=> (g 10)
        13

    Note that in Clojure the most recent version of f gets called in the last line. In F#, however, the old version of f is still called. Why is this, and how does it work? (A Python analogy follows this entry.)

    Read the article
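
    The same contrast can be reproduced in Python (an analogy for illustration, not an explanation of either compiler): a call through the name f is resolved each time g runs, while a reference captured when a binding is made keeps pointing at the old function object.

        def f(x):
            return x + 2

        def g(x):
            return f(x)       # 'f' is looked up by name every time g is called

        frozen = f            # capture the current function object, like an F# let binding

        def f(x):             # rebind the name 'f' to a new function
            return x + 3

        print(g(10))          # 13 -- g sees the rebound f (the Clojure-like behaviour)
        print(frozen(10))     # 12 -- the captured object still adds 2 (the F#-like behaviour)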

  • sending mail in rails from tutorial - outdated?

    - by Stacia
    I'm trying to use this tutorial (http://www.tutorialspoint.com/ruby-on-rails-2.1/rails-send-emails.htm) to send mail from Rails, but nothing seems to happen. I have ActionMailer::Base.delivery_method = :sendmail in my environment.rb. Is it possible the tutorial is out of date for the more recent version of Rails? If so, can someone tell me where it goes wrong, or give a link to another tutorial?

    I see this in the log. It's a little suspicious that the "to" has the subject and other fields appended to it, but I don't know where it went wrong.

        Processing EmailerController#sendmail (for 127.0.0.1 at 2010-03-10 19:02:21) [POST]
          Parameters: {"commit"=>"Send", "authenticity_token"=>"251d315cc4dfc6c58c2c7f6f633d52b101d10c14", "email"=>{"recipient"=>"[email protected]", "subject"=>"booooooo", "message"=>"aabdsaf"}}
          SQL (0.1ms)  SET NAMES 'utf8'
          SQL (0.1ms)  SET SQL_AUTO_IS_NULL=0
        Sent mail to subject, booooooo, recipient, [email protected], message, aabdsaf

        Date: Wed, 10 Mar 2010 19:02:21 -0800
        From: [email protected]
        To: [email protected]
        Mime-Version: 1.0
        Content-Type: text/plain; charset=utf-8

        Hi! You are having one email message from [email protected] with a title This is title and following is the message: Thanks

    edit: I manually changed "recipient" to my mail address and I still don't get mail, even though the problem in the log with all of the fields stuck together is fixed.

    Read the article

  • Did the Unity Team fix that "generics handling" bug back in 2008?

    - by rasx
    At my level of experience with Unity it might be faster to ask whether the "generics handling" bug acknowledged by ctavares back in 2008 was fixed in a public release. Here was the problem (which might be my problem today):

    Hi, I get an exception when using ....

        container.RegisterType(typeof(IDictionary<,>), typeof(Dictionary<,>));

    The exception is...

        "Resolution of the dependency failed, type = \"IDictionary`2\", name = \"\".
        Exception message is: The current build operation (build key Build
        Key[System.Collections.Generic.Dictionary`2[System.String,System.String], null]) failed:
        The current build operation (build key Build
        Key[System.Collections.Generic.Dictionary`2[System.String,System.String], null]) failed:
        The type Dictionary`2 has multiple constructors of length 2. Unable to disambiguate."

    When I attempt...

        IDictionary<string, string> myExampleDictionary = container.Resolve<IDictionary<string, string>>();

    Here was the moderated response:

    There are no books that'll help, Unity is a little too new for publishers to have caught up yet. Unfortunately, you've run into a bug in our generics handling. This is currently fixed in our internal version, but it'll be a little while before we can get the bits out. In the meantime, as a workaround you could do something like this instead:

        public class WorkaroundDictionary<TKey, TValue> : Dictionary<TKey, TValue>
        {
            public WorkaroundDictionary() { }
        }

        container.RegisterType(typeof(IDictionary<,>), typeof(WorkaroundDictionary<,>));

    The WorkaroundDictionary only has the default constructor so it'll inject no problem. Since the rest of your app is written in terms of IDictionary, when we get the fixed version done you can just replace the registration with the real Dictionary class, throw out the workaround, and everything will still just work. Sorry about the bug, it'll be fixed soon!

    Read the article

  • Read XML with PHP

    - by sea_1987
    I am trying to check a field in some XML that is returned from an outside source. The XML is returned in a variable called $out, and when you view the source of the page you get the following XML output:

        <?xml version="1.0" encoding="UTF-8"?>
        <ResponseBlock Live="FALSE" Version="3.51">
          <Response Type="AUTH">
            <OperationResponse>
              <TransactionReference>23-9-1334895</TransactionReference>
              <TransactionCompletedTimestamp>2010-04-30 15:59:05</TransactionCompletedTimestamp>
              <AuthCode>AUTH CODE:TEST</AuthCode>
              <TransactionVerifier>AlaUOS1MOnN/iwc5s2WPDm5ggrCLwesUnHs9h+W0N3CRaln2W6lh+6dtaRFFhLdwfnw6y7lRemyJUYl9a3dpWfzORE6DaZkFMb+dIb0Ne1UxjFEJkrEtjzx/i8KSayrIBrT/yGZOoOT42EZ9loc+UkdGk/pqYvj8bZztvgBNo2Ak=</TransactionVerifier>
              <Result>1</Result>
              <SettleStatus>0</SettleStatus>
              <SecurityResponseSecurityCode>1</SecurityResponseSecurityCode>
              <SecurityResponsePostCode>1</SecurityResponsePostCode>
              <SecurityResponseAddress>1</SecurityResponseAddress>
            </OperationResponse>
            <Order>
              <OrderInformation>This is a test order</OrderInformation>
              <OrderReference>Order0001</OrderReference>
            </Order>
          </Response>
        </ResponseBlock>

    I want to check what value is in the 'Result' field. I am unsure how to access the information using PHP; so far I have:

        $xml = simplexml_load_string($out);

    Many Thanks. (A sketch of the traversal follows this entry.)

    Read the article
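
    For reference, the element the poster is after sits at ResponseBlock -> Response -> OperationResponse -> Result. Here is a minimal sketch of that traversal using Python's standard library rather than PHP (SimpleXML exposes the same element path), just to make the nesting explicit:

        import xml.etree.ElementTree as ET

        # Trimmed-down version of the response shown above; $out in the question
        # would hold the full document.
        out = """<ResponseBlock Live="FALSE" Version="3.51">
          <Response Type="AUTH">
            <OperationResponse><Result>1</Result></OperationResponse>
          </Response>
        </ResponseBlock>"""

        root = ET.fromstring(out)                       # root element is <ResponseBlock>
        result = root.find("Response/OperationResponse/Result")
        print(result.text)                              # prints: 1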

  • starting oracle 10g on ubuntu, Listener failed to start.

    - by tsegay
    I have installed Oracle 10g on Ubuntu 10.x; this is my first-time installation. After installing, I tried to start it with the commands below.

        tsegay@server-name:/u01/app/oracle/product/10.2.0/db_1/bin$ lsnrctl

        LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 29-DEC-2010 22:46:51

        Copyright (c) 1991, 2005, Oracle.  All rights reserved.

        Welcome to LSNRCTL, type "help" for information.

        LSNRCTL> start
        Starting /u01/app/oracle/product/10.2.0/db_1/bin/tnslsnr: please wait...

        TNSLSNR for Linux: Version 10.2.0.1.0 - Production
        System parameter file is /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
        Log messages written to /u01/app/oracle/product/10.2.0/db_1/network/log/listener.log
        Error listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
        TNS-12555: TNS:permission denied
          TNS-12560: TNS:protocol adapter error
            TNS-00525: Insufficient privilege for operation
              Linux Error: 1: Operation not permitted

        Listener failed to start. See the error message(s) above...

    My listener.ora file looks like this:

        # listener.ora Network Configuration File: /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
        # Generated by Oracle configuration tools.

        SID_LIST_LISTENER =
          (SID_LIST =
            (SID_DESC =
              (SID_NAME = PLSExtProc)
              (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
              (PROGRAM = extproc)
            )
          )

        LISTENER =
          (DESCRIPTION_LIST =
            (DESCRIPTION =
              (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
              (ADDRESS = (PROTOCOL = TCP)(HOST = acct-vmserver)(PORT = 1521))
            )
          )

    I can guess that the problem is a permission issue, but I don't know where I have to change the permissions. Any help is appreciated...

    Read the article

  • non-copyable objects and value initialization: g++ vs msvc

    - by R Samuel Klatchko
    I'm seeing some different behavior between g++ and msvc around value initializing non-copyable objects. Consider a class that is non-copyable:

        class noncopyable_base
        {
        public:
            noncopyable_base() {}

        private:
            noncopyable_base(const noncopyable_base &);
            noncopyable_base &operator=(const noncopyable_base &);
        };

        class noncopyable : private noncopyable_base
        {
        public:
            noncopyable() : x_(0) {}
            noncopyable(int x) : x_(x) {}

        private:
            int x_;
        };

    and a template that uses value initialization so that the value will get a known value even when the type is POD:

        template <class T>
        void doit()
        {
            T t = T();
            ...
        }

    and trying to use those together:

        doit<noncopyable>();

    This works fine on msvc as of VC++ 9.0 but fails on every version of g++ I tested this with (including version 4.5.0) because the copy constructor is private.

    Two questions: Which behavior is standards compliant? Any suggestion of how to work around this in gcc (and to be clear, changing that to T t; is not an acceptable solution as this breaks POD types).

    P.S. I see the same problem with boost::noncopyable.

    Read the article

  • MSBuild file for deployment process

    - by Lee Englestone
    I could do with some pointers, code examples or references that may help me do the following in an MSBuild file to help speed up the deployment process. This scenario involves getting a developer's 'local' version onto a 'development' server:

    1. Increment the developer's local web application's assembly version number.
    2. Publish the developer's local web application files somewhere.
    3. .rar the published files or folder into the format v[IncrementedAssemblyNumber].rar.
    4. Copy the .rar to somewhere.
    5. Backup (.rar) the existing live website folder (located elsewhere) in the format Pre_v[IncrementedAssemblyNumber].rar.
    6. Move the backed-up .rar to a /Backup folder.
    7. Overwrite the development web files with the published local web files.

    Should be simple for all those MSBuild gurus out there. Like I said, answers or 'good and applicable' links would be much appreciated.

    Also, I'm thinking of getting one of the MSBuild books. From what I can tell there are 2, possibly 3 contenders. I am not using TFS. Can anyone recommend a book for beginning MSBuild? Ideally from people that have read more than one book on the subject.

    Cheers,
    -- Lee

    Read the article

  • SVN supports historical merges so how is Mercurial better?

    - by radman
    Hi, I'm a long-time SVN user and have been hearing a lot of brouhaha with regard to Mercurial and decentralised version control systems in general. The main touted feature that I am aware of is that merging in Mercurial is much easier because it records information for each merge, so each successive merge is aware of the previous ones. Now, as stated in the red book, in the section to do with merging, SVN already supports this with mergeinfo. I have not actually used this feature (although I wanted to, our repo version wasn't recent enough), but is this SVN feature particularly different to what Mercurial offers?

    For anyone who is not aware, the suggested workflow for historical merging in SVN is this: branch from the development trunk to do your own thing; regularly merge changes from trunk into your branch to stay up to date; merge back when you're done, with the mergeinfo to smooth the process. Without historical merge data this is a nightmare, because the comparison is strictly on the differences in the files and does not take into account the steps taken on the way. So each change in the development trunk puts you further into possible conflict when you merge back.

    Now what I would like to know is: Does merging using Mercurial provide a significant advantage when compared with mergeinfo in SVN, or is this just a lot of hot air about nothing? Has anyone used the mergeinfo feature in SVN, and how good is it actually in practice?

    Read the article

  • Removing related elements using XSLT 1.0

    - by pmdarrow
    I'm attempting to remove Component elements from the XML below that have File children with the extension "config." I've managed to do this part, but I also need to remove the matching ComponentRef elements that have the same "Id" values as these Components.

        <Fragment>
          <DirectoryRef Id="MyWebsite">
            <Component Id="Comp1">
              <File Source="Web.config" />
            </Component>
            <Component Id="Comp2">
              <File Source="Default.aspx" />
            </Component>
          </DirectoryRef>
        </Fragment>
        <Fragment>
          <ComponentGroup Id="MyWebsite">
            <ComponentRef Id="Comp1" />
            <ComponentRef Id="Comp2" />
          </ComponentGroup>
        </Fragment>

    Based on other SO answers, I've come up with the following XSLT to remove these Component elements:

        <?xml version="1.0" encoding="utf-8"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output method="xml" indent="yes" />

          <xsl:template match="Component[File[substring(@Source, string-length(@Source) - string-length('config') + 1) = 'config']]" />

          <xsl:template match="@*|node()">
            <xsl:copy>
              <xsl:apply-templates select="@*|node()"/>
            </xsl:copy>
          </xsl:template>
        </xsl:stylesheet>

    Unfortunately, this doesn't remove the matching ComponentRef elements (i.e. those that have the same "Id" values). The XSLT will remove the Component with the Id "Comp1" but not the ComponentRef with Id "Comp1". How do I achieve this using XSLT 1.0? (A sketch of the cross-reference logic follows this entry.)

    Read the article
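
    The tricky part is the cross-reference: the Id values of the removed Component elements have to drive a second pass that removes the matching ComponentRef elements. As a sketch of that two-pass logic only (written in Python, not an XSLT 1.0 answer, and assuming the two fragments are wrapped in a single root element so they parse as one document):

        import xml.etree.ElementTree as ET

        doc = ET.fromstring("""<Wix>
          <Fragment><DirectoryRef Id="MyWebsite">
            <Component Id="Comp1"><File Source="Web.config" /></Component>
            <Component Id="Comp2"><File Source="Default.aspx" /></Component>
          </DirectoryRef></Fragment>
          <Fragment><ComponentGroup Id="MyWebsite">
            <ComponentRef Id="Comp1" /><ComponentRef Id="Comp2" />
          </ComponentGroup></Fragment>
        </Wix>""")   # <Wix> is only a wrapper so both fragments parse together

        # Pass 1: collect the Ids of Components owning a File whose Source ends in "config".
        doomed = {c.get("Id") for c in doc.iter("Component")
                  if any(f.get("Source", "").endswith("config") for f in c.findall("File"))}

        # Pass 2: drop those Components and every ComponentRef carrying a matching Id.
        for parent in list(doc.iter()):
            for child in list(parent):
                if child.tag in ("Component", "ComponentRef") and child.get("Id") in doomed:
                    parent.remove(child)

        print(ET.tostring(doc, encoding="unicode"))   # Comp1 and its ComponentRef are gone

    One way to express the same thing in XSLT 1.0 would be a second empty template whose match expression repeats the "ends with config" test against the Components sharing the ComponentRef's Id; the Python above just spells out what that match has to accomplish.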

  • How to make TXMLDocument (with the MSXML Implementation) always include the encoding attribute?

    - by Fabricio Araujo
    I have legacy code (I didn't write it) that always included the encoding attribute, but after recompiling it with D2010, TXMLDocument doesn't include the encoding anymore. Because the XML data has accented characters both in tags and in data, TXMLDocument.LoadFromFile simply throws EDOMParseError saying that an invalid character was found in the file.

    Relevant code:

        Doc := TXMLDocument.Create(nil);
        try
          Doc.Active := True;
          Doc.Encoding := XMLEncoding;
          RootNode := Doc.CreateElement('Test', '');
          Doc.DocumentElement := RootNode;
          <snip>
          //Result := Doc.XMl.Text;
          Doc.SaveToXML(Result); // Both lines give the same result

    On older versions of Delphi, the following line is generated:

        <?xml version="1.0" encoding="ISO-8859-1"?>

    On D2010, this is generated:

        <?xml version="1.0"?>

    If I change the line manually, everything works as it always has in previous years.

    UPDATE: XMLEncoding is a constant and is defined as follows:

        XMLEncoding = 'ISO-8859-1';

    Read the article

  • Problem running a script on a different operating system

    - by Praveen kalal
    I use a piece of JavaScript code to get browser information. It runs fine on Microsoft Windows XP, but it is not working on Microsoft Windows Server 2003. My code is the following; please help.

        <html>
        <head>
          <script type="text/javascript" src="zeroclipboard/ZeroClipboard.js"></script>
          <script type="text/javascript">
            window.onload = function F() {
              var today = new Date();
              var the_date = new Date("December 31, 2012");
              var the_cookie_date = the_date.toGMTString();
              var the_cookie = screen.width + "x" + screen.height;
              var the_cookie = "Screen Resolution:" + the_cookie
                + ";\nExpires:" + the_cookie_date
                + ";\n Browser CodeName:" + navigator.appCodeName
                + ";\n Browser Name: " + navigator.appName
                + ";\n Browser Version: " + navigator.appVersion
                + ";\n Browser Version: " + navigator.appVersion
                + ";\n Cookies Enabled: " + navigator.cookieEnabled
                + ";\n Platform: " + navigator.platform
                + ";\n User-agent header: " + navigator.userAgent;
              document.getElementById('box-content').value = the_cookie;
            }
          </script>
        </head>
        <body>
          <textarea name="box-content" id="box-content" rows="10" cols="70"></textarea>
          <br /><br />
          <p><input type="button" id="copy" name="copy" value="Copy to Clipboard" /></p>
        </body>
        </html>
        <script type="text/javascript">
          // set path
          ZeroClipboard.setMoviePath('http://192.168.101.135:471/browserinfo/zeroclipboard/ZeroClipboard.swf');
          // create client
          var clip = new ZeroClipboard.Client();
          // event
          clip.addEventListener('mousedown', function() {
            clip.setText(document.getElementById('box-content').value);
          });
          clip.addEventListener('complete', function(client, text) {
            alert('text is copied');
          });
          // glue it to the button
          clip.glue('copy');
        </script>

    Read the article

  • How to loop a video in Flash

    - by james
    So I had a video that was in QuickTime format, threw it into Flash, and encoded it without a problem; here is the result I got: http://www.healthcarepros.net/travel.html

    I would like the video to "loop" or "autorewind" as soon as it ends, but I am having the hardest time trying to figure out how to do this. Here is my code; any help would be greatly appreciated...

        if (AC_FL_RunContent == 0) {
          alert("This page requires AC_RunActiveContent.js.");
        } else {
          AC_FL_RunContent(
            'codebase', 'http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=9,0,0,0',
            'width', '330',
            'height', '245',
            'src', 'healthcare-video',
            'quality', 'high',
            'pluginspage', 'http://www.macromedia.com/go/getflashplayer',
            'align', 'middle',
            'play', 'true',
            'loop', 'true',
            'scale', 'showall',
            'wmode', 'window',
            'devicefont', 'false',
            'id', 'healthcare-video',
            'bgcolor', '#ffffff',
            'name', 'healthcare-video',
            'menu', 'true',
            'allowFullScreen', 'false',
            'allowScriptAccess', 'sameDomain',
            'movie', 'healthcare-video',
            'salign', ''
          ); //end AC code
        }

        <object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000"
                codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=9,0,0,0"
                width="330" height="245" id="healthcare-video" align="middle">
          <param name="allowScriptAccess" value="sameDomain" />
          <param name="allowFullScreen" value="false" />
          <param name="loop" value="true" />
          <param name="play" value="true" />
          <param name="movie" value="healthcare-video.swf" />
          <param name="quality" value="high" />
          <param name="bgcolor" value="#ffffff" />
          <embed src="healthcare-video.swf" play="true"
                 flashvars="autoplay=true&play=true&autorewind=true"
                 quality="high" bgcolor="#ffffff" width="330" height="245"
                 name="healthcare-video" align="middle"
                 allowScriptAccess="sameDomain" allowFullScreen="false"
                 type="application/x-shockwave-flash"
                 pluginspage="http://www.macromedia.com/go/getflashplayer" />
        </object>

    Read the article

  • With Google Website Optimizer's multivariate testing, can I vary multiple css classes on a single div?

    - by brahn
    I would like to use Google Website Optimizer (GWO)'s multivariate tests to test some different versions of a web page. I can change from version to version just by varying some class tags on a div, i.e. the different versions are of this form:

        <div id="testing" class="foo1 bar1">content</div>
        <div id="testing" class="foo1 bar2">content</div>
        <div id="testing" class="foo2 bar1">content</div>
        <div id="testing" class="foo2 bar2">content</div>

    In the ideal case, I would be able to use GWO section code in place of each class, and Google would just swap in the appropriate tags (foo1 or foo2, bar1 or bar2). However, naively doing this results in horribly malformed code, because I would be trying to put <script> tags inside the div's class attribute:

        <div id="testing" class="
          <script>utmx_section("foo-class")</script>foo1</noscript>
          <script>utmx_section("bar-class")</script>bar1</noscript>
        ">
          content
        </div>

    And indeed, the browser chokes all over it. My current best approach is just to use a different div for each variable in the test, as follows:

        <script>utmx_section("foo-class-div")</script>
        <div class="foo1">
        </noscript>
          <script>utmx_section("bar-class-div")</script>
          <div class="bar1">
          </noscript>
            content
          </div>
        </div>

    So testing multiple variables requires a layer of div-nesting per variable, and it all seems rather awkward. Is there a better approach that I could use in which I just vary the classes on a single div?

    Read the article

  • How to search cvs comment history

    - by Chris Noe
    I am aware of this command:

        cvs log -N -w<userid> -d"1 day ago"

    Unfortunately this generates a formatted report with lots of newlines in it, such that the file-path, the file-version, and the comment-text are all on separate lines. Therefore it is difficult to scan it for all occurrences of comment text (eg, grep), and correlate the matches to file/version. (Note that the log output would be perfectly acceptable, if only cvs could perform the filtering natively.)

    EDIT: Sample output. A block of text like this is reported for each repository file:

        RCS file: /data/cvs/dps/build.xml,v
        Working file: build.xml
        head: 1.49
        branch:
        locks: strict
        access list:
        keyword substitution: kv
        total revisions: 57;    selected revisions: 1
        description:
        ----------------------------
        revision 1.48
        date: 2008/07/09 17:17:32;  author: noec;  state: Exp;  lines: +2 -2
        Fixed src.jar references
        ----------------------------
        revision 1.47
        date: 2008/07/03 13:13:14;  author: noec;  state: Exp;  lines: +1 -1
        Fixed common-src.jar reference.
        =============================================================================

    (A flattening sketch follows this entry.)

    Read the article
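
    One workaround (a sketch only, assuming the block layout shown in the sample above) is to flatten the log into one line per revision, so that grep hits keep the file and version attached to the comment text:

        import sys

        # Flatten 'cvs log' output into one grep-friendly line per revision:
        #   <working file> <revision>: <comment text>
        current_file, revision, comment = None, None, []

        def flush():
            if current_file and revision and comment:
                print(f"{current_file} {revision}: {' / '.join(comment)}")

        for raw in sys.stdin:
            line = raw.rstrip("\n")
            if line.startswith("Working file:"):
                current_file = line.split(":", 1)[1].strip()
            elif line.startswith("revision "):
                revision, comment = line.split()[1], []
            elif line.startswith("----------") or line.startswith("=========="):
                flush()                      # a separator line closes the revision block
                revision, comment = None, []
            elif revision and line and not line.startswith("date:"):
                comment.append(line)
        flush()

    Piped together (the script name here is made up): cvs log -N -w<userid> -d"1 day ago" | python flatten_cvslog.py | grep 'src.jar' would then print the build.xml revisions whose comments mention src.jar, one line each with the version attached.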

  • g++ Linking Error on Mac while compiling FFMPEG

    - by Saptarshi Biswas
    g++ on Snow Leopard is throwing linking errors on the following piece of code.

    test.cpp:

        #include <iostream>
        using namespace std;

        #include <libavcodec/avcodec.h>    // required headers
        #include <libavformat/avformat.h>

        int main(int argc, char **argv)
        {
            av_register_all();    // offending library call
            return 0;
        }

    When I try to compile this using the following command:

        g++ test.cpp -I/usr/local/include -L/usr/local/lib \
            -lavcodec -lavformat -lavutil -lz -lm -o test

    I get the error:

        Undefined symbols:
          "av_register_all()", referenced from:
              _main in ccUD1ueX.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status

    Interestingly, if I have equivalent C code, test.c:

        #include <stdio.h>

        #include <libavcodec/avcodec.h>
        #include <libavformat/avformat.h>

        int main(int argc, char **argv)
        {
            av_register_all();
            return 0;
        }

    gcc compiles it just fine:

        gcc test.c -I/usr/local/include -L/usr/local/lib \
            -lavcodec -lavformat -lavutil -lz -lm -o test

    I am using Mac OS X 10.6.5.

        $ g++ --version
        i686-apple-darwin10-g++-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664)
        $ gcc --version
        i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664)

    FFMPEG's libavcodec, libavformat etc. are C libraries and I have built them on my machine like this:

        ./configure --enable-gpl --enable-pthreads --enable-shared \
            --disable-doc --enable-libx264
        make && sudo make install

    As one would expect, libavformat indeed contains the symbol av_register_all:

        $ nm /usr/local/lib/libavformat.a | grep av_register_all
        0000000000000000 T _av_register_all
        00000000000089b0 S _av_register_all.eh

    I am inclined to believe g++ and gcc have different views of the libraries on my machine. g++ is not able to pick up the right libraries. Any clue?

    Read the article

  • How to apply minor updates to Drupal-6 with shared hosting

    - by marty.fried
    I've got Drupal working on a shared host, and I uploaded some modules from my home system successfully, but I've got the message that there is a security update for my version and I should update immediately. I'm not sure how I'm supposed to do that. It seems like the update is an entire new installation. I originally installed it using the hosting company's installer, Fantastico.

    Should I simply overwrite the existing installation with the new files? Or ignore the message? I realize I shouldn't overwrite the sites folder, or anything I've modified. The instructions that come with the download seem to be for a major version upgrade, and are way too much trouble for frequent security updates. Searching Drupal's site shows many other methods, but no indication of anything official. And some were ridiculously error-prone, and not really useful.

    I don't have shell access to the hosting site, although I can pay extra to get it if I really need to. Or maybe I can clone the site on my local Linux system, do the update using a script, then upload the whole thing. Does anyone have experience with this situation?

    Read the article

  • What should the standard be for ReSTful URLs?

    - by gargantaun
    Since I can't find a chuffing job, I've been reading up on ReST and creating web services. The way I've interpreted it, the future is all about creating a web service for all your data before you build the web app. Which seems like a good idea. However, there seems to be a lot of contradictory thoughts on what the best scheme is for ReSTful URLs. Some people advocate simple pretty urls:

        http://api.myapp.com/resource/1

    In addition, some people like to add the API version to the url, like so:

        http://api.myapp.com/v1/resource/1

    And to make things even more confusing, some people advocate adding the content-type to get requests:

        http://api.myapp.com/v1/resource/1.xml
        http://api.myapp.com/v1/resource/1.json
        http://api.myapp.com/v1/resource/1.txt

    Whereas others think the content-type should be sent in the HTTP header. Soooooooo.... That's a lot of variation, which has left me unsure of what the best URL scheme is. I personally see the merits of the most comprehensive URL that includes a version number, resource locator and content-type, but I'm new to this so I could be wrong.

    On the other hand, you could argue that you should do "whatever works best for you". But that doesn't really fit with the ReST mentality as far as I can tell, since the aim is to have a standard. And since a lot of you people will have more experience than me with ReST, I thought I'd ask for some guidance. So, with all that in mind... What should the standard be for ReSTful URLs?

    Read the article

  • "string" != "string"

    - by Misiur
    Hi. I'm doing some kind of templating system of my own. I want to change

        <title>{site('title')}</title>

    into execution of the function "site" with the parameter "title". Here's the replacement code:

        private function replaceFunc($subject)
        {
            foreach ($this->func as $t) {
                $args = explode(", ", preg_replace('/\{'.$t.'\(\'([a-zA-Z,]+)\'\)\}/', '$1', $subject));
                $subject = preg_replace('/\{'.$t.'\([a-zA-Z,\']+\)\}/', call_user_func_array($t, $args), $subject);
            }
            return $subject;
        }

    Here's site:

        function site($what)
        {
            global $db;
            $s = $db->askSingle("SELECT * FROM ".DB_PREFIX."config");
            switch ($what) {
                case 'title':
                    return 'Title of page';
                    break;
                case 'version':
                    return $s->version;
                    break;
                case 'themeDir':
                    return 'lolmao';
                    break;
                default:
                    return false;
            }
        }

    I've tried to compare $what (which in this case should be "title") with "title". The MD5s are different, strcmp gives -1, and both "==" and "===" return false. What is wrong? ($what's type is string. You can't change call_user_func_array into call_user_func, because later I'll be using multiple arguments.) (A sketch follows this entry.)

    Read the article
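
    One thing worth checking here: preg_replace() returns the entire subject with the match replaced, not the captured group on its own, so $args above ends up holding the whole template text (with {site('title')} collapsed to title) rather than the bare string "title", and the comparison inside site() can then never succeed. For contrast, here is the same token-dispatch idea sketched in Python, where the substitution callback only ever sees the captured pieces (site() is reduced to a stub for the demo):

        import re

        def site(what):
            # Stand-in for the PHP site() function; the real one reads the database.
            values = {"title": "Title of page", "version": "1.0", "themeDir": "lolmao"}
            return values.get(what, "")

        FUNCS = {"site": site}

        def render(template):
            # {site('title')} -> FUNCS["site"]("title"); the lambda receives only the
            # captured function name and argument, never the surrounding template text.
            return re.sub(r"\{(\w+)\('([^']*)'\)\}",
                          lambda m: str(FUNCS[m.group(1)](m.group(2))),
                          template)

        print(render("<title>{site('title')}</title>"))   # <title>Title of page</title>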
