Search Results

Search found 11944 results on 478 pages for 'struts2 json plugin'.

Page 423/478 | < Previous Page | 419 420 421 422 423 424 425 426 427 428 429 430  | Next Page >

  • How to create MobileSubstrate plugins in Xcode?

    - by prathumca
     Hi everyone, I want to create a MobileSubstrate plugin to hook SpringBoard. I'm following the "gojohnnyboi" tutorial from "http://www.ipodtouchfans.com/forums/showthread.php?t=103558", and to create a dylib in Xcode I'm following the "SkylarEC" tutorial. I combined these two great tutorials and finally succeeded in building a dylib. But when I placed the dylib in /Library/MobileSubstrate/DynamicLibraries/, nothing happened (no alert was shown). On investigation, I found that this dylib doesn't have any entry point when it is loaded into memory. So I added one by declaring a constructor in the .mm file like this: __attribute__((constructor)) static void init() { Class _$SBAppIcon = objc_getClass("SBApplicationIcon"); MSHookMessage($SBAppIcon, @selector(launch), (IMP) &_$ExampleHook_AppIcon_Launch, "_OriginalMethodPrefix"); } But when I try to compile this, I get an error like: Undefined symbols: "_MSHookMessage", referenced from: init() in ExampleHook.o ld: symbol(s) not found collect2: ld returned 1 exit status. Does anyone have an idea how to solve this? It would be great and very helpful if anyone could share a detailed tutorial/instructions for creating a dylib in Xcode. P.S. I placed libsubstrate.dylib and substrate.h in the corresponding locations: libsubstrate.dylib: /Developer/Platforms/iPhoneOS.platform/Developer/usr/lib/ and substrate.h: /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.0.sdk/usr/include, and my base SDK is 3.0.


  • MVC2 and jquery.validate.js

    - by Will I Am
     I am experiencing some confusion with jquery.validate.js. First of all, what is MicrosoftMvcJqueryValidation.js? It is referenced in snippets on the web but appears to have disappeared from the RTM MVC2 release and is now in Futures. Do I need it? The reason I'm asking is that I'm trying to use the validator with MVC and I can't get it to work. I defined my JS as: $(document).ready(function () { $("#myForm").validate({ rules: { f_fullname: { required: true }, f_email: { required: true, email: true }, f_email_c: { equalTo: "#f_email" }, f_password: { required: true, minlength: 6 }, f_password_c: { equalTo: "#f_password" } }, messages: { f_fullname: { required: "* required" }, f_email: { required: "* required", email: "* not a valid email address" }, f_email_c: { equalTo: "* does not match the other email" }, f_password: { required: "* required", minlength: "password must be at least 6 characters long" }, f_password_c: { equalTo: "* does not match the other email" } } }); }); and my form on the view: <% using (Html.BeginForm("CreateNew", "Account", FormMethod.Post, new { id = "myForm" })) { %> <fieldset> <label for="f_fullname">name:</label><input id="f_fullname"/> <label for="f_email"><input id="f_email"/> ...etc... <input type="submit" value="Create" id="f_submit"/> </fieldset> <% } %> The validation method gets called on .ready() with no errors in Firebug. However, when I submit the form, nothing gets validated and the form gets submitted. If I create a submitHandler() it gets called, but the plugin doesn't detect any validation errors (.valid() == true). What am I missing?
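
     A detail worth noting: jquery.validate matches the keys under rules and messages against each input's name attribute, not its id, and it skips inputs that have no name at all, which would produce exactly the symptom described above (the plugin initialises without errors but never flags anything). A minimal sketch of a working setup, assuming name attributes are added to the markup; the field names are taken from the question, everything else is illustrative:

       $(document).ready(function () {
           // jquery.validate keys rules on the "name" attribute, so every
           // validated input needs one, e.g. <input id="f_email" name="f_email" />
           $("#myForm").validate({
               rules: {
                   f_email: { required: true, email: true },
                   f_password: { required: true, minlength: 6 },
                   f_password_c: { equalTo: "#f_password" }
               },
               messages: {
                   f_email: { required: "* required", email: "* not a valid email address" }
               }
           });
       });

     As far as I understand, MicrosoftMvcJqueryValidation.js is only needed when you want MVC's Html.EnableClientValidation metadata wired into jquery.validate automatically; a hand-written validate() call like the one above does not depend on it.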


  • How to obtain dependency metrics from Java source code?

    - by Bram Schoenmakers
    For an assignment we have to extract some software metrics from the Hibernate project. We have to extract the afferent coupling and efferent coupling metrics (dependency fan-in, fan-out) from each revision of each package in Hibernate. Some tools were provided which are able to extract these metrics, such as ckjm and JDepend. Other tools I have checked were Sonar, javancss and AOP. There is also the Metrics Eclipse plugin which I didn't get to work either. What these tools have in common, as far as I can see, is that they all operate on bytecode (*.class files). This is a problem, because I have to build every revision from source in order to run, say, JDepend on it. Older revisions won't build because my development stack is too recent. What I would like to do is to do this kind of analysis on source files so that I don't have to build each revision. Is this possible? Or is there a good reason why all these tools only operate on bytecode?


  • jQuery.fn.extend({bla: function(){}}) vs. jQuery.fn.bla

    - by tixrus
     OK, I think I get http://stackoverflow.com/questions/1991126/difference-jquery-extend-and-jquery-fn-extend in that the general extend can extend any object, and that fn.extend is for plugin functions that can be invoked straight off the jQuery object with some internal jQuery voodoo. So it appears one would invoke them differently. If you use the general extend to extend object obj by adding function y, then the method attaches to that object, obj.y(), but if you use fn.extend then the functions attach straight to the jQuery object, $.y(). Have I got that correct, yes or no, and if no, what do I have wrong in my understanding? Now MY question: the book I am reading advocates the jQuery.fn.extend({a: function(){}, b: function(){}}); syntax, but the docs show jQuery.fn.a = function(){};, and I guess if you wanted b as well it would be jQuery.fn.b = function(){};. Are these functionally and performance-wise equivalent, and if not, what is the difference? Thank you very much. I am digging jQuery!
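
     For reference, the two forms being compared are just two ways of adding methods to jQuery's prototype (jQuery.fn), and the documented single-method form is an assignment rather than a call. A small sketch with made-up plugin names:

       // Adding several plugin methods at once:
       jQuery.fn.extend({
           highlight: function () {
               return this.css("background-color", "yellow"); // return this.css(...) keeps chaining
           },
           dim: function () {
               return this.css("opacity", 0.5);
           }
       });

       // Adding a single plugin method by direct assignment (what the docs show):
       jQuery.fn.highlight = function () {
           return this.css("background-color", "yellow");
       };

       // Either way, the method is called on a wrapped set:
       $("p").highlight().dim();

     Functionally the two are equivalent (fn.extend simply copies each property onto jQuery.fn), and any performance difference is negligible; extend is just more convenient when a plugin defines several methods at once. Note that methods added to jQuery.fn are invoked on a wrapped set like $("p").highlight(), whereas properties added with the general jQuery.extend hang off the jQuery object itself, like $.something().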


  • How to produce an HTTP 403-equivalent WCF Message from an IErrorHandler?

    - by Andras Zoltan
    I want to write an IErrorHandler implementation that will handle AuthenticationException instances (a proprietary type), and then in the implementation of ProvideFault provide a traditional Http Response with a status code of 403 as the fault message. So far I have my first best guess wired into a service, but WCF appears to be ignoring the output message completely, even though the error handler is being called. At the moment, the code looks like this: public class AuthWeb403ErrorHandler : IErrorHandler { #region IErrorHandler Members public bool HandleError(Exception error) { return error is AuthenticationException; } public void ProvideFault(Exception error, MessageVersion version, ref Message fault) { //first attempt - just a stab in the dark, really HttpResponseMessageProperty property = new HttpResponseMessageProperty(); property.SuppressEntityBody = true; property.StatusCode = System.Net.HttpStatusCode.Forbidden; property.StatusDescription = "Forbidden"; var m = Message.CreateMessage(version, null); m.Properties[HttpResponseMessageProperty.Name] = property; fault = m; } #endregion } With this in place, I just get the standard WCF html 'The server encountered an error processing the request. See server logs for more details.' - which is what would happen if there was no IErrorHandler. Is this a feature of the behaviours added by WebServiceHost? Or is it because the message I'm building is simply wrong!? I can verify that the event log is indeed not receiving anything. My current test environment is a WebGet method (both XML and Json) hosted in a service that is created with the WebServiceHostFactory, and Asp.Net compatibility switched off. The service method simply throws the exception in question.


  • Issues with cross-domain uploading

    - by meder
     I'm using a Django plugin called django-filebrowser which utilizes Uploadify. The issue I'm having is that I'm hosting uploadify.swf on a remote static media server, whereas my admin area is on my Django server. At first, the browse button wouldn't invoke my browser's upload dialog. I fixed this by changing the scriptAccess option from sameDomain to always. Now the progress bar doesn't move at all; I probably have to enable some server setting for cross-domain file uploading, or most likely actually host a separate upload script on my media server. I thought I could solve this by adding a crossdomain.xml at the root of both servers to allow any site, but that doesn't seem to solve it. $(document).ready(function() { $('#id_file').uploadify({ 'uploader' : 'http://media.site.com:8080/admin/filebrowser/uploadify/uploadify.swf', 'script' : '/admin/filebrowser/upload_file/', 'scriptData' : {'session_key': '...'}, 'checkScript' : '/admin/filebrowser/check_file/', 'cancelImg' : 'http://media.site.com:8080/admin/filebrowser/uploadify/cancel.png', 'auto' : false, 'folder' : '', 'multi' : true, 'fileDesc' : '*.html;*.py;*.js;*.css;*.jpg;*.jpeg;*.gif;*.png;*.tif;*.tiff;*.mp3;*.mp4;*.wav;*.aiff;*.midi;*.m4p;*.mov;*.wmv;*.mpeg;*.mpg;*.avi;*.rm;*.pdf;*.doc;*.rtf;*.txt;*.xls;*.csv;', 'fileExt' : '*.html;*.py;*.js;*.css;*.jpg;*.jpeg;*.gif;*.png;*.tif;*.tiff;*.mp3;*.mp4;*.wav;*.aiff;*.midi;*.m4p;*.mov;*.wmv;*.mpeg;*.mpg;*.avi;*.rm;*.pdf;*.doc;*.rtf;*.txt;*.xls;*.csv;', 'sizeLimit' : 10485760, 'scriptAccess' : 'always', //'scriptAccess' : 'sameDomain', 'queueSizeLimit' : 50, 'simUploadLimit' : 1, 'width' : 300, 'height' : 30, 'hideButton' : false, 'wmode' : 'transparent', translations : { browseButton: 'BROWSE', error: 'An Error occured', completed: 'Completed', replaceFile: 'Do you want to replace the file', unitKb: 'KB', unitMb: 'MB' } }); $('input:submit').click(function(){ $('#id_file').uploadifyUpload(); return false; }); }); The page I'm viewing this on is http://site.com/admin/filebrowser/browse on port 80.


  • How can I debug why this click handler never fires?

    - by tixrus
     I am going to be excruciatingly detailed here. I am using Firefox 3.6.3 on Mac OS X with Firebug 1.5.3. I have two versions of a project, one which works and one with a bug. One I downloaded and one I typed by hand; take a guess which one doesn't work. They should be the same except that mine uses a newer version of jQuery and the files are named differently. The jQuery version is not the issue: I made mine use the older jQuery and the working one use the newer jQuery, and either way mine still broke and the downloaded one still worked. I've strained my eyes trying to see how these projects differ. The only thing I don't want to do is copy the working code over the broken code, because I need to be able to figure this stuff out when it is my own unique code causing similar issues. There are no errors that I can see in Firebug in my code; in fact, 2/3 of it works just fine, and just the second button does nothing. So I wanted to step through. These are always eyeball errors and I really suck at seeing them. I put it on a public server at http://colleenweb.com/jqtests/ex71.html and I want to debug ex71.js. If you open the working one in Firebug and set a breakpoint at line 13 in ex71.js, the variable json has the expected values when you click on the second button. But if you do the same on this one, it never gets there. I've been over the HTML and all the names seem to match up. I also wonder why the buttons aren't right-justified, but that's a CSS thing. Please tell me what I'm missing, and more importantly, what tool/technique I could use to find these types of bugs.
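
     As a general technique for this class of bug (independent of the specific files above), it helps to confirm at bind time that the selector actually matched something and that the handler is attached only after the element exists; the selector name and logging below are hypothetical:

       $(document).ready(function () {
           // 1. Does the selector match anything at the time we bind?
           console.log("buttons found:", $("#secondButton").length);

           // 2. Bind and log, so a click that never arrives is distinguishable
           //    from a handler that runs but fails part-way through.
           $("#secondButton").click(function () {
               console.log("second button clicked");
           });
       });

     With Firebug, a breakpoint inside the handler only ever helps once the handler really is bound, so logging the matched length first usually narrows down whether the problem is the binding or the handler.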


  • Getting error detail from WCF REST

    - by Keith
    I have a REST service consumed by a .Net WCF client. When an error is encountered the REST service returns an HTTP 400 Bad Request with the response body containing JSON serialised details. If I execute the request using Fiddler, Javascript or directly from C# I can easily access the response body when an error occurs. However, I'm using a WCF ChannelFactory with 6 quite complex interfaces. The exception thrown by this proxy is always a ProtocolException, with no useful details. Is there any way to get the response body when I get this error? Update I realise that there are a load of different ways to do this using .Net and that there are other ways to get the error response. They're useful to know but don't answer this question. The REST services we're using will change and when they do the complex interfaces get updated. Using the ChannelFactory with the new interfaces means that we'll get compile time (rather than run time) exceptions and make these a lot easier to maintain and update the code. Is there any way to get the response body for an error HTTP status when using WCF Channels?


  • How to resize a Flot graph when its containing div changes size

    - by Will Gorman
     I'm using the Flot graphing library jQuery plugin and I haven't found a good way to handle resizing the graph when its containing <div> changes size (for example, due to window resizing). When handling the onresize event, I've made sure that the width and height of the containing <div> are updated to the correct size and then tried calling both setupGrid and draw on the plot object, but with no effect. I've had some success with the approach of just removing and re-adding the containing <div> and replotting the graph in it. However, this seems prone to getting stuck in infinite resize event loops if I have to add other <div> elements to the document at the same time (like tooltips for the graph), as I'm guessing those can trigger resize events as well. Is there a good way to handle it that I'm missing? (I'm also using ExplorerCanvas for IE in order to be able to use Flot, if that might have anything to do with it. I haven't really tried any other browsers yet.)
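
     One workaround that sidesteps the resize-loop problem is to debounce the window resize event and simply re-plot into the same placeholder once resizing has settled. A rough sketch, assuming a placeholder with id "graph" and sample data/options (both made up here):

       // Sample data/options purely for illustration; use whatever the page already plots.
       var data = [[[0, 1], [1, 3], [2, 2]]];
       var options = {};
       var resizeTimer = null;

       $(window).resize(function () {
           // Debounce: re-plot only after resize events have stopped for 100 ms,
           // so DOM changes made while re-plotting cannot retrigger an endless loop.
           clearTimeout(resizeTimer);
           resizeTimer = setTimeout(function () {
               // By this point the containing div has its new width/height from the layout.
               $.plot($("#graph"), data, options);
           }, 100);
       });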


  • NoSuchMethod exception thrown in GWT

    - by eb1
    I'm starting to get my feet wet in the latest Google Web Toolkit using the Eclipse plugin on OS X 10.5.8. So far I've been able to get the client up and running, which is great. The server, though, is a different matter. I'm trying to link to a .jar file that has some classes I want to use in a server ServiceImpl class, but it seems to have glommed onto a previous iteration of the .jar - I've added a method, rebuilt the jar, removed the jar from the libraries tab on the GWT project's build path (as well as on the exports) and reincluded the jar. No luck - I'm still getting: [WARN] Exception while dispatching incoming RPC call com.google.gwt.user.server.rpc.UnexpectedException: Service method 'public abstract org.gwtapplication.client.LWDocument org.gwtapplication.client.DocumentService.getDocument()' threw an unexpected exception: java.lang.NoSuchMethodError: org.externalmodel.MyReallyValidClass.toSomething()Ljava/lang/String; at com.google.gwt.user.server.rpc.RPC.encodeResponseForFailure(RPC.java:378) at com.google.gwt.user.server.rpc.RPC.invokeAndEncodeResponse(RPC.java:581) ... Caused by: java.lang.NoSuchMethodError: org.externalmodel.MyReallyValidClass.toSomething()Ljava/lang/String; at org.application.server.DocumentServiceImpl.getDocument(DocumentServiceImpl.java:45) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) Eclipse's code sense has no problems resolving the MyReallyValidClass.toSomething() call, and there are no errors with other calls into the externalmodel classes. Any clue where I should be looking?


  • Google Map GEO Results

    - by Lee
     Hey all, I'm getting really frustrated with Google geo results and hope someone can advise me on the best way to go. I have created an AutoSuggest feature where you can start typing the address and Google will respond with suggestions; the user then selects an address to move on. But before I let them continue to the next page I want to validate their selection. I would have thought this would be easy, as we are only checking against what Google has already given. But when I do my validation lookup it displays no results. Some example code: let's say I picked this address from the suggestions: Suffield, CT 06078, USA. Then on validation I do a second lookup with this address, i.e. $string = "Suffield, CT 06078, USA"; echo 'http://maps.google.com/maps/geo?output=json&oe=utf8&gl=us&sensor=false&key=[MyKey]&q='.urlencode($string).''; It gives me error code 602 (G_GEO_UNKNOWN_ADDRESS). How can it not be found when it has already given me the address? Any suggestions on how I can get around this? Hope you can!


  • NSIS takes ownership of IIS system files

    - by Lucas
     I recently encountered an issue with NSIS that I believe is related to an interaction with UAC, but I am at a loss to explain it and I do not know how to prevent it in the future. I have an installer that creates and removes IIS virtual directories using the NsisIIS plugin. The installer appeared to work correctly on my Windows 7 workstation. When the installer was run on a Windows 2008 R2 server it installed properly, but the uninstaller removed all of the virtual directories and put IIS in an unusable state, to the point that I had to remove the Default Web Site and re-add it. What I eventually found was that all of the IIS configuration files under C:\Windows\System32\inetsrv\config had a lock icon on them. Some investigation seems to indicate that this means a user account has taken ownership of the file; however, all the files listed SYSTEM as the file owner. I did check a different server that I have not run the installer on, and it does not have the lock icon applied to the IIS files. I have also seen the same lock icon appear on other files that the NSIS installer creates. For instance, I have a Web.Config.tpl file that is processed using the NSIS ReplaceInFile, which also appears with the lock icon after the installer finishes. After I explicitly grant another user account access to the file, the lock icon goes away. I run the installer under the local Administrator account on the 2008 R2 server, so I do not get the UAC prompt. Here is the relevant code from the install.nsi file: RequestExecutionLevel admin Section "Application" APP_SECTION SectionIn RO Call InstallApp SectionEnd Section "un.Uninstaller Section" Delete "$PROGRAMFILES\${PROGRAMFILESDIR}\Uninstall.exe" Call un.InstallApp SectionEnd Function InstallApp File /oname=Web.Config Web.Config.tpl !insertmacro ReplaceInFile Web.Config %CONNECTION_STRING% $CONNECTION_STRING FunctionEnd Function un.InstallApp ReadRegStr $0 HKLM "Software\${REGKEY}" "VirtualDir" NsisIIS::DeleteVDir "$0" Pop $0 FunctionEnd I have three questions stemming from this incident: How did this happen? How can I fix my installer to prevent it from happening again? And how can I repair the permissions on the IIS config files?


  • jquery ui autocomplete problem

    - by Roger
     Hi, I've got a select box containing countries, and when one is selected I want my autocomplete data for the city field to load via Ajax. Here's my code: // Sets up the autocompleter depending on the currently // selected country $(document).ready(function() { var cache = getCities(); $('#registration_city_id').autocomplete( { source: cache } ); cache = getCities(); // update the cache array when a different country is selected $("#registration_country_id").change(function() { cache = getCities(); }); }); /** * Gets the cities associated with the currently selected country */ function getCities() { var cityId = $("#registration_country_id :selected").attr('value'); return $.getJSON("/ajax/cities/" + cityId + ".html"); } This returns the following JSON: ["Aberdare","Aberdeen","Aberystwyth","Abingdon","Accrington","Airdrie","Aldershot","Alfreton","Alloa","Altrincham","Amersham","Andover","Antrim","Arbroath","Ardrossan","Arnold","Ashford","Ashington","Ashton-under-Lyne","Atherton","Aylesbury","Ayr",... ] But it doesn't work. When I start typing in the city box, the style changes so the autocompleter is doing something, but it won't display this data. If I hard-code the above it works. Can anyone see what's wrong? Thanks
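
     One likely culprit, for what it's worth: $.getJSON() returns the XHR/deferred object rather than the array of city names, so source ends up holding something the widget cannot use. A sketch of the callback-style source that jQuery UI autocomplete supports, reusing the URL and element ids from the question:

       $(document).ready(function () {
           $("#registration_city_id").autocomplete({
               // A function source runs on every keystroke and hands its results to the widget.
               source: function (request, response) {
                   var countryId = $("#registration_country_id :selected").attr("value");
                   $.getJSON("/ajax/cities/" + countryId + ".html", function (data) {
                       // Narrow the full city list down to entries matching what was typed.
                       response($.ui.autocomplete.filter(data, request.term));
                   });
               },
               minLength: 1
           });
       });

     $.ui.autocomplete.filter() is the widget's own helper for matching an array against the typed term; alternatively, the city list could be fetched inside the country change handler and assigned with .autocomplete("option", "source", data) from the getJSON callback.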


  • Dreamweaver and GZIP files

    - by Vian Esterhuizen
     Hi, I've recently tried to optimize my site for speed and bandwidth. Amongst many other techniques, I've used GZIP on my .css and .js files. Using PuTTY I compressed the files on my site and then used: <IfModule mod_rewrite.c> RewriteEngine On RewriteCond %{HTTP:Accept-encoding} gzip RewriteCond %{HTTP_USER_AGENT} !Konqueror RewriteCond %{REQUEST_FILENAME}.gz -f RewriteRule ^(.*)\.css$ $1.css.gz [QSA,L] RewriteRule ^(.*)\.js$ $1.js.gz [QSA,L] <FilesMatch \.css\.gz$> ForceType text/css </FilesMatch> <FilesMatch \.js\.gz$> ForceType text/javascript </FilesMatch> </IfModule> <IfModule mod_mime.c> AddEncoding gzip .gz </IfModule> in my .htaccess file so that they get served properly, because all my links are without the ".gz". My problem is, I can't work on the gzipped files in Dreamweaver. Is there a plugin or extension of some sort that allows Dreamweaver to temporarily uncompress these files so it can read them? Or is there a way that I can work on my local copies as regular files and have them automatically compressed server-side when they are uploaded? Or is there a different code editor I should be using that would completely get around this? Or just a different technique for doing this? I hope this question makes sense. Thanks


  • Use VersionControlExt.Explorer outside Visual Studio

    - by Ian
     Hi all, I'm developing a TFS tool to assist the developers in our company. This tool needs to be able to "browse" the TFS server like in the Source Control Explorer. I believe that by using VersionControlExt.Explorer.SelectedItems, a UI will pop up that will enable the user to browse the TFS server (please correct me if I'm wrong). However, VersionControlExt is only accessible when developing inside Visual Studio (i.e., as a plugin). Unfortunately, I am developing a Windows application that won't run inside VS. So the question is: can I use VersionControlExt outside of Visual Studio? If yes, how? Here's an attempt at using the Changeset Details dialog outside of Visual Studio: string path = System.IO.Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location); Assembly vcControls = Assembly.LoadFile(path + @"\Microsoft.TeamFoundation.VersionControl.Controls.dll"); Assembly vcClient = Assembly.LoadFile(path + @"\Microsoft.TeamFoundation.VersionControl.Client.dll"); Type dialogChangesetDetailsType = vcControls.GetType("Microsoft.TeamFoundation.VersionControl.Controls.DialogChangesetDetails",true); Type[] ctorTypes = new Type[3] {vcClient.GetType("Microsoft.TeamFoundation.VersionControl.Client.VersionControlSever"), vcClient.GetType("Microsoft.TeamFoundation.VersionControl.Client.Changeset"), typeof(System.Boolean)}; ConstructorInfo ctorInfo = dialogChangesetDetailsType.GetConstructor(ctorTypes); Object[] ctorObjects = new Object[3] {VersionControlHelper.CurrentVersionControlServer, uc.ChangeSet, true}; Object oDialog = ctorInfo.Invoke(ctorObjects); dialogChangesetDetailsType.InvokeMember("ShowDialog", BindingFlags.InvokeMethod, null, oDialog, null);


  • How do I dynamically create a document for download in Javascript?

    - by Nelson
     I'm writing some JavaScript code that generates an XML document in the client (via the Google Earth plugin). I'd like the user to be able to click a button on the page and be prompted to save that XML to a new file. If I were generating the XML server-side this would be easy: just make the button open the link. But the XML is generated client-side. I've come up with a couple of hacks that half-work, inspired in part by this StackOverflow question. But neither completely works. Here's a demo HTML page with embedded code: <html><head><script> function getData() { return '<?xml version="1.0" encoding="UTF-8"?><doc>Hello</doc>'; } function dlDataURI() { window.open("data:text/xml;charset=utf-8," + getData()); } function dlWindow() { var w = window.open(); w.document.open(); w.document.write(getData()); w.document.close(); } </script><body> <div onclick="dlDataURI()">Click for Data URL</div> <div onclick="dlWindow()">Click for Window</div> </body></html> The dlDataURI() version works great in Firefox, poorly in Chrome (can't save), and not at all in IE. The dlWindow() version works OK in Firefox and IE, and not well in Chrome (can't save, XML embedded inside HTML). Neither version ever prompts the user to download; it always opens a new window trying to display the XML. Is there a good way to do what I want in client-side JavaScript? I'd like this to work in today's browsers, ideally Firefox, MSIE 8, and Chrome.
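
     For what it's worth, in browsers newer than the ones targeted above, the usual client-side answer is a Blob plus a temporary link with a download attribute, which does produce a save prompt; this sketch assumes a reasonably current browser (so not MSIE 8) and a made-up file name:

       function dlBlob() {
           // Build the XML in memory and wrap it in a Blob.
           var blob = new Blob([getData()], { type: "application/xml" });
           var url = URL.createObjectURL(blob);

           // A temporary link with a download attribute triggers a save prompt.
           var a = document.createElement("a");
           a.href = url;
           a.download = "document.xml"; // suggested file name (hypothetical)
           document.body.appendChild(a);
           a.click();
           document.body.removeChild(a);

           URL.revokeObjectURL(url); // release the object URL once the click is done
       }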


  • Twitter-OAuth update_profile_*_image methods problem [EpiTwitter]

    - by KPL
     People, I have been struggling with the two methods Update Profile Image and Update Background Image. I am using the EpiTwitter library and uploading GIFs. Twitter returns the expected result for update_profile_background_image but returns 401 for update_profile_image, and the image is not changed. Here are the headers caught from $apiObj->headers in my case while using update_profile_background_image: Array ( [Date] => Sat, 24 Apr 2010 17:51:36 GMT [Server] => hi [Status] => 200 OK [X-Transaction] => 1272131495-55190-23911 [ETag] => "b6a421c01936f3547802ae6b59ee7ef3" [Last-Modified] => Sat, 24 Apr 2010 17:51:36 GMT [X-Runtime] => 0.13990 [Content-Type] => application/json; charset=utf-8 [Content-Length] => 1272 [Pragma] => no-cache [X-Revision] => DEV [Expires] => Tue, 31 Mar 1981 05:00:00 GMT [Cache-Control] => no-cache, no-store, must-revalidate, pre-check=0, post-check=0 [Set-Cookie] => *REMOVED* [Vary] => Accept-Encoding [Connection] => close ) and for update_profile_image: Array ( [Date] => Sat, 24 Apr 2010 17:57:58 GMT [Server] => hi [Status] => 401 Unauthorized [WWW-Authenticate] => Basic realm="Twitter API" [X-Runtime] => 0.02263 [Content-Type] => text/html; charset=utf-8 [Content-Length] => 152 [Cache-Control] => no-cache, max-age=300 [Set-Cookie] => *REMOVED* [Expires] => Sat, 24 Apr 2010 18:02:58 GMT [Vary] => Accept-Encoding [Connection] => close ) Can somebody help me out?


  • Migrating data from Plone to Liferay, or how could I retrieve information from Plone's Data.fs

    - by brandizzi
     Hello, all. I need to migrate data from a Plone-based portal to Liferay. Does anyone have an idea how to do it? In any case, I am trying to retrieve data from Data.fs and store it in a representation that is easier to work with, such as JSON. To do that, I need to know which objects I should get from Plone's Data.fs. I already got the Products.CMFPlone.Portal.PloneSite instance from the Data.fs, but I cannot get anything out of it. I would like to get the PloneSite instance and do something like this: >>> import ZODB >>> from ZODB import FileStorage, DB >>> path = r"C:\Arquivos de programas\Plone\var\filestorage\Data.fs" >>> storage = FileStorage.FileStorage(path) >>> db = DB(storage) >>> conn = db.open() >>> root = conn.root() >>> app = root['Application'] >>> plone_site = app.getChildNodes()[13] # 13 would be index of PloneSite object >>> a = plone_site.get_articles() >>> for article in a: ... print "Title:", a.title ... print "Content:", a.content Title: <some title> Content: <some content> Title: <some title> Content: <some content> Of course, it does not need to be so straightforward. I just want some information about the structure of PloneSite and how to recover its data. Does anyone have some idea? Thank you in advance!


  • How can I stop rails validating xml?

    - by Andrei T. Ursan
     I'm submitting the following message to a Rails web service: xmlPostData = "<message> <message-text>" + MESSAGE_WITH_XML + "</message-text> <name>" + subject + "</name> <f1>" + toPhone + "</f1> <f2>" + fromPhone + "</f2> </message>"; The problem is that the field will contain text with XML data. It is a workaround, but I need to be able to submit that XML to the DB and get it back from there. Can I stop Rails validating my XML and replacing it with this serialized format? This is how it looks: --- !map:HashWithIndifferentAccess smil: !map:HashWithIndifferentAccess head: !map:HashWithIndifferentAccess layout: !map:HashWithIndifferentAccess root_layout: !map:HashWithIndifferentAccess height: "600" background_color: white width: "800" type: text/smil-basic-layout body: !map:HashWithIndifferentAccess par: !map:HashWithIndifferentAccess text: !map:HashWithIndifferentAccess left: "33" begin: "33" dur: "33" val: 34343434343434343aaaaaaa height: "33" width: "33" top: "33" And this is the Ruby method from the Rails web service: # POST /messages # POST /messages.xml def create @message = Message.new(params[:message]) respond_to do |format| if @message.save flash[:notice] = 'Message was successfully created.' format.html { redirect_to(@message) } format.xml { render :xml => @message, :status => :created, :location => @message } else format.html { render :action => "new" } format.xml { render :xml => @message.errors, :status => :unprocessable_entity } end end end It is a workaround, but for the moment this has to work...


  • Unable to import Eclipse project to Android studio

    - by Binoy Babu
     Whenever I try to import my Eclipse project into Android Studio I get the following error: You are using an old, unsupported version of Gradle. Please use version 1.8 or greater. Please point to a supported Gradle version in the project's Gradle settings or in the project's Gradle wrapper (if applicable.) Consult IDE log for more details (Help | Show Log). I'm using Android Studio 0.3 and Ubuntu; I also tried it on a Windows 8 box with a fresh install but get the same error. I'm using the default Gradle wrapper and I tried checking and unchecking the auto-import option. Is this a bug? How can I get around it? How do I update Gradle to 1.8 or check the current Gradle version? My build.gradle is given below. buildscript { repositories { mavenCentral() } dependencies { classpath 'com.android.tools.build:gradle:0.6.3' // I also tried using 0.6.1 and 0.5.+ } } apply plugin: 'android' dependencies { compile fileTree(dir: 'libs', include: '*.jar') } android { compileSdkVersion 18 buildToolsVersion "18.0.1" sourceSets { main { manifest.srcFile 'AndroidManifest.xml' java.srcDirs = ['src'] resources.srcDirs = ['src'] aidl.srcDirs = ['src'] renderscript.srcDirs = ['src'] res.srcDirs = ['res'] assets.srcDirs = ['assets'] } // Move the tests to tests/java, tests/res, etc... instrumentTest.setRoot('tests') // Move the build types to build-types/<type> // For instance, build-types/debug/java, build-types/debug/AndroidManifest.xml, ... // This moves them out of them default location under src/<type>/... which would // conflict with src/ being used by the main source set. // Adding new build types or product flavors should be accompanied // by a similar customization. debug.setRoot('build-types/debug') release.setRoot('build-types/release') } }


  • How does GMail implement Comet?

    - by Morgan Cheng
     With the help of HttpWatch, I tried to figure out how GMail implements Comet. I logged in to GMail with two accounts, one in IE and the other in Firefox, and chatted over GTalk in GMail with some magic words like "WASSUP". Then I logged off both GMail accounts and filtered out any HTTP content without the "WASSUP" string. The result shows which HTTP request is the streaming channel. (Note: I have to log off; otherwise the never-ending HTTP response would not show its content in HttpWatch.) The result is interesting. The URL for the stream channel is like: https://mail/channel/bind?VER=8&at=xn3j33vcvk39lkfq..... It is no surprise that GMail does Comet in IE with an IFRAME; the HTTP content starts with " Originally, I guessed that GMail does Comet in Firefox with a multipart XmlHttpRequest. To my surprise, the response headers don't include "multipart/x-mixed-replace". The response headers are as below: HTTP/1.1 200 OK Content-Type: text/plain; charset=utf-8 Cache-Control: no-cache, no-store, max-age=0, must-revalidate Pragma: no-cache Expires: Fri, 01 Jan 1990 00:00:00 GMT Date: Sat, 20 Mar 2010 01:52:39 GMT X-Frame-Options: ALLOWALL Transfer-Encoding: chunked X-Content-Type-Options: nosniff Server: GSE X-XSS-Protection: 0 Unfortunately, HttpWatch doesn't tell whether an HTTP request comes from XmlHttpRequest or not. The content is not HTML but JSON. It looks like a response to an XHR, but that would not work for Comet without multipart/x-mixed-replace, right? Is there any other way to figure out how GMail implements Comet? Thanks.
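
     The JSON body and chunked transfer-encoding are consistent with a plain streaming XHR: Firefox fires onreadystatechange with readyState 3 repeatedly as chunks arrive, so a client can read the growing responseText without multipart/x-mixed-replace. A rough sketch of that technique (the URL is illustrative, not GMail's actual endpoint):

       var xhr = new XMLHttpRequest();
       var seen = 0; // how many characters of responseText have been processed so far

       xhr.onreadystatechange = function () {
           // readyState 3 (LOADING) fires repeatedly in Firefox as chunks arrive,
           // so partial responseText can be read without multipart/x-mixed-replace.
           if (xhr.readyState === 3 || xhr.readyState === 4) {
               var chunk = xhr.responseText.substring(seen);
               seen = xhr.responseText.length;
               if (chunk) {
                   console.log("chunk received:", chunk); // each chunk would carry a JSON payload
               }
           }
       };

       xhr.open("GET", "/channel/bind?VER=8", true); // illustrative URL only
       xhr.send(null);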


  • Front End Developer v/s PHP-MySQL Engineer

    - by user301943
     Hello, I want to decide which of these would be the more viable career option. I am ready to quit my current job and hence I am looking for a new opportunity; my current job is maintenance with no more active development. My current role is PHP/MySQL developer. I understand web programming very well and am comfortable with RoR/Sinatra/Zend MVC/jQuery/JSON manipulation, etc. I understand the MySQL InnoDB/MyISAM engines and how one differs from the other. Basically, I can manage the deployment of a web application end-to-end, including configuration of Apache/Nginx servers, memcache, etc. On the other hand, I am being offered a Sr. Front End Web Developer role that would require me to write extensive cross-browser/cross-platform compliant HTML/CSS code. I understand XHTML/CSS/the box model very well. I would be working on Drupal for the management of websites. While I understand that continuing to work on server-side technologies would always be a good career path, how would the role of core front-end developer turn out? If I take this opportunity, will I eventually get a chance to focus on UCD, HCI, information architecture, etc.? So are these kinds of roles possible if I focus on front-end development? No offense to front-end developers; I just want to understand if this is something I want to gain a mastery over. I have 2 years of industry experience after graduating with an MS in Computer Science. Although I have a CS degree, if I were to take up a serious front-end role, I could probably go back and take some design/HCI/UI courses. Please advise.


  • Photo gallery not opening in Titanium 1.0

    - by user291247
     Hello, I am developing a new app using Titanium 1.0. In it I am opening the photo gallery in a new window, but it does not open. Why is this happening? Code to open the photo gallery in app.js: Titanium.App.addEventListener('recordvideo', function(e) { win1.close(); var w = Titanium.UI.createWindow({ backgroundColor:'#336699', title:'Modal Window', barColor:'black', url:'xhr_testfileupload.js' }); w.open({animated:true}); }); xhr_testfileupload.js code: var win = Titanium.UI.currentWindow; var ind=Titanium.UI.createProgressBar({ width:200, height:50, min:0, max:1, value:0, style:Titanium.UI.iPhone.ProgressBarStyle.PLAIN, top:10, message:'Uploading Image', font:{fontSize:12, fontWeight:'bold'}, color:'#888' }); win.add(ind); ind.show(); Titanium.Media.openPhotoGallery({ success:function(event) { Ti.API.info("success! event: " + JSON.stringify(event)); var image = event.media; var xhr = Titanium.Network.createHTTPClient(); xhr.onerror = function(e) { Ti.API.info('IN ERROR ' + e.error); }; xhr.onload = function() { Ti.API.info('IN ONLOAD ' + this.status + ' readyState ' + this.readyState); }; xhr.onsendstream = function(e) { ind.value = e.progress ; Ti.API.info('ONSENDSTREAM - PROGRESS: ' + e.progress); } // open the client xhr.open('POST','https://twitpic.com/api/uploadAndPost'); // send the data xhr.send({media:image,username:'fgsandford1000',password:'sanford1000',message:'check me out'}); }, cancel:function() { }, error:function(error) { }, allowImageEditing:true, });


  • scriptsharp: reference web service / strongly typed results model

    - by user175528
     With scriptsharp (Script#), is it possible to get strong typing when calling a service defined in my web app? The only ways I can see are to: 1 - use linked/shared files to shadow-copy my results classes/domain models into my Script# lib; 2 - replicate my model in the Script# lib and use AutoMapper to validate; 3 - use some .tt file to code-gen? Also, even if I can do this, how do I get around the automatic camel-casing Script# does, when my service result (asmx) won't do this? (So my JSON response will come back with UserMessage, while Script# will have changed that to userMessage.) Basically, what I am looking to achieve with Script# is better compile-time support against our domain model when calling and processing services in JavaScript, so something like this: Scriptlet public static class MyScriptlet { public static void Main() { MyService.Service1("hello", ProcessResponse);} public static void ProcessResponse(MyService.Service1ResponseData resp) { jQuery.Select('#Message').Text(resp.UserMessage); jQuery.Select('#Detail').Text(resp.UserDetail); } Service (in our web app) public class MyService { public class Service1ResponseData { public string UserMessage {get;set;} public string UserDetail {get;set;} } public Service1ResponseData Service1(string user) { return new Service1ResponseData() { UserMessage:"hi",UserDetail:"some text"}; } }


  • jQuery validator not working in unit testing

    - by Dbugger
     I have this small HTML file: <html> <head></head> <body> <form id='MyForm'> <input type='text' required /> <input type='submit' /> </form> <script src="/js/jquery-1.9.0.js"></script> <script src="/js/jquery.validate.js"></script> <script> var validator = $("#MyForm").validate(); alert(validator.form()); </script> </body> </html> This alerts me with "false", which is the expected behaviour. The problem comes when I move to unit testing with js-test-driver: TestCase("MyTests", { setUp: function() { this.myform = "<form id='MyForm'><input type='text' required /><input type='submit' /></form>"; this.validator = $(this.myform).validate(); jstestdriver.console.log("Does the form validate? " + this.validator.form()); }, test_empty: function() { }, }); This code returns the string Does the form validate? true This is a simplified version of my project, of course, but the point is that I don't seem to be able to unit test the validation module I'm developing, since the jQuery validate plugin doesn't seem to work. What am I missing?
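
     One plausible cause, offered as an assumption since the test builds the form as a detached fragment: jquery.validate ignores ":hidden" elements by default, and inputs that are not attached to the document count as hidden, so form() finds nothing to check and reports true. A sketch of a setUp that attaches the fixture first (the added name attribute and the assertion are illustrative):

       TestCase("MyTests", {
           setUp: function () {
               // A name attribute is added because jquery.validate keys rules on it.
               this.myform = "<form id='MyForm'><input type='text' name='f1' required /><input type='submit' /></form>";
               // Attach the fixture to the document so the inputs are not ":hidden".
               this.$form = $(this.myform).appendTo(document.body);
               this.validator = this.$form.validate();
           },
           tearDown: function () {
               this.$form.remove(); // clean up between tests
           },
           test_empty: function () {
               assertFalse(this.validator.form()); // an empty required field should fail validation
           }
       });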

