Search Results

Search found 17188 results on 688 pages for 'browser plugins'.


  • C# WebBrowser Invoke issue

    - by James Jeffrey
    I am logging into Facebook using a WebBrowser control. Everything works, but after I invoke the button click I need to check whether the password is correct, and that check seems to happen before the click has taken effect, which makes no sense to me because the checking code comes after the invoke.

        private void Facebook_Login(String username, String password)
        {
            webBrowser1.Url = new Uri("http://m.facebook.com");
            while (webBrowser1.ReadyState != WebBrowserReadyState.Complete)
                Application.DoEvents();

            HtmlElementCollection inputs = webBrowser1.Document.GetElementsByTagName("input");
            foreach (HtmlElement input in inputs)
            {
                if (input.GetAttribute("name") == "email")
                    input.SetAttribute("value", "[email protected]");

                if (input.GetAttribute("name") == "pass")
                    input.SetAttribute("value", "kelaroostj"); // don't worry, that pass won't work lol.

                if (input.GetAttribute("name") == "login")
                    input.InvokeMember("click");
            }

            while (webBrowser1.ReadyState != WebBrowserReadyState.Complete)
                Application.DoEvents();

            HtmlElementCollection bs = webBrowser1.Document.GetElementsByTagName("b");
            foreach (HtmlElement b in bs)
            {
                MessageBox.Show(b.InnerHtml);
            }

            Log_Message("Logged into Facebook with: [email protected]");
        }
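    A note on the likely cause, offered as a guess: InvokeMember("click") only starts a navigation, and immediately afterwards ReadyState can still read Complete for the old document, so the second busy-wait falls through and the code inspects the pre-login page. A minimal event-driven sketch, assuming a WinForms WebBrowser; the handler names are placeholders, not from the post:

        private void Facebook_Login(string username, string password)
        {
            webBrowser1.DocumentCompleted += OnLoginFormLoaded;
            webBrowser1.Navigate("http://m.facebook.com");
        }

        private void OnLoginFormLoaded(object sender, WebBrowserDocumentCompletedEventArgs e)
        {
            webBrowser1.DocumentCompleted -= OnLoginFormLoaded;
            // Fill in the email/pass fields here, then subscribe for the page the
            // click will navigate to before invoking the click.
            webBrowser1.DocumentCompleted += OnLoginResultLoaded;
            foreach (HtmlElement input in webBrowser1.Document.GetElementsByTagName("input"))
            {
                if (input.GetAttribute("name") == "login")
                    input.InvokeMember("click");
            }
        }

        private void OnLoginResultLoaded(object sender, WebBrowserDocumentCompletedEventArgs e)
        {
            webBrowser1.DocumentCompleted -= OnLoginResultLoaded;
            // Only now is the post-login document available, so the password check
            // belongs here rather than straight after the Invoke call.
        }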


  • Wrong colors when merging images with PHP

    - by OfficeJet
    Hi, I want to take image IDs and create a merged image from files according to the given IDs. This code is called by AJAX and returns the image file name (which is the server time, to prevent browser caching). Code:

        if (isset($_REQUEST['items'])){
            $req_items = $_REQUEST['items'];
        } else {
            $req_items = 'a';
        }
        $items = explode(',', $req_items);
        $bg_img = imagecreatefrompng('bg.png');
        for ($i = 0; $i < count($items); $i++){
            $main_img = $items[$i].'-large.png';
            $image = imagecreatefrompng($main_img);
            $image_tc = imagecreatetruecolor(300, 200);
            imagecopy($image_tc, $image, 0, 0, 0, 0, 300, 200);
            $black = imagecolorallocate($image_tc, 0, 0, 0);
            imagecolortransparent($image_tc, $black);
            $opacity = 100;
            $bg_width = 300;
            $bg_height = 200;
            $dest_x = 0; //$image_size[0] - $bg_width - $padding;
            $dest_y = 0; //$image_size[1] - $bg_height - $padding;
            imagecopymerge($bg_img, $image_tc, $dest_x, $dest_y, 0, 0, $bg_width, $bg_height, $opacity);
        }
        $file = $_SERVER['REQUEST_TIME'].'.jpg';
        imagejpeg($bg_img, $file, 100);
        echo $file;
        imagedestroy($bg_img);
        imagedestroy($image);
        die();

    The images are shown exactly as I want, but with wrong colors. I recently added the part with imagecreatetruecolor and imagecolortransparent and still got wrong results. I also saved the PNG itself in 24-bit format, and later as 8-bit; that didn't help either. Every idea is very welcome! Thanks
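    One thing worth checking, offered only as a guess: imagecopymerge() is known to discard PNG alpha, and imagecolortransparent() only keys out one exact color. A sketch that keeps everything truecolor and lets GD blend the PNG's own alpha channel (file names are placeholders; at 100% opacity imagecopy() gives the same result as imagecopymerge()):

        $bg = imagecreatefrompng('bg.png');
        imagealphablending($bg, true);                    // blend overlays using their alpha channel

        $overlay = imagecreatefrompng('1-large.png');
        imagecopy($bg, $overlay, 0, 0, 0, 0, 300, 200);   // full-opacity copy, alpha respected

        imagejpeg($bg, 'merged.jpg', 100);
        imagedestroy($bg);
        imagedestroy($overlay);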


  • Regular input in ASP.NET

    - by coffeeaddict
    Here's an example of a regular, standard HTML input for my radio button list:

        <label><input type="radio" name="rbRSelectionGroup" checked value="0" />None</label>
        <asp:Repeater ID="rptRsOptions" runat="server">
            <ItemTemplate>
                <div>
                    <label><input type="radio" name="rbRSelectionGroup" value='<%# ((RItem)Container.DataItem).Id %>' /><%# ((RItem)Container.DataItem).Name %></label>
                </div>
            </ItemTemplate>
        </asp:Repeater>

    I removed some things for this thread, one being that I put an "r" in place of a name I do not want to expose here, just as an FYI. Now, I would assume that this would (or should) happen:

    1. The page loads the first time and the None radio button is checked/defaulted.
    2. I select a different radio button in the list.
    3. I do an F5 refresh in my browser.
    4. The None radio button is pre-selected again after the page comes back from the refresh.

    But #4 is not happening. It retains the radio button that I selected in #2 and I don't know why. Regular HTML is stateless, so what could be holding on to this value? I want this to act like a normal input. I know the question "why not use an ASP.NET control?" will come up. There are a few reasons:

    1. The radio button list bug that everyone knows about.
    2. I just want to brush up more on standard input tags.
    3. We are not moving to MVC, so this is as close as I'll get, and that's OK because the rest of the team is on board with mixing ASP.NET controls and standard HTML controls in our pages.

    Anyway, my main question here is that I'm surprised it retains the changed selection after the refresh.
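    For what it's worth, this behaviour usually comes from the browser rather than from ASP.NET: Firefox in particular restores form-control state on a soft refresh (F5). A minimal sketch of the usual opt-out, assuming that restore is the culprit:

        <!-- autocomplete="off" asks the browser not to restore the controls'
             previous state on refresh; it can also go on a single input. -->
        <form id="form1" runat="server" autocomplete="off">
            ...
        </form>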


  • Why does my CGI script keep redirecting links to localhost?

    - by Noah Brainey
    Visit this page http://online-file-sharing.net/tos.html and click one of the bottom footer links. It redirects you to localhost in the address bar, and I have no idea why. This is in the main script that my entire website revolves around, upload.cgi:

        $ENV{PATH} = '/bin:/usr/bin';
        delete @ENV{'IFS', 'CDPATH', 'ENV', 'BASH_ENV'};
        ($ENV{DOCUMENT_ROOT}) = ($ENV{DOCUMENT_ROOT} =~ /(.*)/); # untaint.
        #$ENV{SCRIPT_NAME} = '/cgi-bin/upload.cgi';
        use lib './perlmodules';
        #use Time::HiRes 'gettimeofday';
        #my $hires_start = gettimeofday();
        my (%PREF,%TEXT) = ();

    No file is displayed when someone visits the root directory, although I have a .htaccess file saying to open my upload.cgi file, which is located in my root directory. When I point my browser directly at the CGI file it works, but it brings me to my localhost again. I'm hosting this website on my own server, which is this computer, using XAMPP, if that information helps. I'm also using DynDNS for my nameservers. I hope you can give me some insight.


  • HTMLInputElement in IE7

    - by Vladislav Qulin
    I'm writing a cross-browser extension for getting and setting the selection in inputs and textareas. This is how I write my code:

        HTMLInputElement.prototype.getSelectionRange = get_selection_range;
        HTMLInputElement.prototype.setSelection = set_selection_range;
        HTMLTextAreaElement.prototype.getSelectionRange = get_selection_range;
        HTMLTextAreaElement.prototype.setSelection = set_selection_range;

    get_selection_range and set_selection_range are the extension functions. I just wanted to replace

        someInputElement.selectionStart = a; // and a whole lot of code for
        someInputElement.selectionEnd = b;   // browser compatibility

    with just

        someInputElement.setSelection(a, b);
        someInputElement.setSelection({ start: a, end: b });
        someOtherElement.setSelection(someInputElement.getSelection());

    But then I met a couple of difficulties in IE7. First of all, IE7 doesn't know what HTMLInputElement is. I don't want to extend the whole Object; that would be the last thing I'd do, and I want to avoid it. The functions get_selection_range and set_selection_range are fine, so don't ask what's inside them, you've seen it a couple of times already. So the question is: is there any legitimate substitution for HTMLInputElement in JavaScript for IE7?
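    A possible fallback, sketched under the assumption that avoiding the missing HTMLInputElement constructor is the only goal: expose the same behaviour as plain helper functions instead of prototype extensions, since IE7 exposes neither HTMLInputElement nor DOM element prototypes at all. This is not the original extension, just the classic cross-browser pattern:

        function setSelection(el, start, end) {
            if (el.setSelectionRange) {              // standards-based browsers
                el.focus();
                el.setSelectionRange(start, end);
            } else if (el.createTextRange) {         // IE < 9
                var range = el.createTextRange();
                range.collapse(true);
                range.moveEnd('character', end);
                range.moveStart('character', start);
                range.select();
            }
        }

        // usage: setSelection(document.getElementById('someInput'), a, b);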


  • HTML + CSS: fixed background image and body width/min-width (including fiddle)

    - by insertusernamehere
    So, here is my problem; I'm kind of stuck at the moment. I have a huge background image and content in the middle with these attributes:

    - the content is centered with margin: auto and has a fixed width
    - the content is related to the image (as if the image continues within the content)
    - this relation is only horizontal (vertical scrolling moves everything around)

    This actually works fine (I'm only talking desktop here, not mobile :) ) with position: fixed on the huge background image. The problem is the following: when I resize the window to smaller than the content, the background image takes its width from the body instead of the viewport, so the relation between content and image gets lost. Now I have this little piece of JavaScript which does the trick, but it is of course overhead I want to avoid:

        $(window).resize(function(){
            img.css('left', (body.width() - img.width()) / 2 );
        });

    This works with a fixed-positioned image, but can get a little jumpy while calculating. I also tried things like this:

        <div id="test" style="
            position: absolute;
            z-index: 0;
            top: 0;
            left: 0;
            width: 100%:
            height: 100%;
            background: transparent url(content/dummy/brand_backgroud_1600_1.jpg) no-repeat fixed center top;
        "></div>

    But this brings me back to the problem described above. Is there any script-less, elegant solution for this?

    UPDATE: now with fiddles. The one I'm trying to solve: http://jsfiddle.net/insertusernamehere/wPmrm/ The one with JavaScript that works: http://jsfiddle.net/insertusernamehere/j5E8z/

    NOTE: The image size is always fixed; the image never gets scaled by the browser. In the JavaScript example it gets blown up, so don't worry about the size.


  • Facebook Application with PHP losing its session

    - by ArneRie
    I am trying to build a Facebook application based on PHP. The application runs under PHP on my own web host, inside a canvas as an iframe. I have included the newest client library for PHP from Facebook (facebook-php-sdk-94fcb13). To authorize the user inside my application I am trying to use Facebook Connect, like the example shipped with the client. Everything works fine on the first login, but when I hit F5 to reload the page, the session is lost and I have to log in again. When I call my application outside of the Facebook canvas, everything is fine. I am not sure, but I think my browser (Chrome/Firefox on Ubuntu) is not allowing a cookie to be stored inside an iframe. Does someone know a solution for this problem? Here are some parts of the source code:

        $facebook = new Facebook(array(
            'appId'  => 'x',
            'secret' => 'x',
            'cookie' => 'true',
        ));
        $session = $facebook->getSession();
        $facebook->setSession($session);
        $me = null;

        // Session based API call.
        if ($session) {
            try {
                $uid = $facebook->getUser();
                $me = $facebook->api('/me');
            } catch (FacebookApiException $e) {
                error_log($e);
            }
        }

        // login or logout url will be needed depending on current user state.
        if ($me) {
            $logoutUrl = $facebook->getLogoutUrl();
        } else {
            $loginUrl = $facebook->getLoginUrl();
        }


  • Where is the Open Source alternative to WPF?

    - by Evan Plaice
    If we've learned anything from HTML/CSS, it's that declarative languages (like XML) work best for describing user interfaces, because:

    - It's easy to build code preprocessors that can template the code effectively.
    - The code is in a well-defined, (ideally) well-structured format, so it's easy to parse.
    - The technology to effectively parse or crawl an XML-based source file already exists.
    - The UI's scripted code becomes much simpler and easier to understand.
    - It is simple enough that designers are able to design the interface themselves.
    - Programmers suck at creating UIs, so it should be made easy enough for designers.

    I recently took a look at the meat of a WPF application (i.e. the XAML) and it looks surprisingly similar to the declarative language style used in HTML. It's blindingly apparent to me that the current state of desktop UI development is largely fractured, otherwise there wouldn't be so much duplicated effort in the domain of user interfaces (e.g. GTK, XUL, Qt, WinForms, WPF, etc.). There are 45 GUI platforms for Python alone. It's painfully obvious to me that there should be a general-purpose, open source, standardized, platform-independent markup language for designing desktop GUIs, much like what the W3C made HTML/CSS into. WPF, or more specifically XAML, seems like a likely step in the right direction. Why hasn't anyone in the open source community (AFAIK) even scratched the surface of this issue? Now that the 'browser wars' are over, should we look forward to a future of 'desktop GUI wars'?

    Note: this topic is relatively subjective in its attempt to be 'future-thinking'. I think that desktop GUI development in its current state sucks (really hard) and, even though WPF is still in its infancy, it presents a likely solution to the problem. Has no one in the open source community looked into developing something similar because they don't see the value, or because it's not worth the effort?


  • How should developers cope with so many GUI configuration combinations?

    - by shawn-harrison
    These days, any decent Windows desktop application must perform well and look good under the following conditions:

    - XP, Vista and Windows 7
    - 32-bit and 64-bit
    - with and without Themes
    - with and without Aero
    - at 96 and 120 and perhaps custom DPIs
    - one or more monitors (screens)
    - each OS has its own preferred font

    Oh my! What is a lowly little Windows desktop application developer to do? :( I'm hoping to get a thread started with suggestions on how to deal with this GUI dilemma. First off, I'm on Delphi 7.

    a) Does Delphi 2010 bring anything new to the table to help with this situation?
    b) Should we pick an aftermarket component suite and rely on it to solve all these problems?
    c) Should we go with an aftermarket skinning engine?
    d) Perhaps a more HTML-style GUI is the way to go. Can we make a relatively complex GUI app with HTML that doesn't require using a browser? (I'd prefer to keep it form-based.)
    e) Should we just knuckle down, code through each one of these scenarios, and quit bitching about it?
    f) And finally, how in the world are we supposed to test all these conditions?


  • Most cost effective way to target multiple mobile platforms

    - by niidto
    Hi, I have been given the task of speccing a mobile application which will need to run on approximately 1,000 devices. These devices already exist and consist of iPhones, BlackBerrys, Androids, Windows Mobile devices and netbooks. The application will have simple reporting capability and a collection of forms. The obvious solution would be to develop a browser-based solution, although given the occasionally-connected nature of the devices, there's potential for data to get lost or not saved. So instead of creating a complex application for each platform, I was thinking we could build what is effectively a form generator with basic offline storage capability (text files), designed to run on each device. The device would generate a form based on, for example, an XML file that it could request from a server somewhere. This would result in minimal specialist development costs and the ability to run most of the logic on the server end, with the devices being dumb clients that render forms and upload the data when there is an available connection. Anyway, my question, summarised, is: how have you made the decision on supporting multiple devices for your application? Is this always an unavoidable problem where you just have to make the call to support one or two, pay for developers to write code for each platform, or alternatively supply pre-installed devices to the company? Many thanks, James


  • .htaccess Redirect Loop, adding multiple .php extensions

    - by Ryan Smith
    I have sort of a small parent/teacher social network set up for my school. I use my .htaccess file to make the links to teacher profiles cleaner and to add a trailing slash to all URLs. The problem is that when going to /teachers/teacher-name/ the link (sometimes) redirects to /teachers/teacher-name.php.php.php.php.php.php.php.php... Below is my .htaccess file. Sometimes clearing my browser cache in Chrome temporarily fixes it. I can't exactly write .htaccess syntax from scratch, but I'm pretty familiar with it. Any suggestions are appreciated!

        RewriteEngine on
        RewriteBase /

        #remove php ext
        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME}\.php -f
        RewriteRule ^([^/]+)/$ $1.php
        RewriteRule ^([^/]+)/([^/]+)/$ $1/$2.php

        #force trailing slash/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)([^/])$ /$1$2/ [L,R=301]

        #other rewrites
        RewriteRule ^teachers/([^/\.]+)/$ /teachers/profile.php?u=$1
        RewriteRule ^teachers/([^/\.]+)/posts/$ /teachers/posts.php?u=$1
        RewriteRule ^teachers/([^/\.]+)/posts/([^/\.]+)/$ /teachers/post.php?u=$1&p=$2
        RewriteRule ^gallery/([^/\.]+)/$ /gallery/album.php?n=$1
        RewriteRule ^gallery/([^/\.]+)/slideshow/$ /gallery/slideshow.php?n=$1
        RewriteRule ^gallery/([^/\.]+)/([^/\.]+)/([^/\.]+)/$ /gallery/photo.php?a=$1&p=$2&e=$3

    EDIT: I have attached a screenshot of exactly what I'm talking about.
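    One guess at where the loop comes from, offered as a sketch rather than a tested fix: the external trailing-slash redirect can run again on the internally rewritten teacher-name.php target, and the extension rules then append .php once per pass. Guarding the slash rule against .php targets and making the extension rules terminal may break the cycle:

        #remove php ext (terminal, so later rules don't re-process the rewrite)
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME}\.php -f
        RewriteRule ^([^/]+)/$ $1.php [L]

        #force trailing slash, but never on something that already maps to .php
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_URI} !\.php$
        RewriteRule ^(.*[^/])$ /$1/ [L,R=301]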


  • AjaxFileUpload returns download panel when data is JSON

    - by Tr.Crab
    I use AjaxFileUpload (http://www.phpletter.com/Our-Projects/AjaxFileUpload/) to upload a file and get a JSON-type response in Struts2 (code.google.struts2jsonresult.JSONResult), but the browser always pops up the download pane. Please give me some suggestions; thanks in advance. Here is my config in struts.xml:

        ......
        <result-type name="json" class="code.google.struts2jsonresult.JSONResult">
        ............
        <action name="doGetList" method="doGetList" class="main.java.GetListAction">
            <result type="json">
                <param name="target">jsonObject</param>
                <param name="deepSerialize">true</param>
                <param name="patterns"> -*.class</param>
            </result>
        </action>

    and the JS client:

        function ajaxFileUpload(){
            $("#loading").ajaxStart(function(){
                $(this).show();
            }).ajaxComplete(function(){
                $(this).hide();
            });

            $.ajaxFileUpload(
                {
                    url: 'doGetList.do',
                    secureuri: false,
                    fileElementId: 'uploadfile',
                    dataType: 'json',
                    success: function (data, status) {
                        if (typeof(data.error) != 'undefined') {
                            if (data.error != '') {
                                alert(data.error);
                            } else {
                                alert(data.msg);
                            }
                        }
                    },
                    error: function (data, status, e) {
                        alert(e);
                        alert(data.records);
                    }
                }
            )
            return false;
        }


  • .NET consumer of ActiveX throwing TargetParameterCountException

    - by DevSolo
    I have a .NET (3.5, Visual Studio 2008) app that hosts a visual ActiveX control (written in C++ with Visual Studio 2003). I have access to all the sources, but can't easily move the ActiveX control up to 2008. This has worked fine in the past. I made some changes to the ActiveX control and now, when calling one method on it, I'm getting a TargetParameterCountException 100% of the time. The signature of the ActiveX method is:

        LONG CMyActive::License(LPCTSTR string1, LPCTSTR string2, LONG long1, LPCTSTR string3, LPCTSTR string4);

    When viewing the method in the object browser or Reflector, .NET sees it as:

        public virtual int License(string string1, string string2, int long1, string string3, string string4)

    I renamed the parameters for demonstration purposes (the boss gets twitchy about any code), but left the method name, as it could be relevant. There are method calls prior to this one that work. I just can't seem to figure out why I'm suddenly getting this exception. The HRESULT is 0x8002000e, and a quick search seems to indicate that's a general one. Thanks to all for reading.


  • How can I use JSONP to download client-side javascript objects?

    - by Alex Mcp
    I'm trying to get client-side JavaScript objects saved as a file locally. I'm not sure if this is possible. The basic architecture is this:

    1. Ping an external API to get back a JSON object.
    2. Work client-side with that object, and eventually have a "download me" link.
    3. This link sends the data to my server, which processes it and sends it back with MIME type application/json, which (should) prompt the user to download the file locally.

    Right now here are my pieces.

    Server-side code:

        <?php
        $data = array('zero', 'one', 'two', 'testing the encoding');
        $json = json_encode($data);
        //$json = json_encode($_GET['']); //eventually I'll encode their data, but I'm testing
        header("Content-type: application/json");
        header('Content-Disposition: attachment; filename="backup.json"');
        echo $_GET['callback'] . ' (' . $json . ');';
        ?>

    Relevant client-side code:

        $("#download").click(function(){
            var json = JSON.stringify(collection); //serializes their object
            $.ajax({
                type: "GET",
                url: "http://www.myURL.com/api.php?callback=?", //this is the above script
                dataType: "jsonp",
                contentType: 'jsonp',
                data: json,
                success: function(data){
                    console.log( "Data Received: " + data[3] );
                }
            });
            return false;
        });

    Right now when I visit the api.php page with Firefox, it prompts a download of download.json, which results in this text file, as expected:

        (["zero","one","two","testing the encoding"]);

    And when I click #download to run the AJAX call, Firebug logs:

        Data Received: testing the encoding

    which is almost what I'd expect. I'm receiving the JSON string and serializing it, which is great. I'm missing two things (the actual questions):

    1. What do I need to do to get the same prompt-to-download behavior that I get when I visit the page in a browser? (Much simpler.)
    2. How do I access, server-side, the JSON object being sent to the server, to serialize it? I don't know what index it is in the GET array. (Silly, I know, but I've tried almost everything.)
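    On question 1, a common workaround, sketched here as an assumption about the setup (the field name "data" is invented for illustration): an XHR/JSONP response can never trigger the save dialog, so submit the JSON through a plain form instead and let the existing Content-Disposition: attachment header do the rest; server-side it would then arrive as $_POST['data'].

        $("#download").click(function () {
            var json = JSON.stringify(collection);
            $("<form>", { method: "POST", action: "http://www.myURL.com/api.php" })
                .append($("<input>", { type: "hidden", name: "data", value: json }))
                .appendTo("body")
                .submit();          // a normal navigation, so the browser offers the download
            return false;
        });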


  • Emptying the datastore in GAE

    - by colwilson
    I know what you're thinking, 'Oh, not that again!', but here we are, since Google has not yet provided a simpler method. I have been using a queue-based solution which worked fine:

        import datetime
        from models import *

        DELETABLE_MODELS = [Alpha, Beta, AlphaBeta]

        def initiate_purge():
            for e in config.DELETABLE_MODELS:
                deferred.defer(delete_entities, e, 'purging', _queue = 'purging')

        class NotEmptyException(Exception):
            pass

        def delete_entities(e, queue):
            try:
                q = e.all(keys_only=True)
                db.delete(q.fetch(200))
                ct = q.count(1)
                if ct > 0:
                    raise NotEmptyException('there are still entities to be deleted')
                else:
                    logging.info('processing %s completed' % queue)
            except Exception, err:
                deferred.defer(delete_entities, e, then, queue, _queue = queue)
                logging.info('processing %s deferred: %s' % (queue, err))

    All this does is queue a request to delete some data (once for each class); then, if the queued process either fails or knows there is still some stuff to delete, it re-queues itself. This beats the heck out of hitting refresh in a browser for 10 minutes. However, I'm having trouble deleting AlphaBeta entities; there are always a few left at the end. I think it's because the model contains reference properties:

        class AlphaBeta(db.Model):
            alpha = db.ReferenceProperty(Alpha, required=True, collection_name='betas')
            beta = db.ReferenceProperty(Beta, required=True, collection_name='alphas')

    I have tried deleting the indexes relating to these entity types, but that did not make any difference. Any advice would be appreciated, please.


  • WCF webHttpBinding with jQuery AJAX - removing/working around same origin policy

    - by csauve
    So I'm trying to create a C# WCF REST service that is called by jQuery. I've discovered that jQuery requires AJAX calls to be made under the same-origin policy. I have a few questions about how I might proceed. I am already aware of:

    1. The hacky solution of JSONP with a server callback.
    2. The way-too-much-server-overhead solution of a cross-domain proxy.
    3. Using Flash in the browser to make the call and setting up crossdomain.xml at my WCF server root.

    I'd rather not use these because:

    1. I don't want to use JSON, or at least I don't want to be restricted to using it.
    2. I would like to separate the server that serves static pages from the one that serves application state.
    3. Flash, in this day and age, is out of the question.

    What I'm thinking: is there anything like Flash's crossdomain.xml file that works for jQuery? Is this same-origin policy a part of jQuery, or is it a restriction in specific browsers? If it's just a part of jQuery, maybe I'll try digging into the code to work around it.
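    For the last question: the same-origin policy is enforced by the browser's XMLHttpRequest, not by jQuery, so patching the jQuery source won't lift it. One server-side sketch, offered as an assumption about the hosting (the origin value is a placeholder, and it only helps in browsers that implement CORS): have the host serving the WCF endpoints emit an Access-Control-Allow-Origin header, e.g. from Global.asax:

        // Hypothetical Global.asax.cs snippet for the application hosting the service.
        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            HttpContext.Current.Response.AddHeader(
                "Access-Control-Allow-Origin", "http://www.my-static-site.example");
        }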


  • Problem with sending cookies with file_get_contents

    - by Ikke
    Hi, I'm trying to get the contents of another file with file_get_contents (don't ask why). I have two files: test1.php and test2.php. test1.php returns a string based on the user that is logged in. test2.php tries to get the contents of test1.php and is executed by the browser, thus getting the cookies. To send the cookies with file_get_contents, I create a streaming context:

        $opts = array('http' => array('header'=> 'Cookie: ' . $_SERVER['HTTP_COOKIE']."\r\n"));

    I'm retrieving the contents with:

        $contents = file_get_contents("http://www.domain.com/test1.php", false, $opts);

    But now I get the error:

        Warning: file_get_contents(http://www.domain.com/test1.php) [function.file-get-contents]: failed to open stream: HTTP request failed! HTTP/1.1 404 Not Found

    Does somebody know what I'm doing wrong here?

    Edit: I forgot to mention: without the streaming context, the page just loads, but without the cookies I don't get the info I need.
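    One detail that stands out, noted as a guess rather than a certain diagnosis of the 404: the third argument of file_get_contents() has to be a context resource created by stream_context_create(), not the raw options array. A minimal sketch of that pattern:

        $opts = array('http' => array('header' => 'Cookie: ' . $_SERVER['HTTP_COOKIE'] . "\r\n"));
        $context = stream_context_create($opts);                  // wrap the options array
        $contents = file_get_contents('http://www.domain.com/test1.php', false, $context);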


  • Sorting by custom field and fetching whole tree from DB

    - by Niaxon
    Hello everyone, I am trying to build a file browser in tree form and have a problem sorting it. I use PHP and MySQL for this. I've created a mixed (nested set + adjacency) table 'element' with the following fields: element_id, left_key, right_key, level, parent_id, element_name, element_type (enum: 'folder','file'), element_size. Let's not discuss right now whether it would be better to move the information about an element (name, type, size) into another table. The function that scans a specified directory and fills the table works correctly. Noteworthy: I am adding elements to the tree in a specific order, folders first and then files. After that I can easily fetch and display the whole table on the page using a simple query:

        SELECT * FROM element WHERE 1=1 ORDER BY left_key

    With the result of that query and another function I can generate correct HTML code (<ul><li>... and so on) to display the tree. Now back to the question (finally, huh?). I am struggling to add sorting functionality. For example, I want to order my result by size. Here I need to keep in mind the whole hierarchy of the tree and the rule: folders first, files later. I believe I can do that by generating a recursive query in PHP:

        SELECT * FROM element WHERE parent_id = {$parentId}
        ORDER BY element_type (so folders would be first), size (or name for example) asc/desc

    After that, for each result which has type = 'folder', I will send another query to get its contents. It's also possible to fetch the whole tree by left_key and then sort it in PHP as an array, but I guess that would be worse :) I wonder if there is a better and more efficient way to do such a thing?


  • Console Errors - Not a Jquery Guru Yet

    - by user2528902
    I am hoping that someone can help me correct some issues that I am having with a custom script. I took over the management of a site and there seems to be an issue with the following code:

        /* jQUERY CUSTOM FUNCTION ------------------------------ */
        jQuery(document).ready(function($) {
            $('.ngg-gallery-thumbnail-box').mouseenter(function(){
                var elmID = "#"+this.id+" img";
                $(elmID).fadeOut(300);
            });
            $('.ngg-gallery-thumbnail-box').mouseleave(function(){
                var elmID = "#"+this.id+" img";
                $(elmID).fadeIn(300);
            });

            var numbers = $('.ngg-gallery-thumbnail-box').size();

            function A(i){
                setInterval(function(){autoSlide(i)}, 7000);
            }
            A(0);

            function autoSlide(i) {
                var numbers = $('.ngg-gallery-thumbnail-box').size();
                var elmCls = $("#ref").attr("class");
                $(elmCls).fadeIn(300);
                var randNum = Math.floor((Math.random()*numbers)+1);
                var elmClass = ".elm"+randNum+" img";
                $("#ref").attr("class", elmClass);
                $(elmClass).fadeOut(300);
                setInterval(function(){arguments.callee.caller(randNum)}, 7000);
            }
        });

    The error that I am seeing in the Firebug console is "TypeError: arguments.callee.caller is not a function". I am just getting started with jQuery and have no idea how to fix this issue. Any assistance with altering the code so that it still works but doesn't throw all of these errors (if I load the site and let it sit in my browser for 10 minutes I have over 10,000 errors in the console) would be greatly appreciated!
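    One way to make the error go away is to stop relying on arguments.callee.caller altogether: give the slide function a name and schedule it directly. This is only a sketch based on the snippet above; it also switches to setTimeout so a new interval isn't piled on at every tick:

        function autoSlide() {
            var numbers  = $('.ngg-gallery-thumbnail-box').size();
            $($("#ref").attr("class")).fadeIn(300);     // bring back the previously hidden image
            var randNum  = Math.floor(Math.random() * numbers) + 1;
            var elmClass = ".elm" + randNum + " img";
            $("#ref").attr("class", elmClass);
            $(elmClass).fadeOut(300);
            setTimeout(autoSlide, 7000);                // schedule the next run by name
        }
        setTimeout(autoSlide, 7000);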


  • Is there a mechanism to distribute an app with its own JRE?

    - by user179997
    These fine folks are my users: http://www.youtube.com/watch?v=o4MwTvtyrUQ If you don't want to enjoy the video, here is the gist: my users can't tell a file from a folder, or a browser from a web site. I need to create a Java web app (Tomcat or Jetty) and deploy it on as many of their computers as possible, Windows and Mac. The question is: is there a mechanism to distribute an app with its own JRE? (In the Tcl world there are starpacks and starkits, in the Python world there's py2exe and others; that's the idea.) And also, is it legal? I know the VM is open source, but I'm not clear about the libraries, and I know about GNU Classpath but I don't know if all the packages are there. I don't want to depend on the installed JRE or on the user having enough privileges to install one. On the Mac I don't want to depend on Apple (I had to switch from Tiger to Snow Leopard just to have Java 1.6; I can't put my users in that position). Any info greatly appreciated. Thanks! jb

    Edit: I'm wondering if I can just paste the JRE folder under my app folder. Is that allowed?


  • Intent provided by Cursor is not fired correctly (LiveFolders)

    - by Felix
    In my desperation trying to get LiveFolders working, I have tried the following in my live folder's ContentProvider:

        public Cursor query(Uri uri, String[] projection, String selection,
                            String[] selectionArgs, String sortOrder) {
            MatrixCursor mc = new MatrixCursor(new String[] {
                    LiveFolders._ID,
                    LiveFolders.NAME,
                    LiveFolders.INTENT
            });
            Intent i = null;
            for (int j = 0; j < 5; j++) {
                i = new Intent(Intent.ACTION_VIEW, Uri.parse("http://www.google.com/"));
                mc.addRow(new Object[] { j, "hello", i });
            }
            return mc;
        }

    By all rights, this should launch the browser and display the Google homepage when clicking an item in the live folder. But it doesn't; it gives an "Application is not installed on your phone" error. No, I'm not defining a base intent for my live folder. logcat says:

        I/ActivityManager( 74): Starting activity: Intent { act=android.intent.action.VIEW dat=Intent { act=android.intent.action.VIEW dat=http://www.google.com/ } flg=0x10000000 }

    It seems it embeds the Intent I give it in the data section of the actually fired Intent. Why is it doing this? I'm really starting to believe it's a platform bug.


  • jQuery AJAX Redirection problem

    - by meosoft
    Hello, please consider this: on page A I have a link that takes you to page B when JS is off, but when JS is on, I want to replace the content of the current page with content from page B. Pages A and B are in fact the same script, which is able to tell AJAX calls from regular ones and serve the content appropriately. Everything works fine as long as there are no redirects involved. But sometimes there is a 301 redirect, and what seems to be happening is that the client browser then makes a second request, which returns 200 OK. Only the second request is sent without an X-Requested-With header, so I cannot tell within my script whether it came from AJAX or not, and I end up sending a complete page instead of just the content. I have tried checking for the 301 status code in my error, success and complete handlers, but none of them worked. The redirect seems to be handled behind the scenes. Could anyone help me with this? jQuery 1.4, PHP 5.

    Edit: people requested the code for this, which I didn't think was necessary, but here goes:

        // hook up menu ajax loading
        $('#menu a').live("click", function(){
            // update menu highlight
            if($(this).parents('#menu').size() > 0){
                $("#menu>li").removeClass("current_page_item");
                $(this).parent().addClass("current_page_item");
            }

            // get the URL where we will be retrieving content from
            var url = $(this).attr('href');
            window.location.hash = hash = url;

            $.ajax({
                type: "GET",
                url: url,
                success: function(data){
                    // search for an ID that is only present if page is requested directly
                    if($(data).find('#maincontent').size() > 0){
                        data = $(data).find('#maincontent .content-slide *').get();
                    }
                    // the rest is just animating the content into view
                    $("#scroller").html(data);
                    $('.content-slide').each(setHeight);
                    $('.content-slide').animate({ left: "0px" }, 1000, 'easeOutQuart', function(){
                        $('#home').css("left", "-760px").html(data);
                        $('#scroller').css("left", "-760px");
                        $('.content-slide').each(setHeight);
                    });
                }
            });
            return false;
        });
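    A pragmatic fallback, sketched as an assumption about the server side: since the browser replays a 301 transparently and the custom X-Requested-With header is dropped on the follow-up request, pass an explicit query-string flag (which typically survives the redirect, provided the redirect preserves the query string) and have the PHP side accept either signal. The "ajax" parameter name here is invented for illustration:

        $.ajax({
            type: "GET",
            url: url,
            data: { ajax: 1 },      // hypothetical flag, checked server-side alongside X-Requested-With
            success: function (data) {
                // ... existing handling ...
            }
        });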


  • Does anchor href of an image make it download?

    - by matthewsteiner
    So I was using YSlow for Firefox and my page weight was way too high. My page has a main product image and then maybe 10 thumbnails. If you click a thumbnail, the image opens in a popup done through jQuery. The problem is that YSlow is listing even the targets of the thumbnails as part of the page weight, so I guess the images are downloading for some reason. For example, I have:

        <a class="group nyroModal" rel="lightbox-group" href="/upload/topview.jpg">
            <img alt="thumbnail" src="/upload/thumb/t_topview.jpg" />
        </a>

    Would this normal HTML cause /upload/topview.jpg to automatically download, or is it the jQuery plugin nyroModal? I'd rather the images didn't preload; that would waste a lot of bandwidth. So my question is: does a browser automatically try to download image files that are in the href attribute of anchors, or is the plugin most likely causing this? Thanks for any direction you can give me.


  • Can not access response.body inside after filter block in Sinatra 1.0

    - by Petr Vostrel
    I'm struggling with a strange issue. According to http://github.com/sinatra/sinatra (section Filters), a response object is available in after filter blocks in Sinatra 1.0. However, while response.status is correctly accessible, I cannot see a non-empty response.body from my routes inside the after filter. I have this rackup file, config.ru:

        require 'app'
        run TestApp

    Then the Sinatra 1.0.b gem installed using:

        gem install --pre sinatra

    And this is my tiny app with a single route, app.rb:

        require 'rubygems'
        require 'sinatra/base'

        class TestApp < Sinatra::Base
          set :root, File.dirname(__FILE__)

          get '/test' do
            'Some response'
          end

          after do
            halt 500 if response.empty? # used 500 just for illustration
          end
        end

    Now I would like to access the response inside the after filter. When I run this app and access the /test URL, I get a 500 response as if the response were empty, but the response clearly is 'Some response'. Along with my request to /test, a separate request to /favicon.ico is issued by the browser, and that returns 404 as there is no route nor a static file; there I would expect the 500 status to be returned, since the response should be empty. In the console I can see that, within the after filter, the response to /favicon.ico is something like 'Not found' while the response to /test really is empty, even though a response is returned by the route. What am I missing?


  • Facebook offline access step-by-step

    - by Ben
    After searching literally a whole day on Facebook and Google for an up-to-date and working way to do something seemingly simple: I am looking for a step-by-step explanation of how to get offline_access for a user of a Facebook app and then use it (the session key) to retrieve friends and profile data offline, not from within a browser, preferably via the Facebook Java API. Thanks. And yes, I did check the Facebook wiki.

    Update: anyone? This:

        http://www.facebook.com/authorize.php?api_key=<api-key>&v=1.0&ext_perm=offline_access

    gives me offline_access, but how do I retrieve the session_key? Why can't Facebook just write simple documentation? I mean, there are like 600 people working there. The seemingly same question, http://stackoverflow.com/questions/617043/getting-offlineaccess-to-work-with-facebook, does not answer how to retrieve the session key.

    Edit: I am still stuck with this. I guess nobody has really tried such batch access yet...

