Search Results

Search found 13797 results on 552 pages for 'browser madness'.

Page 523/552 | < Previous Page | 519 520 521 522 523 524 525 526 527 528 529 530  | Next Page >

  • Just a small problem regarding a JavaScript BOM question

    - by caramel1991
    The exercise is this: create a page with a number of links, then write code that fires on the window onload event and displays the href of each link on the page. This is my solution:

        <html>
        <body language="Javascript" onload="displayLink()">
        <a href="http://www.google.com/">First link</a>
        <a href="http://www.yahoo.com/">Second link</a>
        <a href="http://www.msn.com/">Third link</a>
        <script type="text/javascript" language="Javascript">
        function displayLink() {
            for (var i = 0; document.links[i]; i++) {
                alert(document.links[i].href);
            }
        }
        </script>
        </body>
        </html>

    This is the answer provided by the book:

        <html>
        <head>
        <script language="JavaScript" type="text/javascript">
        function displayLinks() {
            var linksCounter;
            for (linksCounter = 0; linksCounter < document.links.length; linksCounter++) {
                alert(document.links[linksCounter].href);
            }
        }
        </script>
        </head>
        <body onload="displayLinks()">
        <A href="link0.htm">Link 0</A>
        <A href="link1.htm">Link 2</A>
        <A href="link2.htm">Link 2</A>
        </body>
        </html>

    Before I reached the tutorial section on checking the user's browser version, I was using the same approach as the book's example: looping against the length property of the links collection. After reading the tutorial I realised there is an alternative: write the loop so that the test condition evaluates to true only while document.links[i] returns a valid value. Is my code written in a valid way? If not, any comments on how to write it better? Correct me if I'm wrong, but I've heard people say that good code is not judged solely on whether it works, but also on its speed, how easy it is to comprehend, and whether others can understand it easily. Is that true?
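    For comparison, here is a minimal sketch of the same exercise written with an unobtrusive load handler instead of an onload attribute; it is not from the book, and it uses the length-based loop, relying on document.links being a live collection of every linked <a> and <area> element.

        // Sketch: log every link's href once the page has finished loading.
        window.addEventListener("load", function () {
            for (var i = 0; i < document.links.length; i++) {
                // document.links[i].href is the fully resolved URL of the i-th link
                console.log(document.links[i].href);
            }
        });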

    Read the article

  • Ajax doesn't work on a remote server

    - by Nuha
    Hello. I implemented a chat function that uses Ajax to send messages from one file to another. It works fine on localhost, but when I upload it to the remote server it stops working. Can you tell me why? Does Ajax need any special configuration? Here is the Ajax code:

        function Ajax_Send(GP, URL, PARAMETERS, RESPONSEFUNCTION) {
            var xmlhttp;
            try { xmlhttp = new ActiveXObject("Msxml2.XMLHTTP"); }
            catch (e) {
                try { xmlhttp = new ActiveXObject("Microsoft.XMLHTTP"); }
                catch (e) {
                    try { xmlhttp = new XMLHttpRequest(); }
                    catch (e) { alert("Your Browser Does Not Support AJAX"); }
                }
            }

            err = "";
            if (GP == undefined) err = "GP ";
            if (URL == undefined) err += "URL ";
            if (PARAMETERS == undefined) err += "PARAMETERS";
            if (err != "") { alert("Missing Identifier(s)\n\n" + err); return false; }

            xmlhttp.onreadystatechange = function() {
                if (xmlhttp.readyState == 4) {
                    if (RESPONSEFUNCTION == "") return false;
                    eval(RESPONSEFUNCTION(xmlhttp.responseText));
                }
            };

            if (GP == "GET") {
                URL += "?" + PARAMETERS;
                xmlhttp.open("GET", URL, true);
                xmlhttp.send(null);
            }

            if (GP = "POST") {
                PARAMETERS = encodeURI(PARAMETERS);
                xmlhttp.open("POST", URL, true);
                xmlhttp.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
                xmlhttp.setRequestHeader("Content-length", PARAMETERS.length);
                xmlhttp.setRequestHeader("Connection", "close");
                xmlhttp.send(PARAMETERS);
            }
        }
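    A works-on-localhost-but-not-on-the-server failure is often a path or server-side error rather than an Ajax problem, so it helps to look at the HTTP status instead of only readyState. The snippet below is a hedged diagnostic sketch, not part of the asker's code; the relative URL chat/send.php is a made-up placeholder.

        // Diagnostic sketch: surface the HTTP status so a 404 (wrong path on the
        // server) or 500 (server-side script error) becomes visible.
        var xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                        : new ActiveXObject("Microsoft.XMLHTTP");
        xhr.open("GET", "chat/send.php?msg=test", true);  // relative URL, placeholder path
        xhr.onreadystatechange = function () {
            if (xhr.readyState == 4) {
                if (xhr.status == 200) {
                    alert("OK: " + xhr.responseText);
                } else {
                    alert("Request failed with HTTP status " + xhr.status);
                }
            }
        };
        xhr.send(null);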

    Read the article

  • Error with redirect using a listener in JSF 2.0

    - by Ray
    I have an index.xhtml page:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml"
              xmlns:h="http://java.sun.com/jsf/html"
              xmlns:f="http://java.sun.com/jsf/core"
              xmlns:ui="http://java.sun.com/jsf/facelets">
            <f:view>
                <ui:insert name="metadata" />
                <f:event type="preRenderView" listener="#{item.show}" />
                <h:body></h:body>
            </f:view>
        </html>

    and this method in a session-scoped bean:

        public void show() throws IOException, DAOException {
            ExternalContext externalContext = FacesContext.getCurrentInstance()
                    .getExternalContext();
            //smth
            String rootPath = externalContext.getRealPath("/");
            String realPath = rootPath + "pages\\template\\body\\list.xhtml";
            externalContext.redirect(realPath);
        }

    I expect this to redirect to the next page, but instead I get a "browser can't show page" error. Here is list.xhtml (if I make it the welcome page there is no error, so the problem seems to be connected with the redirect):

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml"
              xmlns:h="http://java.sun.com/jsf/html"
              xmlns:f="http://java.sun.com/jsf/core"
              xmlns:ui="http://java.sun.com/jsf/facelets">
            <h:body>
                <ui:composition template="/pages/layouts/mainLayout.xhtml">
                    <ui:define name="content">
                        <h:form></h:form>
                    </ui:define>
                </ui:composition>
            </h:body>
        </html>

    There are no errors in the console. In web.xml I have:

        <welcome-file-list>
            <welcome-file>index.xhtml</welcome-file>
        </welcome-file-list>
        <servlet>
            <servlet-name>Faces Servlet</servlet-name>
            <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
            <load-on-startup>1</load-on-startup>
        </servlet>
        <servlet-mapping>
            <servlet-name>Faces Servlet</servlet-name>
            <url-pattern>*.xhtml</url-pattern>
        </servlet-mapping>

    What can be the reason for this problem?

    Read the article

  • WPF ClickOnce Bootstrap Detection Failure on One Machine

    - by Dexter Morgan
    Hello Friend, I've decided to use ClickOnce technology to deploy my new WPF application. By and large, ClickOnce works as advertised, but I've hit a minor glitch regarding bootstrapping and framework detection. Some background:

        - I'm using the standard Visual Studio-generated publish.htm page as my launch page.
        - The only prerequisite is the .NET Framework 4.0 Client Profile.
        - All clients are using IE 8.
        - All clients already have the .NET 4.0 Client Profile installed.

    ClickOnce works as advertised on the vast majority of machines. The VS-generated JScript correctly detects that the framework is installed and presents the user with a Run button, and the app launches just fine. I'm getting odd results on one of the machines, however. On the offending machine, the VS-generated JScript tells the user that the prerequisites may not be installed -- or rather, it FAILS to detect that the framework is already installed. The "launch" link successfully launches the application, but the Run link points to the bootstrapper setup.exe. Why is it failing to detect the framework on this one machine?

    It occurred to me that framework detection is largely a matter of examining the user-agent string submitted by the browser. So, below are two user-agent strings. The first is from a machine where things are working properly; the second is from the offending machine.

    THIS ONE WORKS:

        2011-01-11 15:14:14 W3SVC1 192.168.0.36 GET /publish.htm - 80 - 72.130.187.100 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+6.0;+Trident/4.0;+SLCC1;+.NET+CLR+2.0.50727;+Media+Center+PC+5.0;+.NET+CLR+3.5.21022;+.NET+CLR+3.5.30729;+.NET+CLR+3.0.30729;+.NET4.0C) 304 0 0

    THIS ONE DOESN'T:

        2011-01-11 18:49:12 W3SVC1 192.168.0.36 GET /publish.htm - 80 - 76.212.204.169 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+6.1;+WOW64;+Trident/4.0;+GTB6.6;+SLCC2;+.NET+CLR+2.0.50727;+.NET+CLR+3.5.30729;+.NET+CLR+3.0.30729;+Media+Center+PC+6.0;+.NET4.0C) 200 0 0

    The user-agent string of both machines clearly states "hey, the .NET 4.0 Client Profile is installed here" -- yet the script on the second machine seems unable to detect it. I don't know enough about user-agent strings to understand why the former works and the latter fails. The only difference, as far as I can tell, is that the offending machine is running 64-bit. But that shouldn't make a difference. Should it? Any ideas? Dexter Morgan

    Read the article

  • Rails 3 Atom Feed

    - by scud bomb
    I'm trying to create an Atom feed in Rails 3. When I refresh my browser I see basic XML, not the Atom feed I'm looking for.

        class PostsController < ApplicationController
          # GET /posts
          # GET /posts.xml
          def index
            @posts = Post.all
            respond_to do |format|
              format.html  # index.html.erb
              format.xml   { render :xml => @posts }
              format.atom
            end
          end
        end

    index.atom.builder:

        atom_feed do |feed|
          feed.title "twoconsortium feed"
          @posts.each do |post|
            feed.entry(post) do |entry|
              entry.title post.title
              entry.content post.text
            end
          end
        end

    localhost:3000/posts.atom looks like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <feed xml:lang="en-US" xmlns="http://www.w3.org/2005/Atom">
          <id>tag:localhost,2005:/posts</id>
          <link rel="alternate" type="text/html" href="http://localhost:3000"/>
          <link rel="self" type="application/atom+xml" href="http://localhost:3000/posts.atom"/>
          <title>my feed</title>
          <entry>
            <id>tag:localhost,2005:Post/1</id>
            <published>2012-03-27T18:26:13Z</published>
            <updated>2012-03-27T18:26:13Z</updated>
            <link rel="alternate" type="text/html" href="http://localhost:3000/posts/1"/>
            <title>First post</title>
            <content>good stuff</content>
          </entry>
          <entry>
            <id>tag:localhost,2005:Post/2</id>
            <published>2012-03-27T19:51:18Z</published>
            <updated>2012-03-27T19:51:18Z</updated>
            <link rel="alternate" type="text/html" href="http://localhost:3000/posts/2"/>
            <title>Second post</title>
            <content>its that second post type stuff</content>
          </entry>
        </feed>

    Read the article

  • PHP download file slows down

    - by hobbywebsite
    OK, first off, thanks for your time -- I wish I could give more than one point for this question. The problem: I have some music files (.mp3) on my site, and I am using a PHP file to increment a database counter for the number of downloads and then point to the file to download. For some reason this method starts at 350 kB/s, then slowly drops to 5 kB/s, at which point the file says it will take 11 hours to complete. BUT if I go directly to the .mp3 file, my browser brings up a player, and then I can right-click and "Save as", which works fine -- the download completes in 3 minutes. (Yes, both during the same time period, for those thinking it's my connection or ISP; and it's not my server either.) The only things I've been playing around with recently are the php.ini and .htaccess files. So without further ado, here are the PHP file, php.ini, and .htaccess:

    download.php:

        <?php
        include("config.php");
        include("opendb.php");

        $filename = 'song_name';
        $filedl = $filename . '.mp3';

        $query = "UPDATE songs SET song_download=song_download+1 WHER song_linkname='$filename'";
        mysql_query($query);

        header('Content-Disposition: attachment; filename='.basename($filedl));
        header('Content-type: audio/mp3');
        header('Content-Length: ' . filesize($filedl));
        readfile('/music/' . $filename . '/' . $filedl);

        include("closedb.php");
        ?>

    php.ini:

        register_globals = off
        allow_url_fopen = off
        expose_php = Off
        max_input_time = 60
        variables_order = "EGPCS"
        extension_dir = ./
        upload_tmp_dir = /tmp
        precision = 12
        SMTP = relay-hosting.secureserver.net
        url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=,fieldset="
        ; Defines the default timezone used by the date functions
        date.timezone = "America/Los_Angeles"

    .htaccess:

        Options +FollowSymLinks
        RewriteEngine on
        RewriteCond %{HTTP_HOST} !^(www.MindCollar.com)?$ [NC]
        RewriteRule (.*) http://www.MindCollar.com/$1 [R=301,L]
        <IfModule mod_rewrite.c>
            RewriteEngine On
            ErrorDocument 404 /errors/404.php
            ErrorDocument 403 /errors/403.php
            ErrorDocument 500 /errors/500.php
        </IfModule>
        Options -Indexes
        Options +FollowSymlinks
        <Files .htaccess>
            deny from all
        </Files>

    Thanks for your time.

    Read the article

  • How to use Mozilla ActiveX Control without registry

    - by Andrew McKinlay
    I've been using the IE Browser component that is part of Windows. But I'm running into problems with security settings. For example, users get security warnings on pages with Javascript. So I'm looking at using the Mozilla ActiveX control instead. It's especially nice because it has a compatible interface. It works well if I let it install the control in the registry. But my users don't always have administrator rights to install things in the registry. So I'm trying to figure out how to use the control without registry changes. I'm using DllGetClassObject to get the class factory (IID_ICLASSFACTORY) and then CoRegisterClassObject to register it. All the API calls appear to succeed. And when I create an AtlAxWin window with the CLSID, it also appears to work. But when I try to call Navigate on the AtlAxGetControl it doesn't work - the interface doesn't have Navigate. I would show the code but it's in an obscure language (Suneido) so it wouldn't mean much. An example in C or C++ would be easy for me to translate. Or an example in another dynamic language like Python or Ruby might be helpful. Obviously I'm doing something wrong. Maybe I'm passing the wrong thing to CoRegisterClassObject? The MSDN documentation isn't very clear on what to pass and I haven't found any good examples. Or if there is another approach, I'm ok with that too. Note: I'm using the AtlAxWin window class so I'm not directly creating the control and can't use this approach. Another option is registry free com with a manifest. But again, I couldn't find a good example, especially since I'm not using Visual Studio. I tried to use the MT manifest tool, but couldn't figure it out. I don't think I can use DLL redirection since that doesn't get around the registry issue AFAIK. Another possibility is using WebKit but it seems even harder to use.

    Read the article

  • How to fix my jQuery code in IE? It works in Firefox.

    - by scott jarvis
    I am using jQuery to show/hide a div container (#pluginOptionsContainer) and load a page (./plugin_options.php) inside it, with the required POST vars sent. Which POST data is sent is based on the value of a select list (#pluginDD) and the click of a button (#pluginOptionsBtn). It works fine in Firefox, but doesn't work in IE. The '$("#pluginOptionsContainer").load()' request never seems to finish in IE -- I only see the loading message forever. bind(), empty() and append() all seem to work fine in IE, but not load(). Here is my code:

        // wait for the DOM to be loaded
        $(document).ready(function() {
            // hide the plugin options
            $('#pluginOptionsContainer').hide();

            // This is the hack for IE
            if ($.browser.msie) {
                $("#pluginDD").click(function() {
                    this.blur();
                    this.focus();
                });
            }

            // set the main function
            $(function() {
                // the button shows/hides the plugin options page (and its container)
                $("#pluginOptionsBtn").click(function() {
                    // show the container of the plugin options page
                    $('#pluginOptionsContainer').empty().append('<div style="text-align:center;width:99%;">Loading...</div>');
                    $('#pluginOptionsContainer').toggle();
                });

                // set the loading message if the user changes the selection with either the dropdown or the button
                $("#pluginDD,#pluginOptionsBtn").bind('change', function() {
                    $('#pluginOptionsContainer').empty().append('<div style="text-align:center;width:99%;">Loading...</div>');
                });

                // then update the page when EITHER the plugin button or dropdown is clicked or changed
                $("#pluginDD,#pluginOptionsBtn").bind('change click', function() {
                    // set form fields as vars in js
                    var pid = <?=$pid;?>;
                    var cid = <?=$contentid;?>;
                    var pDD = $("#pluginDD").val();

                    // add post vars (must use JSON) to be sent into the js var 'dataString'
                    var dataString = {plugin_options: true, pageid: pid, contentid: cid, pluginDD: pDD };

                    // include the plugin option page inside the container, with the required values already added into the query string
                    $("#pluginOptionsContainer").load("/admin/inc/edit/content/plugin_options.php#pluginTop", dataString);

                    // add this to stop page refresh
                    return false;
                }); // end submit function
            }); // end main function
        }); // on DOM load

    Any help would be GREATLY appreciated! I hate IE!
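    One way to see what IE is actually doing is to swap the load() call for an explicit $.ajax with success and error callbacks, so a silent failure shows up. This is a hedged debugging sketch using the same URL and data object as above, not a confirmed fix; passing an object as data makes jQuery issue a POST, and the '#pluginTop' fragment is left off here in case IE mishandles it.

        // Debugging sketch: same request as .load(), but with explicit callbacks.
        var url = "/admin/inc/edit/content/plugin_options.php";
        $.ajax({
            type: "POST",
            url: url,
            data: dataString,          // same object built in the handler above
            cache: false,              // guard against IE caching AJAX responses
            success: function (html) {
                $("#pluginOptionsContainer").html(html);
            },
            error: function (xhr, status) {
                alert("Request failed in IE: " + status + " (HTTP " + xhr.status + ")");
            }
        });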

    Read the article

  • Intent filter for browsing XML (specifically rss) in android

    - by Leif Andersen
    I have an activity that I want to run every time the user opens an XML (specifically RSS) page in the browser (at least assuming the user picks it from the list of apps that can handle it). I currently have this intent filter:

        <activity android:name=".activities.EpisodesListActivity"
                  android:theme="@android:style/Theme.NoTitleBar">
            <intent-filter>
                <category android:name="android.intent.category.BROWSABLE"></category>
                <category android:name="android.intent.category.DEFAULT"></category>
                <action android:name="android.intent.action.VIEW"></action>
                <data android:scheme="http"></data>
            </intent-filter>
        </activity>

    As you can guess, this is an evil intent filter, as it wants to open whenever any page is requested via http. However, when I add the line:

        <data android:mimeType="application/rss+xml"></data>

    to make it:

        <activity android:name=".activities.EpisodesListActivity"
                  android:theme="@android:style/Theme.NoTitleBar">
            <intent-filter>
                <category android:name="android.intent.category.BROWSABLE"></category>
                <category android:name="android.intent.category.DEFAULT"></category>
                <action android:name="android.intent.action.VIEW"></action>
                <data android:scheme="http"></data>
                <data android:mimeType="application/rss+xml"></data>
            </intent-filter>
        </activity>

    the application no longer claims to be able to handle RSS files. Also, if I change the line to:

        <data android:mimeType="application/xml"></data>

    it won't work either (even for generic XML files). So what intent filter do I need in order to claim that the activity supports RSS? (Also, bonus points if you can tell me how to find out which URL the user opened. So far, I've always sent that information from one activity to the other using extras.) Thank you for your help.

    Read the article

  • Stored procedure performance randomly plummets; trivial ALTER fixes it. Why?

    - by gWiz
    I have a couple of stored procedures on SQL Server 2005 that I've noticed will suddenly take a significantly long time to complete when invoked from my ASP.NET MVC app running in an IIS6 web farm of four servers. Normal, expected completion time is less than a second; unexpected anomalous completion time is 25-45 seconds. The problem doesn't seem to ever correct itself. However, if I ALTER the stored procedure (even if I don't change anything in the procedure, except to perhaps add a space to the script created by SSMS Modify command), the completion time reverts to expected completion time. IIS and SQL Server are running on separate boxes, both running Windows Server 2003 R2 Enterprise Edition. SQL Server is Standard Edition. All machines have dual Xeon E5450 3GHz CPUs and 4GB RAM. SQL Server is accessed using its TCP/IP protocol over gigabit ethernet (not sure what physical medium). The problem is present from all web servers in the web farm. When I invoke the procedure from a query window in SSMS on my development machine, the procedure completes in normal time. This is strange because I was under the impression that SSMS used the same SqlClient driver as in .NET. When I point my development instance of the web app to the production database, I again get the anomalous long completion time. If my SqlCommand Timeout is too short, I get System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. Question: Why would performing ALTER on the stored procedure, without actually changing anything in it, restore the completion time to less than a second, as expected? Edit: To clarify, when the procedure is running slow for the app, it simultaneously runs fine in SSMS with the same parameters. The only difference I can discern is login credentials (next time I notice the behavior, I'll be checking from SSMS with the same creds). The ultimate goal is to get the procs to sustainably run with expected speed without requiring occasional intervention. Resolution: I wanted to to update this question in case others are experiencing this issue. Following the leads of the answers below, I was able to consistently reproduce this behavior. In order to test, I utilize sp_recompile and pass it one of the susceptible sprocs. I then initiate a website request from my browser that will invoke the sproc with atypical parameters. Lastly, I initiate a website request to a page that invokes the sproc with typical parameters, and observe that the request does not complete because of a SQL timeout on the sproc invocation. To resolve this on SQL Server 2005, I've added OPTIMIZE FOR hints to my SELECT. The sprocs that were vulnerable all have the "all-in-one" pattern described in this article. This pattern is certainly not ideal but was a necessary trade-off given the timeframe for the project.

    Read the article

  • Trouble passing a string as a SQLite ExecSQL command

    - by Hackbrew
    I keep getting the error: near "PassWord": syntax error when trying to execute the ExecSQL() statement. The command looks good in the output of the text file; in fact, I copied and pasted the command directly into SQLite Database Browser and the command executed properly. Here's the code that's producing the error:

        procedure TForm1.Button1Click(Sender: TObject);
        var
          i, iFieldSize: integer;
          sFieldName, sFieldType, sFieldList, sExecSQL: String;
          names: TStringList;
          f1: Textfile;
        begin
          // Open Source table - Table1 has 8 fields but has only two different field types, ftString and Boolean
          Table1.TableName := 'PWFile';
          Table1.Open;
          // FDConnection1.ExecSQL('drop table PWFile');

          sFieldList := '';
          names := TStringList.Create;
          for i := 0 to Table1.FieldCount - 1 do
          begin
            sFieldName := Table1.FieldDefList.FieldDefs[i].Name;
            sFieldType := GetEnumName(TypeInfo(TFieldType), ord(Table1.FieldDefList.FieldDefs[i].DataType));
            iFieldSize := Table1.FieldDefList.FieldDefs[i].Size;
            if sFieldType = 'ftString' then
              sFieldType := 'NVARCHAR' + '(' + IntToStr(iFieldSize) + ')';
            if sFieldType = 'ftBoolean' then
              sFieldType := 'INTEGER';
            names.Add(sFieldName + ' ' + sFieldType);
            if sFieldList = '' then
              sFieldList := sFieldName + ' ' + sFieldType
            else
              sFieldList := sFieldList + ', ' + sFieldName + ' ' + sFieldType;
          end;
          ListBox1.Items.Add(sFieldList);

          sExecSQL := 'create table IF NOT EXISTS PWFile (' + sFieldList + ')';

          // 08/18/2014 - Entered this to log the SQLite FDConnection1.ExecSQL command to a file
          AssignFile(f1, 'C:\Users\Test User\Documents\SQLite_Command.txt');
          Rewrite(f1);
          Writeln(f1, sExecSQL);
          { insert code here that would require a Flush before closing the file }
          Flush(f1);  { ensures that the text was actually written to file }
          CloseFile(f1);

          FDConnection1.ExecSQL(sFieldList);
          Table1.Close;
        end;

    Here's the actual command that gets executed:

        create table IF NOT EXISTS PWFile (PassWord NVARCHAR(10), PassName NVARCHAR(10), Dept NVARCHAR(10), Active NVARCHAR(1), Admin INTEGER, Shred INTEGER, Reports INTEGER, Maintain INTEGER)

    Read the article

  • How do I pass the value of the previous form element into an "onchange" javascript function?

    - by Jen
    Hello, I want to make some UI improvements to a page I am developing. Specifically, I need to add another drop-down menu to allow the user to filter results. This is my current code. HTML file:

        <select name="test_id" onchange="showGrid(this.name, this.value, 'gettestgrid')">
            <option selected>Select a test--></option>
            <option value=1>Test 1</option>
            <option value=2>Test 2</option>
            <option value=3>Test 3</option>
        </select>

    This is pseudo-code for what I want to happen:

        <select name="test_id">
            <option selected>Select a test--></option>
            <option value=1>Test 1</option>
            <option value=2>Test 2</option>
            <option value=3>Test 3</option>
        </select>
        <select name="statistics" onchange="showGrid(PREVIOUS.name, PREVIOUS.VALUE, THIS.value)">
            <option selected>Select a data display --></option>
            <option value='gettestgrid'>Show averages by student</option>
            <option value='gethomeroomgrid'>Show averages by homeroom</option>
            <option value='getschoolgrid'>Show averages by school</option>
        </select>

    How do I access the previous field's name and value? Any help much appreciated, thanks! Also, the JS function for reference:

        function showGrid(name, value, phpfile) {
            xmlhttp = GetXmlHttpObject();
            if (xmlhttp == null) {
                alert("Browser does not support HTTP Request");
                return;
            }
            var url = phpfile + ".php";
            url = url + "?" + name + "=" + value;
            url = url + "&sid=" + Math.random();
            xmlhttp.onreadystatechange = stateChanged;
            xmlhttp.open("GET", url, true);
            xmlhttp.send(null);
        }
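    A hedged sketch of one way the second drop-down could reach the first one: if both selects sit inside the same form, this.form.elements can look the sibling up by name. The <form> wrapper is an assumption for illustration, not the asker's final markup.

        <form>
            <select name="test_id">
                <option selected>Select a test--></option>
                <option value=1>Test 1</option>
            </select>
            <!-- pass the first select's name and value, plus this select's value -->
            <select name="statistics"
                    onchange="showGrid(this.form.elements['test_id'].name,
                                       this.form.elements['test_id'].value,
                                       this.value)">
                <option selected>Select a data display --></option>
                <option value='gettestgrid'>Show averages by student</option>
            </select>
        </form>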

    Read the article

  • How to download file into string with progress callback?

    - by Kaminari
    I would like to use the WebClient (or is there a better option?), but there is a problem. I understand that opening the stream takes some time and this cannot be avoided. However, reading it takes strangely much more time compared to reading it entirely at once. Is there a best way to do this? I mean two ways: to a string and to a file. Progress is my own delegate and it's working well. FIFTH UPDATE: Finally, I managed to do it. In the meantime I checked out some solutions, which made me realize that the problem lay elsewhere. I tested custom WebResponse and WebRequest objects, the libCURL.NET library, and even sockets. The difference in time was gzip compression: the compressed stream length was simply half the normal stream length, and thus the download took less than 3 seconds in the browser. I'm posting some code in case someone wants to know how I solved this (some headers are not needed):

        public static string DownloadString(string URL)
        {
            WebClient client = new WebClient();
            client.Headers["User-Agent"] = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1045 Safari/532.5";
            client.Headers["Accept"] = "application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5";
            client.Headers["Accept-Encoding"] = "gzip,deflate,sdch";
            client.Headers["Accept-Charset"] = "ISO-8859-2,utf-8;q=0.7,*;q=0.3";

            Stream inputStream = client.OpenRead(new Uri(URL));
            MemoryStream memoryStream = new MemoryStream();
            const int size = 32 * 4096;
            byte[] buffer = new byte[size];

            if (client.ResponseHeaders["Content-Encoding"] == "gzip")
            {
                inputStream = new GZipStream(inputStream, CompressionMode.Decompress);
            }

            int count = 0;
            do
            {
                count = inputStream.Read(buffer, 0, size);
                if (count > 0)
                {
                    memoryStream.Write(buffer, 0, count);
                }
            } while (count > 0);

            string result = Encoding.Default.GetString(memoryStream.ToArray());
            memoryStream.Close();
            inputStream.Close();
            return result;
        }

    I think the asynchronous functions will be almost the same, but I will simply use another thread to fire this function. I don't need precise progress indication.

    Read the article

  • How to detect screen readers/MSAA without focusing the Flash movie?

    - by utt73
    I am trying to detect the presence of assistive technology using Flash. When a Flash movie holding the ActionScript below on frame 1 is loaded (and a screen reader talking to IE or Firefox over MSAA is active -- JAWS or NVDA), Accessibility.isActive() does not return true until the movie is focused. Well, actually not until some "event" happens: the movie will just sit there until I right-click it and show Flash Player's context menu -- it seems only then does Accessibility.isActive() return true. Right-clicking is the only way I could get the movie to "wake up". How do I get the movie to react on its own and detect MSAA? I've tried sending focus to it with JavaScript. Can I fake a right-click in JavaScript or ActionScript? Or do you know which events a right-click fires in a Flash movie -- possibly I can make that event happen programmatically? My ActionScript:

        var x = 0;

        // Check if Microsoft Active Accessibility (MSAA) is active.
        // Setting takes 1-2 seconds to detect -- hence the setTimeout loop.
        function check508() {
            if (Accessibility.isActive()) {
                // remove this later... just visual for testing
                logo.glogo.logotext.nextFrame();
                // tell the page's javascript this is a 508 user
                getURL("javascript:setAccessible();");
            } else if (x < 100) {
                trace("There is currently no active accessibility aid. Attempt " + x);
                x++;
                setTimeout(check508, 200);
            }
        }

        /*
        // FYI: only checks if the browser can talk to MSAA, not that it is actually running. Sigh.
        if (System.capabilities.hasAccessibility) {
            logo.glogo.logotext.nextFrame();
            getURL("javascript:setAccessible();")
        };
        */

        check508();
        stop();

    My HTML:

        <embed id="detector" width="220" height="100" quality="high" wmode="window"
               type="application/x-shockwave-flash" src="/images/detect.swf"
               pluginspage="http://www.adobe.com/go/getflashplayer" flashvars="">

    Read the article

  • Efficient file buffering & scanning methods for large files in python

    - by eblume
    The description of the problem I am having is a bit complicated, and I will err on the side of providing more complete information. For the impatient, here is the briefest way I can summarize it: what is the fastest (least execution time) way to split a text file into ALL (overlapping) substrings of size N (bounded N, e.g. 36) while throwing out newline characters?

    I am writing a module which parses files in the FASTA ASCII-based genome format. These files comprise what is known as the 'hg18' human reference genome, which you can download from the UCSC genome browser (go slugs!) if you like. As you will notice, the genome files are composed of chr[1..22].fa and chr[XY].fa, as well as a set of other small files which are not used in this module. Several modules already exist for parsing FASTA files, such as BioPython's SeqIO. (Sorry, I'd post a link, but I don't have the points to do so yet.) Unfortunately, every module I've been able to find doesn't do the specific operation I am trying to do. My module needs to split the genome data ('CAGTACGTCAGACTATACGGAGCTA' could be a line, for instance) into every single overlapping N-length substring. Let me give an example using a very small file (the actual chromosome files are between 355 and 20 million characters long) and N=8:

        import cStringIO
        example_file = cStringIO.StringIO("""\
        >header
        CAGTcag
        TFgcACF
        """)
        for read in parse(example_file):
            print read

    which prints:

        CAGTCAGTF
        AGTCAGTFG
        GTCAGTFGC
        TCAGTFGCA
        CAGTFGCAC
        AGTFGCACF

    The function that I found had the absolute best performance among the methods I could think of is this:

        def parse(file):
            size = 8  # of course in my code this is a function argument
            file.readline()  # skip past the header
            buffer = ''
            for line in file:
                buffer += line.rstrip().upper()
                while len(buffer) >= size:
                    yield buffer[:size]
                    buffer = buffer[1:]

    This works, but unfortunately it still takes about 1.5 hours (see note below) to parse the human genome this way. Perhaps this is the very best I am going to see with this method (a complete code refactor might be in order, but I'd like to avoid it, as this approach has some very specific advantages in other areas of the code), but I thought I would turn this over to the community. Thanks! Note: this time includes a lot of extra calculation, such as computing the opposing strand read and doing hashtable lookups on a hash of approximately 5G in size. Post-answer conclusion: it turns out that using fileobj.read() and then manipulating the resulting string (string.replace(), etc.) took relatively little time and memory compared to the remainder of the program, and so I used that approach. Thanks everyone!

    Read the article

  • Error highlighting search results in PHP

    - by fusion
    I'm trying to figure out what is wrong in this code. It either doesn't highlight the search result OR it outputs HTML tags surrounding the highlighted text.

        $search_result = "";
        $search_result = trim($search_result);
        $special_cases = array('%', '_', '+');
        $search_result = str_replace($special_cases, '', $_GET["q"]);

        // Check if the string is empty
        if ($search_result == "") {
            echo "<p>Search Error</p><p>Please enter a search...</p>";
            exit();
        }

        $result = mysql_query('SELECT cQuotes, vAuthor, cArabic, vReference FROM thquotes
            WHERE cQuotes LIKE "%' . mysql_real_escape_string($search_result) . '%"
            ORDER BY idQuotes DESC', $conn)
            or die('Error: '.mysql_error());

        // eliminating special characters
        function h($s) {
            echo htmlspecialchars($s, ENT_QUOTES);
        }

        function highlightWords($string, $word) {
            $string = str_replace($word, "<span style='background-color: #FFE066;font-weight:bold;'>".$word."</span>", $string);
            /*** return the highlighted string ***/
            return $string;
        }
        ?>
        <div class="caption">Search Results</div>
        <div class="center_div">
        <table>
        <?php
        while ($row = mysql_fetch_array($result, MYSQL_ASSOC)) {
            $cQuote = highlightWords($row['cQuotes'], $search_result);
        ?>
            <tr>
                <td style="text-align:right; font-size:15px;"><?php h($row['cArabic']); ?></td>
                <td style="font-size:16px;"><?php h($cQuote); ?></td>
                <td style="font-size:12px;"><?php h($row['vAuthor']); ?></td>
                <td style="font-size:12px; font-style:italic; text-align:right;"><?php h($row['vReference']); ?></td>
            </tr>
        <?php } ?>
        </table>
        </div>

    In the browser, the output comes out as:

        A good <span style='background-color: #FFE066;font-weight:bold;'>action</span> is an ever-remaining store and a pure yield

    or, if a div is used with a class:

        A good <div class='highlight'>action</div> is an ever-remaining store and a pure yield

    Read the article

  • Hidden divs for "lazy javascript" loading? Possible security/other issues?

    - by xyld
    I'm curious about people's opinions and thoughts on this situation. The reason I'd like to lazy-load JavaScript is performance: loading JavaScript at the end of the body reduces browser blocking and ends up with much faster page loads. But there is some automation I'm using to generate the HTML (Django, specifically). This automation has the convenience of allowing forms to be built with "widgets" that output the content needed to render the entire widget (extra JavaScript, CSS, ...). The problem is that the widget wants to output JavaScript immediately into the middle of the document, but I want to ensure all JavaScript loads at the end of the body. When the following widget is added to a form, you can see it renders some <script>...</script> tags:

        class AutoCompleteTagInput(forms.TextInput):
            class Media:
                css = {
                    'all': ('css/jquery.autocomplete.css', )
                }
                js = (
                    'js/jquery.bgiframe.js',
                    'js/jquery.ajaxQueue.js',
                    'js/jquery.autocomplete.js',
                )

            def render(self, name, value, attrs=None):
                output = super(AutoCompleteTagInput, self).render(name, value, attrs)
                page_tags = Tag.objects.usage_for_model(DataSet)
                tag_list = simplejson.dumps([tag.name for tag in page_tags], ensure_ascii=False)
                return mark_safe(u'''<script type="text/javascript">
                    jQuery("#id_%s").autocomplete(%s, {
                        width: 150, max: 10, highlight: false, scroll: true,
                        scrollHeight: 100, matchContains: true, autoFill: true
                    });
                    </script>''' % (name, tag_list,)) + output

    What I'm proposing is that if someone uses a <div class="lazy-js">...</div> with some CSS (.lazy-js { display: none; }) and some JavaScript (jQuery('.lazy-js').each(function(index) { eval(jQuery(this).text()); })), you can effectively force all JavaScript to load at the end of the page load:

        class AutoCompleteTagInput(forms.TextInput):
            class Media:
                css = {
                    'all': ('css/jquery.autocomplete.css', )
                }
                js = (
                    'js/jquery.bgiframe.js',
                    'js/jquery.ajaxQueue.js',
                    'js/jquery.autocomplete.js',
                )

            def render(self, name, value, attrs=None):
                output = super(AutoCompleteTagInput, self).render(name, value, attrs)
                page_tags = Tag.objects.usage_for_model(DataSet)
                tag_list = simplejson.dumps([tag.name for tag in page_tags], ensure_ascii=False)
                return mark_safe(u'''<div class="lazy-js">
                    jQuery("#id_%s").autocomplete(%s, {
                        width: 150, max: 10, highlight: false, scroll: true,
                        scrollHeight: 100, matchContains: true, autoFill: true
                    });
                    </div>''' % (name, tag_list,)) + output

    Never mind the details of my specific implementation (the specific media involved); I'm looking for a consensus on whether lazy-loading JavaScript through hidden tags can pose issues, security-related or otherwise. One of the most convenient parts about this is that it follows the DRY principle rather well, IMO, because you don't need to hack up a specific lazy-load for each instance on the page. It just "works". UPDATE: I'm not sure if Django has the ability to queue things (via fancy template inheritance or something?) to be output just before the closing </body>?
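    As a point of comparison, a common alternative to eval-ing hidden div contents is to have each widget push its initializer onto a queue that runs after the scripts at the bottom of the body have loaded. This is a hedged sketch of that pattern, not Django- or widget-specific; the lazyQueue name and the #id_tags selector are made up for illustration.

        <!-- Widget output: queue the call instead of running it immediately -->
        <script type="text/javascript">
            window.lazyQueue = window.lazyQueue || [];
            window.lazyQueue.push(function () {
                jQuery("#id_tags").autocomplete(["django", "jquery"], { max: 10 });
            });
        </script>

        <!-- At the very end of <body>, after jQuery and the plugins have loaded -->
        <script type="text/javascript">
            jQuery(function () {
                for (var i = 0; i < window.lazyQueue.length; i++) {
                    window.lazyQueue[i]();   // run each deferred widget initializer
                }
            });
        </script>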

    Read the article

  • How can I stop Flash from changing indent when user Clicks on hyperlink in TextField?

    - by Paul Chernoch
    I have a TextField which I initialize by setting htmlText. The text has anchor tags (hyperlinks). When a user clicks on a hyperlink, the indentation of the second and subsequent lines in the paragraph changes. Why? How do I stop it? My HTML has an image at the beginning of the line, followed by the anchor tag, followed by more text. To style the hyperlinks to always look blue and to be underlined when the mouse is over them, I do this:

        var css:StyleSheet = new StyleSheet();
        css.parseCSS("a {color: #0000FF;} a:hover {text-decoration: underline;}");
        stepText.styleSheet = css;
        stepText.htmlText = textToUse;
        stepText.visible = true;

    Here is a fragment of the HTML text (with newlines and extra whitespace added to improve readability -- originally it was one long line):

        <textformat indent="-37" blockindent="37">
            <img src="media/interface/level-1-bullets/solid-circle.png" align="left" hspace="8" vspace="1"/>
            American Dental Association. (n.d.). <i>Cleaning your teeth and gums (oral hygiene)</i>.
            Retrieved 11/24/08, from
            <a href="http://www.ada.org/public/topics/cleaning_faq.asp" target="_blank">http://www.ada.org/public/topics/cleaning_faq.asp</a>
        </textformat>
        <br/>

    As it turns out, the text field is of a width such that the text wraps and the second line starts with "Retrieved 11/24/08". Clicking on the hyperlink causes this particular line to be indented. Subsequent paragraphs are not affected. ASIDE: The image is a list bullet about 37 pixels wide. (I used images instead of li tags because Flash does not allow nested lists, so I faked it using a series of images with varying amounts of whitespace to simulate three levels of indentation.) IDEA: I was thinking of changing all hyperlinks to use "event:" as the URL protocol, which causes a TextEvent.LINK event to be triggered instead of following the link. Then I would have to open the browser in a second call. I could use this event handler to set the HTML text to itself, which might clear the problem. (When I switch pages in my application and then come back to the page, everything is OK again.) PROBLEM: If I use the "event:" protocol and the user tries a right-mouse-button click, they will get an error, or so I am told. (See http://www.blog.lessrain.com/as3-texteventlink-and-contextmenu-incompatibilities/ ) I do not like this trade-off.

    Read the article

  • ajaxSubmit and Other Code. Can someone help me determine what this code is doing?

    - by Matt Dawdy
    I've inherited some code that I need to debug. It isn't working at present, and my task is to get it to work; no other requirements have been given to me. No, this isn't homework -- this is a maintenance-nightmare job. ASP.NET (framework 3.5), C#, jQuery 1.4.2. This project makes heavy use of jQuery and AJAX. There is a drop-down on a page that, when an item is chosen, is supposed to add that item (it's a user) to an object in the database. To accomplish this, the previous programmer first, on page load, dynamically loads the entire page through AJAX. To do this, he has five divs, and each one is loaded from a jQuery call to a different full page in the website. Somehow, the HTML, HEAD, BODY and all the other stuff is stripped out and the contents of the div are loaded with the content of the .aspx page. Which seems incredibly wrong to me, since it relies on the browser to magically strip out html, head, body and form tags and merge them with the existing html, head, body and form tags. Also, as the "content" page is returned as a string, the previous programmer has this code running on it before it is appended to the div:

        function CleanupResponseText(responseText, uniqueName) {
            responseText = responseText.replace("theForm.submit();", "SubmitSubForm(theForm, $(theForm).parent());");
            responseText = responseText.replace(new RegExp("theForm", "g"), uniqueName);
            responseText = responseText.replace(new RegExp("doPostBack", "g"), "doPostBack" + uniqueName);
            return responseText;
        }

    When the dropdown itself fires its onchange event, here is the code that gets fired:

        function SubmitSubForm(form, container) {
            //ShowLoading(container);
            $(form).ajaxSubmit({
                url: $(form).attr("action"),
                success: function(responseText) {
                    $(container).html(CleanupResponseText(responseText, form.id));
                    $("form", container).css("margin-top", "0").css("padding-top", "0");
                    //HideLoading(container);
                }
            });
        }

    This blows up in IE with the message "Microsoft JScript runtime error: Object doesn't support this property or method" -- which, I think, means the $(form).ajaxSubmit method doesn't exist. What is this code really trying to do? I am so turned around right now that I think my only option is to scrap everything and start over, but I'd rather not do that unless necessary. Is this code good? Is it working against .NET, and is that why we are having issues?
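    For context, ajaxSubmit is not part of jQuery core; it comes from the jQuery Form plugin, so the "Object doesn't support this property or method" error is consistent with that plugin not being loaded. A hedged sketch of roughly equivalent behaviour using only core jQuery 1.4 APIs might look like the following; it is an illustration, not a drop-in replacement for the inherited code.

        // Sketch: post the form's fields with core jQuery instead of ajaxSubmit.
        function SubmitSubFormCore(form, container) {
            $.ajax({
                type: "POST",
                url: $(form).attr("action"),
                data: $(form).serialize(),   // serialize the form fields into a query string
                success: function (responseText) {
                    $(container).html(CleanupResponseText(responseText, form.id));
                }
            });
        }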

    Read the article

  • How can I embed a conditional comment for IE with innerHTML?

    - by Samuel Charpentier
    OK, so I want to conditionally add this line of code:

        <!--[if ! IE]>
            <embed src="logo.svg" type="image/svg+xml" />
        <![endif]-->

    using:

        document.getElementById("logo").innerHTML = '...';

    in an if()/else() statement, and it doesn't write it! If I get rid of the conditional comment (<!--[if ! IE]><![endif]-->) and only put the SVG (<embed src="logo.svg" type="image/svg+xml" />), it works! What should I do? I found a way around it, but I think in the Android browser the thing will pop up twice. Here's what I've done (and it's validated stuff!):

        <!DOCTYPE html>
        <html>
        <head>
        <META CHARSET="UTF-8">
        <title>SVG Test</title>
        <script type="text/javascript">
        //<![CDATA[
        onload = function() {
            var ua = navigator.userAgent.toLowerCase();
            var isAndroid = ua.indexOf("android") > -1; //&& ua.indexOf("mobile");
            if (isAndroid) {
                document.getElementById("logo").innerHTML = '<img src="fin_palais.png"/>';
            }
        }
        //]]>
        </script>
        </head>
        <body>
        <div id="logo">
            <!--[if lt IE 9]>
                <img src="fin_palais.png"/>
            <![endif]-->
            <!--[if gte IE 9]><!-->
                <embed src="fin_palais.svg" type="image/svg+xml" />
            <!--<![endif]-->
        </div>
        </body>
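    An alternative that sidesteps conditional comments entirely is to feature-detect SVG support in JavaScript and inject either the <embed> or the PNG fallback. This is a hedged sketch, not a confirmed fix; the file names match the example above, and the createSVGRect check is one commonly used way to test for basic SVG support.

        // Sketch: pick SVG or PNG based on feature detection rather than browser sniffing.
        function supportsSvg() {
            return !!(document.createElementNS &&
                      document.createElementNS("http://www.w3.org/2000/svg", "svg").createSVGRect);
        }

        onload = function () {
            var logo = document.getElementById("logo");
            if (supportsSvg()) {
                logo.innerHTML = '<embed src="fin_palais.svg" type="image/svg+xml" />';
            } else {
                logo.innerHTML = '<img src="fin_palais.png"/>';
            }
        };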

    Read the article

  • Why does the parent page get refreshed when I click the link to open a thickbox-styled form?

    - by user333205
    Hi, all. I'm using Thickbox 3.1 to show a signup form. The form content comes from a jQuery Ajax POST, and the jQuery library is version 1.4.2. I placed a "signup" link inside a div area, which is part of my other, larger pages, and the whole content of that div area is Ajax-posted from my server. To make Thickbox work in this arrangement, I have modified the Thickbox code a little, like this:

        // add thickbox to href & area elements that have a class of .thickbox
        function tb_init(domChunk) {
            $(domChunk).live('click', function() {
                var t = this.title || this.name || null;
                var a = this.href || this.alt;
                var g = this.rel || false;
                tb_show(t, a, g);
                this.blur();
                return false;
            });
        }

    This modification is the only change against the original version. Because the "signup" link is placed in Ajax-loaded content, I use live() instead of binding the click event directly. When I tested on my PC, Thickbox worked well: I could see the signup form quickly, without the content of the parent page (here, the other larger pages) appearing to refresh. But after transferring my site files to the virtual host, clicking the "signup" link presents the signup form very slowly. The larger pages visibly refresh, because the browser (IE6) keeps reloading images from the server; these images are set as background images in the CSS files. I think that's because of the slow network connection. But why does the parent page get refreshed? And why does the browser reload those images one more time? Haven't those images already been stored on the local computer's disk? Is there a way to stop that reloading? Sometimes the signup form can't be displayed at all due to the slow network connection. To see the question in action, you can access http://www.juliantec.info/track-the-source.html and click the second link in the left grey area; that is the "signup" link mentioned above. Thanks!
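    One thing worth checking, as a hedged aside rather than a confirmed diagnosis: IE6 is known for re-requesting CSS background images instead of serving them from cache, and a commonly cited workaround is to enable its background-image cache from script (combined with proper cache headers on the server). A minimal sketch:

        // Sketch: ask IE6 to cache CSS background images instead of re-fetching them.
        try {
            document.execCommand("BackgroundImageCache", false, true);
        } catch (e) {
            // other browsers don't implement this command; ignore
        }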

    Read the article

  • CSS selectors: should I minimise my use of the class attribute in the HTML or optimise for speed?

    - by Laurent Bourgault-Roy
    As I was working on a small website, I decided to use the PageSpeed extension to check if there were improvements I could make to have the site load faster. However, I was quite surprised when it told me that my use of CSS selectors was "inefficient". I was always told that you should keep the usage of the class attribute in the HTML to a minimum, but if I understand correctly what PageSpeed tells me, it's much more efficient for the browser to match directly against a class name. That makes sense to me, but it also means I need to put more CSS classes in my HTML, and it makes my .css file a little harder to read. I usually tend to mark up my CSS like this:

        #mainContent p.productDescription em.priceTag { ... }

    which makes it easy to read: I know this will affect the main content, and that it affects something in a paragraph tag (so I won't start to put all sorts of layout code in it) that describes a product, and that it's something that needs emphasis. However, it seems I should rewrite it as:

        .priceTag { ... }

    which removes all context information from the style. And if I want to use differently formatted price tags (for example, one in a list in the sidebar and one in a paragraph), I need to use something like:

        .paragraphPriceTag { ... }
        .listPriceTag { ... }

    which really annoys me, since I seem to be duplicating the semantics of the HTML in my classes. It also means I can't put common styles in an unqualified .priceTag { ... }, and thus I need to replicate the style in both CSS rules, making changes harder. (Although for that I could use multiple class selectors, but IE6 doesn't support them.) Making code harder to read for the sake of speed has never really been considered good practice, except where it is critical, of course. This is why people use PHP/Ruby/C# etc. instead of C/assembly to code their sites: it's easier to write and debug. So I was wondering: should I stick with few CSS classes and complex selectors, or should I go the optimisation route and remove my fancy CSS selectors for the sake of speed? Does PageSpeed make over-the-top recommendations? On most modern computers, will it even make a difference?

    Read the article

  • Issues in Ajax based applications

    - by Sinuhe
    I'm very interested in developing Ajax-based applications -- that is, loading almost all of the content of the application via XMLHttpRequest, instead of only some combos and widgets. But when I try to do this from scratch, I soon find some problems without an easy solution. I wonder if there is some framework (both client and server side) to deal with these issues. As far as I know, there isn't (but I've searched mainly in the Java world). So I am seriously thinking of writing my own framework, at least for my projects. Therefore, in this question I ask for several things: first, the possible problems of Ajax-based development; then, some framework or utility to deal with them; finally, if there is no framework available, what features it must have. Here are the issues I thought of (a sketch of one way to handle issues 4 and 7 follows the list):

        1 - JavaScript must be enabled. Security paranoia isn't the only problem: a lot of mobile devices couldn't use the application either.
        2 - Sometimes you need to update more than one DIV (e.g. main content, menu and breadcrumbs).
        3 - Unknown response type: when you make an Ajax call, you set the callback function too, usually specifying whether the expected response is a JavaScript object or which DIV to put the result in. But this fails when you get another type of response: for example, when the session has expired and the user must log in again.
        4 - The browser's refresh, back and forward buttons can be a real pain. Users will expect different behaviors depending on the situation.
        5 - When search engines index a site, they only follow links. Thus, content loaded by Ajax won't "exist" for anyone who doesn't already know about it.
        6 - Users can ask to open a link in a different window/tab.
        7 - The address bar doesn't show the "real" page you are on, so you can't copy the location and send it to a friend or bookmark the page.
        8 - If you want to monetize the site, you can place some advertising. As you don't refresh the entire page and you want to change the ad after some time, you have to refresh only the DIV where the ad is. But this can violate the terms and conditions of your ad service; in fact, it can go against the AdSense TOS.
        9 - When you refresh an entire page, all JavaScript state gets "cleaned". But across Ajax calls, all JavaScript objects remain.
        10 - You can't easily change your CSS properties.
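    As referenced above, issues 4 and 7 (history buttons and shareable URLs) are commonly handled by keeping the application state in the URL fragment and reacting to fragment changes. This is a hedged, minimal sketch of that idea; the "sections/" URL scheme and the #content container are placeholders, and onhashchange is not available in older browsers.

        // Sketch: encode the current "page" in the hash so back/forward and bookmarks work.
        function loadSection(name) {
            var container = document.getElementById("content");
            var xhr = new XMLHttpRequest();
            xhr.open("GET", "sections/" + name + ".html", true);   // placeholder URL scheme
            xhr.onreadystatechange = function () {
                if (xhr.readyState == 4 && xhr.status == 200) {
                    container.innerHTML = xhr.responseText;
                }
            };
            xhr.send(null);
        }

        // When the user navigates (back/forward, pasted link, bookmark), honour the hash.
        window.onhashchange = function () {
            loadSection(location.hash.replace("#", "") || "home");
        };

        // Links set the hash instead of loading content directly, e.g. <a href="#news">News</a>
        window.onhashchange();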

    Read the article

  • Use HTTP PUT to create a new cache (ehCache) running on the same Tomcat?

    - by socal_javaguy
    I am trying to send an HTTP PUT (in order to create a new cache and populate it with my generated JSON) to ehCache, using my web service which is on the same local Tomcat instance. I'm new to RESTful web services and am using JDK 1.6, Tomcat 7, ehCache, and JSON. I have my POJOs defined like this:

    Person POJO:

        import javax.xml.bind.annotation.XmlRootElement;

        @XmlRootElement
        public class Person {
            private String firstName;
            private String lastName;
            private List<House> houses;
            // Getters & Setters
        }

    House POJO:

        import javax.xml.bind.annotation.XmlRootElement;

        @XmlRootElement
        public class House {
            private String address;
            private String city;
            private String state;
            // Getters & Setters
        }

    Using a PersonUtil class, I hardcoded the POJOs as follows:

        public class PersonUtil {
            public static Person getPerson() {
                Person person = new Person();
                person.setFirstName("John");
                person.setLastName("Doe");

                List<House> houses = new ArrayList<House>();
                House house = new House();
                house.setAddress("1234 Elm Street");
                house.setCity("Anytown");
                house.setState("Maine");
                houses.add(house);
                person.setHouses(houses);

                return person;
            }
        }

    I am able to create a JSON response per a GET request:

        @Path("")
        public class MyWebService {
            @GET
            @Produces(MediaType.APPLICATION_JSON)
            public Person getPerson() {
                return PersonUtil.getPerson();
            }
        }

    When deploying the war to Tomcat and pointing the browser to http://localhost:8080/personservice/, the generated JSON is:

        {
            "firstName" : "John",
            "lastName" : "Doe",
            "houses": [
                { "address" : "1234 Elmstreet", "city" : "Anytown", "state" : "Maine" }
            ]
        }

    So far, so good. However, I have a different app running on the same Tomcat instance (which has support for REST): http://localhost:8080/ehcache/rest/. While Tomcat is running, I can issue a PUT like this:

        echo "Hello World" | curl -S -T - http://localhost:8080/ehcache/rest/hello/1

    When I then "GET" it like this:

        curl http://localhost:8080/ehcache/rest/hello/1

    it yields:

        Hello World

    What I need to do is create a PUT which will put my entire generated Person JSON and create a new cache at http://localhost:8080/ehcache/rest/person. When I do a "GET" on that URL, it should look like this:

        {
            "firstName" : "John",
            "lastName" : "Doe",
            "houses": [
                { "address" : "1234 Elmstreet", "city" : "Anytown", "state" : "Maine" }
            ]
        }

    So far, this is what my PUT looks like:

        @PUT
        @Path("/ehcache/rest/person")
        @Produces(MediaType.APPLICATION_JSON)
        @Consumes(MediaType.APPLICATION_JSON)
        public Response createCache() {
            ResponseBuilder response = Response.ok(PersonUtil.getPerson(), MediaType.APPLICATION_JSON);
            return response.build();
        }

    Questions: (1) Is this the correct way to write the PUT? (2) What should I write inside the createCache() method to have it PUT my generated JSON into http://localhost:8080/ehcache/rest/person? (3) What would the command-line curl command look like to use the PUT? Thanks for taking the time to read this...
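    On the client side of question (3), the same kind of PUT that the curl examples perform can also be issued from browser JavaScript, which can help when experimenting. This is only a hedged sketch: the /ehcache/rest/person/1 cache/element path is an assumption for illustration, and whether the server auto-creates the cache or requires it to exist beforehand should be verified against the ehCache REST documentation.

        // Sketch: PUT the JSON document produced by the person service into the
        // ehCache REST server, then check the HTTP status of the response.
        var json = JSON.stringify({
            firstName: "John",
            lastName: "Doe",
            houses: [{ address: "1234 Elm Street", city: "Anytown", state: "Maine" }]
        });

        var put = new XMLHttpRequest();
        put.open("PUT", "http://localhost:8080/ehcache/rest/person/1", true);  // assumed cache/element path
        put.setRequestHeader("Content-Type", "application/json");
        put.onreadystatechange = function () {
            if (put.readyState == 4) {
                console.log("PUT finished with HTTP status " + put.status);
            }
        };
        put.send(json);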

    Read the article

  • Using Selenium-IDE with a rich Javascript application?

    - by Darien
    Problem: At my workplace, we're trying to find the best way to create automated tests for an almost wholly JavaScript-driven intranet application. Right now we're stuck trying to find a good trade-off between:

        - Application code in reusable and nest-able GUI components.
        - Tests which are easily created by the testing team.
        - Tests which can be recorded once and then automated.
        - Tests which do not break after small cosmetic changes to the site.

    XPath expressions (or other possible expressions, like jQuery selectors) naively generated from Selenium-IDE are often non-repeatable and very fragile. Conversely, having the JS code generate special unique ID values for every important DOM element on the page... well, that is its own headache, complicated by reusable GUI components and IDs needing to be consistent when the test is re-run. What successes have other people had with this kind of thing? How do you do automated application-level testing of a rich JS interface?

    Limitations:

        - We are using JavascriptMVC 2.0, hopefully 3.0 soon so that we can upgrade to jQuery 1.4.x.
        - The test-making folks are mostly trained to use Selenium IDE to directly record things.
        - The test leads would prefer a page-unique HTML ID on each clickable element on the page.
        - Training the testers to write or alter special expressions (such as telling them which HTML class names are important branching points) is a no-go.
        - We try to make reusable JavaScript components, but this means very few GUI components can treat themselves (or what they contain) as unique.
        - Some of our components already use HTML ID values in their operation. I'd like to avoid doing this anyway, but it complicates the idea of ID-based testing.
        - It may be possible to add custom facilities (like a locator-builder or a new locator method) to the Selenium-IDE installation the testers use.
        - Almost everything that goes on occurs within a single "page load" from a conventional browser perspective, even when items are saved.

    Current thoughts: I'm considering a system where a custom locator-builder (JavaScript code) for Selenium-IDE will talk with our application code as the tester is recording. In this way, our application becomes partially responsible for generating a mostly-flexible expression (XPath or jQuery) for any given DOM element. While this can avoid requiring more training for testers, I worry it may be over-thinking things.
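    One pattern that fits the "application helps generate locators" idea without hand-writing IDs is to have each component stamp a stable, hierarchical test hook onto its root element at render time, which a recorder or locator-builder can then prefer over brittle XPath. This is a hedged, framework-agnostic sketch; the data-test-hook attribute name and the naming scheme are assumptions, not JavascriptMVC or Selenium APIs.

        // Sketch: give every component instance a stable, hierarchical test hook.
        function stampTestHook(element, componentName, parentElement) {
            var parentHook = parentElement ? parentElement.getAttribute("data-test-hook") : null;
            // Count earlier siblings with the same component name so repeated
            // instances get a consistent index on every render.
            var index = 0, node = element.previousSibling;
            while (node) {
                if (node.nodeType == 1 && node.getAttribute &&
                    node.getAttribute("data-test-component") == componentName) {
                    index++;
                }
                node = node.previousSibling;
            }
            element.setAttribute("data-test-component", componentName);
            element.setAttribute("data-test-hook",
                (parentHook ? parentHook + "." : "") + componentName + "[" + index + "]");
        }

        // A locator could then target, e.g., //*[@data-test-hook='searchForm[0].submitButton[0]']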

    Read the article

< Previous Page | 519 520 521 522 523 524 525 526 527 528 529 530  | Next Page >