Search Results

Search found 13797 results on 552 pages for 'browser madness'.


  • Please recommend good books for telemetry / SCADA system design & programming

    - by Mawg
    I am looking at several projects, all with roughly the same functionality. Some instruments collect some data (or control some functionality). They communicate over the Internet (Ethernet/wifi/GPRS/satellite) with a database server which stores the measurements and provides a browser-based means of querying the data, producing reports, etc (and possibly also allows control of the remote equipment). Can anyone recommend a good book describing an approach to developing such a software architecture, keeping it generic, and which tools, languages, test methods, etc to use? (note to self: ask a similar question about possible existing frameworks)

    Read the article

  • GWT - RichTextArea - ScrollTo

    - by Yanick Rochon
    If I have a RichTextArea like this: RichTextArea rta = new RichTextArea(); rta.setHTML("<p id=\"foo\">Foo</p>....<p id=\"bar\">Bar</p>"); and I extend the RichTextArea class, what would be the proper way (cross-browser-wise) to write a scrollTo() method? Ex: class RichTextAreaExt extends RichTextArea { ... public native void scrollTo(String element) /*-{ // the underlying DOM element is an iframe, so.... }-*/; ... } Thanks!

    Read the article

  • Hiding Text in ie7

    - by user356849
    So I have this text generated by a javascript plugin. <a class="className">Text</a> a.className { background: url(images/a-image.png) no-repeat; } But the "Text" shows on top of the image... Now... with any respectable web browser, I can use color: rgba(0,0,0,0); to solve the problem, but IE7 doesn't obey standards of any sort. Any ideas?

    Read the article

  • Javascript src starts with //?

    - by Chris
    I'm starting to see more and more script references show up like so: <script type="text/javascript" src="//somedomain.com/somescript.js"></script> Note the lack of http: at the beginning of the src attribute. It seems to work fine and avoids the messy requirement of detecting http vs https, but I've never actually seen this URI format referenced anywhere. Where did it come from? Is this behavior documented anywhere? Is it guaranteed to work in any browser?

    Read the article

  • Approaches to timing out sessions on a web app using AJAX autorefreshes

    - by Braintapper
    I'm writing a web application that autorefreshes data with an AJAX call at set intervals. Because it's doing that, server-side user sessions never time out, since the last activity is refreshed with every AJAX call. Are there good client-side rules I could implement to time out the user? For example, should I track mouse movements in the browser, etc., or should I point the AJAX calls to URLs that don't refresh the session? I like that my AJAX calls hit a session-enabled URL, because I can also validate that the user is logged in, etc. Any thoughts in terms of whether I should even bother timing out the users?
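    One server-side variation on the "don't refresh the session" idea, sketched under the assumption of a Java servlet stack (the question doesn't name a backend): keep the container session alive, but track a separate last-activity timestamp that the polling URL never updates. The filter name, paths and timeout below are hypothetical.

    ```java
    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpSession;

    // Hypothetical filter: the container session stays alive, but we enforce our own
    // idle limit that the AJAX polling URL does not refresh.
    public class IdleTimeoutFilter implements Filter {
        private static final long MAX_IDLE_MS = 20 * 60 * 1000;  // 20 minutes (assumption)
        private static final String POLL_PATH = "/poll";         // hypothetical auto-refresh endpoint
        private static final String LAST_ACTIVITY = "lastRealActivity";

        public void init(FilterConfig config) {}
        public void destroy() {}

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpSession session = request.getSession(false);
            if (session != null) {
                Long last = (Long) session.getAttribute(LAST_ACTIVITY);
                long now = System.currentTimeMillis();
                if (last != null && now - last > MAX_IDLE_MS) {
                    session.invalidate();                          // treat the user as timed out
                } else if (!request.getRequestURI().endsWith(POLL_PATH)) {
                    session.setAttribute(LAST_ACTIVITY, now);      // only "real" requests count as activity
                }
            }
            chain.doFilter(req, res);
        }
    }
    ```

    Client-side idle detection (mouse/keyboard listeners) can complement this, but a server-side check is the part that can't be bypassed.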

    Read the article

  • Eclipse won't include/identify user .h files and .o files

    - by bks
    I'm new to Eclipse CDT and am trying to use an existing .o file that was supplied with the proper .h file. Eclipse just won't include it in the compilation process. I tried dragging & dropping into the Eclipse project browser, and the files did show there, but no use; I still get "No such file: No such file or directory". I tried defining the .o file under Project Properties > Tool Settings > MinGW C Linker > Miscellaneous > Other objects. That didn't work either. As for the header file, I did try a workaround: I created a new file in the project, named it after the file I want to include as a header, and copied the content. And yet the compiler/linker didn't recognize the object file. Perhaps you can help? Thank you

    Read the article

  • stumped on jquery call inside chrome extension

    - by phil swenson
    In my chrome extension I call this: jsonPost = { email:"[email protected]", password:"demo", content: $('#selected_text').val(), notification_type: $('#notification_type').val(), name:$('#notification_name').val() } $.post('http://localhost:3000/api/create.json', jsonPost, function (data) { console.log("type of data = " + typeof(data)); console.log("data in function = " + data); }); The data makes it to the server, but the response is lost. In the console I see: ---type of data = String ---data in function = (empty). So for some reason I am not getting the response back. It works from the browser. I even tried doing a GET against cnn.com and got no response. Any ideas? Thanks

    Read the article

  • VisualSVN Server + Trac Authentication Problems

    - by danscott
    I have Trac set up on my VisualSVN server (using Subversion authentication), however every time I navigate to the Trac home page after opening the browser, I get the basic authentication dialog asking me for my username/password. What I would like to do is have a login form in Trac, which would allow me to log in forever using cookies. I have tried installing the AccountManagerPlugin, but I am completely unsure of how to correctly set it up. (I am used to working with IIS on corporate intranets, so this is kind of alien to me) I have managed to bypass the basic authentication dialog by setting this in my httpd-custom.conf: AuthName "Trac" AuthType Basic AuthBasicProvider file AuthUserFile "E:/Repositories/htpasswd" #Require valid-user I have tried using SvnServePasswordStore as my password store but I do not know which of the files in the repository directory to point it at. Help would be appreciated!

    Read the article

  • Edit PDF online and save form data to server

    - by Clowerweb
    Hello, I have some PDF documents which are being displayed in the browser, with some fields already being pre-populated from the database using iTextSharp (we are running Windows Server 2008, IIS 7, SQL Server 2008, and ASP.NET 2.0/2.5 with C#). Our clients need to be able to fill in the remaining fields and save the PDF to the server. I have considered the following possibilities: 1.) Somehow using iTextSharp to parse the form fields, grab all the form data and save it to the database on submit. 2.) Adding a submit button to the PDF itself using LiveCycle with some sort of JS click event to save the FDF/XFDF/XDP/XML data either to the database or to a flat file on the server. I am currently unsure as to what the best approach would be, what would work, or how to implement any of these possible solutions, so any help would be greatly appreciated. Thanks!
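    For option 1, a minimal sketch of pulling the submitted field values out of the PDF with the AcroFields API, shown here with the Java flavour of iText 5 (the iTextSharp calls are near-identical); the file path and the database step are placeholders:

    ```java
    import com.itextpdf.text.pdf.AcroFields;
    import com.itextpdf.text.pdf.PdfReader;

    public class PdfFormDump {
        public static void main(String[] args) throws Exception {
            // Placeholder path - in the real handler this would be the submitted/uploaded PDF.
            PdfReader reader = new PdfReader("filled-form.pdf");
            AcroFields form = reader.getAcroFields();

            for (String name : form.getFields().keySet()) {
                String value = form.getField(name);   // current value of the form field
                System.out.println(name + " = " + value);
                // ...write name/value to the database here...
            }
            reader.close();
        }
    }
    ```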

    Read the article

  • Good link checking tool?

    - by AP257
    Hi all, Can anyone recommend a good, free link checker to check all pages within a domain? Ideally a browser add-on or a web app (otherwise something that runs on OS X). Crucially, it needs to follow links recursively within a domain. Links outside the domain should be followed to a depth of 1, but not checked recursively. This is for the fairly common situation where you want to check all pages on your own site, but not evaluate the links on e.g. Google's homepage. I can't find anything suitable. Am I missing something? I've tried the Firefox LinkChecker add-on and the W3C link validator - neither seems to have the 'follow recursively within a domain' property, or am I being dumb? I know Xenu does this, but I don't run Windows.

    Read the article

  • How can I have a certain set of CSS properties affect only IE users?

    - by rowan
    Definitely one or the other, not one and the other. If... HTML doesn't have an else function - or does it? Could you please be so kind as to code it in your answer? I'm a PHP newb but so far getting nice results! This one's got me buggered, though. If browser = IE then css/ie.css, else css/moz - with even a WebKit third option if you think it's needed... Thanks guys, you're all marvelous. Also, does anyone know of a full properties list for WebKit transitions/CSS?

    Read the article

  • Problem opening jQuery dialog box in IE8

    - by user291247
    Hello, I am using a jQuery dialog box and having a problem opening it in IE8; it opens in other browsers, but not in IE8. I also have one more problem: sometimes the AJAX request does not work in my dialog box in any browser. Code: // Dialog $('#login_div').dialog({ autoOpen: false, width: 600, buttons: { "Cancel": function() { $(this).dialog("close"); } } }); // Dialog Link - login_div $('#dialog_link').click(function(){ $('#login_div').dialog({ autoOpen: false }); //$('#login_div').show(); $('#login_div').dialog('open'); return false; });

    Read the article

  • jQuery drag and drop behavior with partially transparent image

    - by Aaron
    I'm trying to develop a drag-and-drop behavior based on the jQuery UI draggable behavior but am running into some road blocks. I want to be able to drag several images with transparent regions around a region of the screen. I want the user to be able to drag the image he clicks and not just whatever draggable div or PNG happens to be z-indexed on top. The below image is a screen grab from my test page. If I click the lower left region of the blue square through the red thing I should drag the square and not the red thing. The red thing is what gets dragged though because it is on top and the browser does not care about the transparency. My question is, how can I make it behave as expected in this situation and drag the square instead? Edit: Seems I can't attach images as a new user. See this URL for my example image: http://i42.tinypic.com/r1g4sk.png

    Read the article

  • Problem processing large data using Applet-Servlet communication

    - by Marquinio
    Hi everyone. I have an Applet that makes a request to a Servlet. On the servlet side it uses a PrintWriter to write the response back to the Applet: out.println("Field1|Field2|Field3|Field4|Field5......|Field10"); There are about 15000 records, so the out.println() gets executed about 15000 times. The problem is that when the Applet gets the response from the Servlet it takes about 15 minutes to process the records. I placed System.out.println calls, and processing pauses at around 5000; then after 15 minutes it continues processing and then it's done. Has anyone faced a similar problem? The servlet takes about 2 seconds to execute, so it seems the browser/Applet is too slow to process the records. Any ideas appreciated. Thanks.
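    The applet-side reading code isn't shown, but with a 15000-line response the usual suspects are an unbuffered stream or building one huge String with += in a loop. A rough sketch of reading the response line by line through a BufferedReader (the URL is a placeholder):

    ```java
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.net.URLConnection;
    import java.util.ArrayList;
    import java.util.List;

    public class RecordReader {
        public static List<String[]> fetchRecords() throws Exception {
            // Placeholder URL - the servlet that println()s one "Field1|...|Field10" record per line.
            URLConnection conn = new URL("http://example.com/app/records").openConnection();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"), 64 * 1024);
            List<String[]> records = new ArrayList<String[]>();
            try {
                String line;
                while ((line = in.readLine()) != null) {
                    // Split each record once and keep the fields; avoid concatenating
                    // everything into one String, which is quadratic over 15000 lines.
                    records.add(line.split("\\|"));
                }
            } finally {
                in.close();
            }
            return records;
        }
    }
    ```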

    Read the article

  • WebClient.DownloadString() Not Producing Exact HTML

    - by Ryan Fuentes
    So here's the deal. I'm creating a spider bot for a website that scans all the product pages and records the product data. I'm using C# and the WebClient library to download the HTML string. The site I'm crawling must be specially made, because the HTML received from WebClient.DownloadString() is different from the HTML I see when I view the page source in a browser. This seems intentional, because the only info I can't get is the price. Does anyone know a workaround for this problem, or can anyone explain what is happening? Thanks.

    Read the article

  • Android HTTP Connection

    - by Ubersoldat
    Can anybody tell me why this doesn't work in the Android emulator? From the browser I have access and the server is internal. All I can think of is that I'm missing some configuration in my app so it can access the network layer. try { InetAddress server = Inet4Address.getByName("thehost"); //Doesn't work either //or InetAddress server2 = Inet4Address.getByAddress(new String("192.168.1.30").getBytes()); if(server.isReachable(5000)){ Log.d(TAG, "Ping!"); } Socket clientsocket = new Socket(server, 8080); } catch (UnknownHostException e) { Log.e(TAG, "Server Not Found"); } catch (IOException e) { Log.e(TAG, "Couldn't open socket"); } It throws an UnknownHostException. Thanks
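    Two things worth checking, sketched below with the address and port from the question: the app needs android.permission.INTERNET declared in AndroidManifest.xml, and InetAddress.getByAddress() expects the raw four octets, not the bytes of the string "192.168.1.30".

    ```java
    import java.net.InetAddress;
    import java.net.Socket;

    // Assumes <uses-permission android:name="android.permission.INTERNET" />
    // is declared in AndroidManifest.xml; without it, lookups and sockets fail.
    public class ReachabilityCheck {
        public static void main(String[] args) throws Exception {
            // getByAddress() wants the raw octets, not "192.168.1.30".getBytes().
            InetAddress server = InetAddress.getByAddress(
                    new byte[] { (byte) 192, (byte) 168, 1, 30 });

            // isReachable() often reports false on the emulator (ICMP may be blocked),
            // so a plain connect attempt is usually the more telling test.
            Socket socket = new Socket(server, 8080);
            System.out.println("Connected: " + socket.isConnected());
            socket.close();
        }
    }
    ```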

    Read the article

  • Suddenly Facebook API stopped working on Windows Phone

    - by Juan Diego
    My code hasn't changed; it was working yesterday or so. I can OAuth and get the token, but then doing the following: WebClient wc = new WebClient(); wc.DownloadStringCompleted += result; wc.DownloadStringAsync(new Uri("https://graph.facebook.com/me&access_token=xxxTOKENxxx", UriKind.Absolute)); returns a NotFound WebClient exception: "The remote server returned an error: NotFound." The strange thing is that when pasting that same URL into Chrome or IE it does work (PC). I tried on the emulator and on 2 different real WP devices, and even pasted the same URL into the WP browser. It feels like Facebook is rejecting Windows Phone for some reason? Does anyone have an idea of what might be happening?

    Read the article

  • Callback function doesn't work when using getJSON

    - by asilloo
    Hi, this is the code that I am using. When I enter the link in the browser (IE or Mozilla) it works, returning something like MyFunc({"memes":[{"source":"http://www.knall......), but when I try to run it as an HTML file I get an error in the status bar. What is the problem? Thanks <head> <style>img{ height: 100px; float: left; }</style> <script src="http://code.jquery.com/jquery-latest.js"></script> </head> <body> <div id="images"></div> <script>$.getJSON("http://tagthe.net/api/?url=http://www.knallgrau.at/en&view=json&callback=MyFunc",function(data){ alert(data); }); </script> </body>

    Read the article

  • Selenium WebDriver works but SLOW (Java)

    - by Chris
    Code: WebDriver driver = new FirefoxDriver(); driver.get("http://www.cnn.com"); File scrFile = ((TakesScreenshot)driver).getScreenshotAs(OutputType.FILE); FileUtils.copyFile(scrFile, new File("c:\\test\\screenshot.png")); I am using Selenium WebDriver to take a screenshot of webpages. It runs great. However, from the time I hit run in eclipse to the time the screenshot shows up in my local drive is 7-10 seconds. Most of the latency seems to be launching Firefox. How can I speed up this process? Is there a way that I can use an already opened Firefox browser to save on opening a new one? Is this code somehow heavy? Details- Tried on CentOS box and Win7 box both using eclipse. myspeedtest.net shows 22Mbps down and 1 Mbps up.
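    One way to avoid paying the Firefox start-up cost for every capture is to keep a single driver alive and loop over the pages, roughly like this (the URL list and output paths are placeholders):

    ```java
    import java.io.File;
    import org.apache.commons.io.FileUtils;
    import org.openqa.selenium.OutputType;
    import org.openqa.selenium.TakesScreenshot;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class ScreenshotBatch {
        public static void main(String[] args) throws Exception {
            WebDriver driver = new FirefoxDriver();   // pay the browser start-up cost once
            try {
                String[] urls = { "http://www.cnn.com", "http://www.example.com" };
                for (int i = 0; i < urls.length; i++) {
                    driver.get(urls[i]);
                    File src = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
                    FileUtils.copyFile(src, new File("c:\\test\\screenshot-" + i + ".png"));
                }
            } finally {
                driver.quit();                        // close Firefox when the batch is done
            }
        }
    }
    ```

    Running Firefox under Xvfb on the CentOS box can shave a little more, but reusing one browser instance is usually the biggest win.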

    Read the article

  • Facebook Graph API shows different results in me/home

    - by elekatonio
    Hi, when I do a GET with my browser (already logged in at Facebook): https://graph.facebook.com/me/home?access_token={token} the results are different from doing the same via a FB app using the Facebook C# SDK. Specifically, what the API is not returning are feeds posted by other applications. Why could this be happening? Can't an application retrieve updates from other applications even if it has the read_stream permission? I even requested additional permissions: read_stream,user_activities,friends_activities,friends_likes,user_likes,read_requests but nothing has changed. What I need is to get ALL the stories, the same ones a user would see in his FB news feed.

    Read the article

  • Reason to use more cookies than just a session hash for authentication?

    - by dierre
    I usually hang out in a community using vBulletin as its bulletin board. I was looking at what this software saves as cookies in my browser. As you can see, it saves 6 cookies. Amongst them, the ones I consider important for authentication are: ngivbsessionhash: hash of the current session ngivbpassword: hash of the password ngivbuserid: user's id Those are my assumptions, of course. I don't know for sure if ngilastactivity and ngilastvisit are used for the same reason. My question is: why use all these cookies for authentication? My guess would be that generating just a session hash would be too easy, so using the hashed password and user id adds security - but what about cookie spoofing? I'm basically leaving all the fundamental information on the client. What do you think?

    Read the article

  • ShGetFileInfo called for directory oddity

    - by Axarydax
    Hello, I have a simple file browser in which I display files and folders, obtained by (for a directory): SHFILEINFO info = new SHFILEINFO(); SHGetFileInfo(filename, FILE_ATTRIBUTE_DIRECTORY, ref info,Marshal.SizeOf(info), SHGFI_ICON | SHGFI_USEFILEATTRIBUTES | SHGFI_SMALLICON | SHGFI_ADDOVERLAYS); It works 100% fine, but I have noticed an oddity: if I try to obtain an icon for a directory but specify FILE_ATTRIBUTE_NORMAL instead of FILE_ATTRIBUTE_DIRECTORY, it does weird stuff for directories - normal folders get the "unknown file type white paper" icon, the recycle bin gets the VLC icon, etc. Directories under SVN have the proper overlay, but the base icon of a file (white sheet of paper). I understand that the base icon for a directory would now be the one for an unknown file, but why do some folders have totally strange icons? Config.MSI has an installer icon, the recycle bin has the VLC icon (wtf?!), etc. What does the shell function do with these parameters? Exactly which icon does it obtain? Again, this is not a problem, I'm just curious.

    Read the article

  • How to hide nested form from jQuery under IE8

    - by pduel
    An HTML segment with a div containing a form: <div class="hide"> Form header <form action='' method='post'> .... form content here </form> form footer </div> <script type="text/javascript"><!--// $(document).ready(function() { $('.hide').hide(); }); //--></script> The jQuery should hide the form, but does not do so under IE8 (version 8.0.60001). The form content gets hidden, as does the content within the class='hide' div but outside the form, yet the form border continues to show and retains its size. Does anybody have a workaround for this? jQuery is version 1.4.2. I tried to create a small problem demo in jsfiddle, but that site was not functional in the IE browser.

    Read the article

  • Why was the arguments.callee.caller property deprecated in JavaScript?

    - by pcorcoran
    Why was the arguments.callee.caller property deprecated in JavaScript? It was added and then deprecated in JavaScript, but it was omitted altogether by ECMAScript. Some browsers (Mozilla, IE) have always supported it and don't have any plans on the map to remove support. Others (Safari, Opera) have adopted support for it, but support on older browsers is unreliable. Is there a good reason to put this valuable functionality in limbo? (Or alternatively, is there a better way to grab a handle on the calling function?)

    Read the article

  • After HTTP GET request, the resulting string is cut-off (incomplete)

    - by Jayomat
    Hi all, I'm making an HTTP GET request like this: try { HttpClient client = new DefaultHttpClient(); String getURL = "http://busspur02.aseag.de/bs.exe?SID=5FC39&ScreenX=1440&ScreenY=900&CMD=CR&Karten=true&DatumT="+day+"&DatumM="+month+"&DatumJ="+year+"&ZeitH="+hour+"&ZeitM="+min+"&Intervall=60&Suchen=(S)uchen&GT0=Aachen&T0=H&HT0="+start_from+"&GT1=Aachen&T0=H&HT1="+destination+""; HttpGet get = new HttpGet(getURL); HttpResponse responseGet = client.execute(get); HttpEntity resEntityGet = responseGet.getEntity(); if (resEntityGet != null) { //do something with the response Log.i("GET RESPONSE",EntityUtils.toString(resEntityGet)); } ........ It all works well... the only problem: the output from Log.i is cut off. It's not the complete HTML page. If I make the same request in a browser, I get 3x the output compared to making the request in the emulator with the above code... what's wrong?
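    One thing to rule out before blaming the request: Logcat truncates very long messages (roughly 4 kB), so the response may well be complete and only the log line cut off. A small sketch that reads the entity once and logs it in slices (the tag and chunk size are arbitrary):

    ```java
    import org.apache.http.HttpEntity;
    import org.apache.http.util.EntityUtils;
    import android.util.Log;

    public final class LogDump {
        private static final String TAG = "GET RESPONSE";
        private static final int CHUNK = 3500;   // stay under Logcat's per-line limit

        // Read the entity exactly once (the stream can't be consumed twice),
        // then log the body in slices so nothing is silently dropped.
        public static String logEntity(HttpEntity entity) throws java.io.IOException {
            String body = EntityUtils.toString(entity, "UTF-8");
            Log.i(TAG, "response length = " + body.length());
            for (int start = 0; start < body.length(); start += CHUNK) {
                Log.i(TAG, body.substring(start, Math.min(body.length(), start + CHUNK)));
            }
            return body;
        }
    }
    ```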

    Read the article
