Search Results

Search found 14294 results on 572 pages for 'browser modes'.


  • fix needed for bug in TextField/Text

    - by Mark
    Sort of a complicated scenario - just curious if anyone else could come up with something: I have a Text control, and when I scroll it and stop the scroll with the cursor over some text that has a URL, the cursor doesn't revert to a hand, and Flash Player also starts acting as if a selection is being made from the last cursor position. In other words, a bona fide bug in Flash, as far as I can determine.

    The above probably wasn't completely clear, so let me elaborate. If you grab a scrollbar thumb and start moving it up and down, you don't actually have to keep the mouse pointer on the thumb while doing so. When you stop the scroll, the mouse pointer could be outside the browser window, inside your Flash application but not currently on the scrollbar thumb, or wherever. The previously mentioned bug occurs when you stop the scroll with the mouse pointer positioned over text with an HTML anchor (a hyperlink). At that point the cursor enters some state of limbo: it doesn't show the URL hand pointer, and it furthermore acts as if a text selection is taking place from the last cursor position prior to the scroll.

    So the question would be: what sort of event could I simulate in code to jolt Flash out of this erroneous state, and in what event could I perform this simulated event (given that, for example, there is no AS3 event to signal the end of a scroll)? To be clear, the Text control in question is on a canvas, and that canvas (call it A) is on another canvas which actually owns the scrollbar; scrolling takes place by changing the scrollRect of canvas A.

    Read the article

  • Optimize SQL connection?

    - by user1484035
    I am building a multi-page web project in HTML and JavaScript that is constantly reading from AND writing to a SQL database. I can connect to the database and successfully run my project with this type of connection:

        var connection = new ActiveXObject("ADODB.Connection");
        var connectionstring = "Data Source=<server>;Initial Catalog=<catalog>;User ID=<user>;Password=<password>;Provider=SQLOLEDB";
        connection.Open(connectionstring);
        var rs = new ActiveXObject("ADODB.Recordset");
        rs.Open("SELECT * FROM table", connection);
        rs.MoveFirst;
        while (!rs.eof) {
            document.write(rs.fields(1));
            rs.movenext;
        }
        rs.close;
        connection.close;

    It works great and runs fine. BUT the first five lines (from var connection = to var rs =) cause the whole browser to freeze for a few seconds while the connection is established. I need to speed that up, since I am constantly connecting to the database throughout my project. Is there a more effective way of connecting to a SQL database, or is my computer just slow and this should run faster?
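
    For what it's worth, one way to avoid paying the connection cost on every query is to open the ADO connection once and cache it, rather than creating a new ADODB.Connection each time. The sketch below is an assumption-laden illustration (ActiveXObject only exists in Internet Explorer, and the connection string placeholders are copied from the question), not a drop-in fix:

        // Hedged sketch: cache a single ADODB connection per page instead of opening
        // a new one for every query. The declare just tells TypeScript that the
        // IE-only ActiveXObject global exists.
        declare const ActiveXObject: new (progId: string) => any;

        let cachedConnection: any = null;

        function getConnection(): any {
            if (cachedConnection === null) {
                cachedConnection = new ActiveXObject("ADODB.Connection");
                cachedConnection.Open(
                    "Data Source=<server>;Initial Catalog=<catalog>;" +
                    "User ID=<user>;Password=<password>;Provider=SQLOLEDB");
            }
            return cachedConnection;
        }

        function query(sql: string): any {
            const rs = new ActiveXObject("ADODB.Recordset");
            rs.Open(sql, getConnection());   // reuses the already-open connection
            return rs;
        }

    The longer-term answer, of course, is usually to move the queries behind a server-side endpoint so the browser never holds database credentials at all.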

    Read the article

  • Different approaches for finding users within Active Directory

    - by EvilDr
    I'm a newbie to AD programming, but after a couple of weeks of research I have found the following three ways to search for users in Active Directory using the account name as the search parameter.

    Option 1 - FindByIdentity:

        Dim ctx As New PrincipalContext(ContextType.Domain, Environment.MachineName)
        Dim u As UserPrincipal = UserPrincipal.FindByIdentity(ctx, IdentityType.SamAccountName, "MYDOMAIN\Administrator")
        If u Is Nothing Then
            Trace.Warn("No user found.")
        Else
            Trace.Warn("Name=" & u.Name)
            Trace.Warn("DisplayName=" & u.DisplayName)
            Trace.Warn("DistinguishedName=" & u.DistinguishedName)
            Trace.Warn("EmployeeId=" & u.EmployeeId)
            Trace.Warn("EmailAddress=" & u.EmailAddress)
        End If

    Option 2 - DirectorySearcher:

        Dim connPath As String = "LDAP://" & Environment.MachineName
        Dim de As New DirectoryEntry(connPath)
        Dim ds As New DirectorySearcher(de)
        ds.Filter = String.Format("(&(objectClass=user)(anr={0}))", Split(User.Identity.Name, "\")(1))
        ds.PropertiesToLoad.Add("name")
        ds.PropertiesToLoad.Add("displayName")
        ds.PropertiesToLoad.Add("distinguishedName")
        ds.PropertiesToLoad.Add("employeeId")
        ds.PropertiesToLoad.Add("mail")
        Dim src As SearchResult = ds.FindOne()
        If src Is Nothing Then
            Trace.Warn("No user found.")
        Else
            For Each propertyKey As String In src.Properties.PropertyNames
                Dim valueCollection As ResultPropertyValueCollection = src.Properties(propertyKey)
                For Each propertyValue As Object In valueCollection
                    Trace.Warn(propertyKey & "=" & propertyValue.ToString)
                Next
            Next
        End If

    Option 3 - PrincipalSearcher:

        Dim ctx2 As New PrincipalContext(ContextType.Domain, Environment.MachineName)
        Dim sp As New UserPrincipal(ctx2)
        sp.SamAccountName = "MYDOMAIN\Administrator"
        Dim s As New PrincipalSearcher
        s.QueryFilter = sp
        Dim p2 As UserPrincipal = s.FindOne()
        If p2 Is Nothing Then
            Trace.Warn("No user found.")
        Else
            Trace.Warn(p2.Name)
            Trace.Warn(p2.DisplayName)
            Trace.Warn(p2.DistinguishedName)
            Trace.Warn(p2.EmployeeId)
            Trace.Warn(p2.EmailAddress)
        End If

    All three of these methods return the same results, but I was wondering whether any particular method is better or worse than the others. Options 1 and 3 seem to be the best, as they provide strongly typed property names, but I might be wrong. My overall objective is to find a single user within AD based on the user principal value passed by the web browser when using Windows Authentication on a site (e.g. "MYDOMAIN\MyUserAccountName").

    Read the article

  • Forwarding HTTP Request with Direct Server Return

    - by Daniel Crabtree
    I have servers spread across several data centers, each storing different files. I want users to be able to access the files on all servers through a single domain and have the individual servers return the files directly to the users. The following shows a simple example:

    1) The user's browser requests http://www.example.com/files/file1.zip
    2) The request goes to server A, based on the DNS A record for example.com.
    3) Server A analyzes the request and works out that /files/file1.zip is stored on server B.
    4) Server A forwards the request to server B.
    5) Server B returns file1.zip directly to the user without going through server A.

    Note: steps 4 and 5 must be transparent to the user and cannot involve sending a redirect to the user, as that would violate the requirement of a single domain.

    From my research, what I want to achieve is called "Direct Server Return", and it is a common setup for load balancing. It is also sometimes called a half reverse proxy. For step 4, it sounds like I need to do MAC address translation and then pass the request back onto the network, and for servers outside the network of server A, tunneling will be required. For step 5, I simply need to configure server B as per the real servers in a load-balancing setup: namely, server B should have server A's IP address on the loopback interface, and it should not answer any ARP requests for that IP address.

    My problem is how to actually achieve step 4. I have found plenty of hardware and software that can do this for simple load balancing at layer 4, but these solutions fall short and cannot handle the kind of custom routing I require. It seems like I will need to roll my own solution. Ideally, I would like to do the routing/forwarding at the web server level, i.e. in PHP or C#/ASP.NET. However, I am open to doing it at a lower level such as Apache or IIS, or at an even lower level, i.e. a custom proxy service in front of everything.

    Read the article

  • Yeoman 'grunt test' fails on clean project with 'port already in use'

    - by XMLilley
    With:

    - Mac OS 10.8.4
    - Node 0.10.12
    - npm 1.3.1
    - grunt-cli 0.1.9
    - yo 1.0.0-rc.1
    - bower 0.9.2
    - [email protected]

    I encounter the following error with a clean yo angular project, followed by grunt server and then grunt test:

        Running "connect:test" (connect) task
        Fatal error: Port 9000 is already in use by another process.

    I'm new to Yeoman and am stumped. I've deleted my original project and created a new one in a fresh folder just to make sure I wasn't overlooking any invisible configs. I restarted the machine to make sure I wasn't running any temporary server processes I had forgotten about. After all attempts, the basic server starts fine, attaches to Chrome, and the watcher updates the browser on any changes. (Notably, the server is running on 9000, which seems odd for the test runner to also be trying to use 9000.) But I get that same error on attempting to start the test runner. Is this something I can fix, or an issue I should report to the Yeoman team? Thanks.
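
    For reference, a plausible workaround (a sketch, not an official Yeoman fix; the exact structure of the generated Gruntfile varies by generator version) is to give the connect:test target its own port so it cannot collide with the livereload server that grunt server starts on 9000:

        // Hypothetical excerpt of the generated Gruntfile.js: connect:test gets its
        // own port (9001 here is an arbitrary free port) instead of reusing 9000.
        connect: {
            options: {
                port: 9000,
                hostname: 'localhost'
            },
            test: {
                options: {
                    port: 9001   // was colliding with the running `grunt server` on 9000
                }
            }
        }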

    Read the article

  • Avoid slowdowns while using off-site database

    - by Anders Holmström
    The basic layout of my problem is this:

    - Website (ASP.NET/C#) hosted at a dedicated hosting company (location 1).
    - Company database (SQL Server) with records of relevant data (location 2).
    - Locations 1 and 2 connected through VPN.
    - Customers visiting the website want to pull data from the company database.
    - No possibility of changing the server locations or layout (i.e. moving the website to an in-office server isn't possible).

    What I want to do is figure out the best way to handle the data access in this case, minimizing the need for time-expensive database calls over the VPN. The first idea I have is this: when a user enters the section of the website needing the DB data, you pull all the needed tables from the database into an in-memory dataset. All subsequent views/updates to the data are done on this dataset. When the user leaves (logout, session timeout, browser closed, etc.) the dataset gets sent to the SQL server.

    I'm not sure if this is a realistic solution, and it obviously has some problems. If two web visitors are performing updates on the same data, the one finishing last will have their changes overwrite the first one's. There's also no way of knowing you have the latest data (i.e. if a customer pulls some info on their projects and we update this info while they are viewing it, they won't see these changes, plus the above overwriting issue will arise). The other solution would be to somehow aggregate database calls and make sure they only happen when you need them, e.g. during data updates but not during data views. But then again, the longer the pause between these refreshing DB calls, the bigger the chance that the data view is out of date, as per the problem described above. Any input on the above or some fresh ideas would be most welcome.

    Read the article

  • xsl defining in xml

    - by aditya parikh
    My first few lines in movies.xml are as follows:

        <?xml version="1.0" encoding="ISO-8859-1"?>
        <?xml-stylesheet type="text/xsl" href="movies_style.xsl"?>
        <movies xmlns="http://www.w3schools.com"
                xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                xsi:schemaLocation="http://www.w3schools.com file:///B:/USC/Academic/DBMS/HWS/no3/movie_sch.xsd">

    and the first few lines in movies_style.xsl are as follows:

        <?xml version="1.0" encoding="UTF-8"?>
        <xsl:stylesheet version="1.0"
                        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                        xmlns:fo="http://www.w3.org/1999/XSL/Format">

    The problem is that if I remove the schema file linking from movies.xml and keep the tag only as <movies>, then a properly styled table is shown as output; otherwise nothing is displayed in the browser, and this error appears in the console: "Unsafe attempt to load URL file:///B:/USC/Academic/DBMS/HWS/no3/movies_style.xsl from frame with URL file:///B:/USC/Academic/DBMS/HWS/no3/movies.xml. Domains, protocols and ports must match." It looks like some namespace mistake. Can anyone point out exactly what?

    Read the article

  • GWT with JPA - no persistence provider...

    - by meliniak
    There are two projects in my Eclipse workspace; let's name them:

    - JPAProject
    - GWTProject

    JPAProject contains the JPA configuration stuff (persistence.xml, entity classes and so on). GWTProject is an example GWT project (taken from the official GWT tutorial). Both projects work fine alone. That is, I can create an EMF (EntityManagerFactory) in JPAProject and get entities from the database. GWTProject works fine too: I can run it, fill in the text field in the browser, and get the response. My goal is to call JPAProject from GWTProject to get entities. But the problem is that when calling the DAO, I get the following exception:

        [WARN] Server class 'com.emergit.service.dao.profile.ProfileDaoService' could not be found in the web app, but was found on the system classpath
        [WARN] Adding classpath entry 'file:/home/maliniak/workspace/emergit/build/classes/' to the web app classpath for this session
        [WARN] /gwttest/greet
        javax.persistence.PersistenceException: No Persistence provider for EntityManager named emergitPU
            at javax.persistence.Persistence.createEntityManagerFactory(Unknown Source)
            at javax.persistence.Persistence.createEntityManagerFactory(Unknown Source)
            at com.emergit.service.dao.profile.JpaProfileDaoService.<init>(JpaProfileDaoService.java:19)
            at pl.maliniak.server.GreetingServiceImpl.<init>(GreetingServiceImpl.java:21)
            ...
            at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:380)
            at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:395)
            at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:488)
        [ERROR] 500 - POST /gwttest/greet (127.0.0.1) 3812 bytes

    I guess the warnings at the beginning can be ignored for now. Do you have any ideas? I guess I am missing some basic point. All hints are highly appreciated.

    Read the article

  • Wasteful Ajax Page Loading

    - by Matt Dawdy
    I've started a new job, and the portion of the project I'm working on has a very odd structure. Every page is a .NET .aspx page, and it loads just fine, but nothing is really done at load time. Everything is really loaded from a jQuery document-ready handler. What is even more... interesting... is that the ready handler makes some Ajax calls that drop entire .aspx pages into divs on the page, but first it strips out several parts of the returned page. This is the "magic" script the previous programmer ran on all the HTML returned from his Ajax calls:

        function CleanupResponseText(responseText, uniqueName) {
            responseText = responseText.replace("theForm.submit();", "SubmitSubForm(theForm, $(theForm).parent());");
            responseText = responseText.replace(new RegExp("theForm", "g"), uniqueName);
            responseText = responseText.replace(new RegExp("doPostBack", "g"), "doPostBack" + uniqueName);
            return responseText;
        }

    He then intercepts any kind of form postback and runs his own form submission function:

        function SubmitSubForm(form, container) {
            //ShowLoading(container);
            $(form).ajaxSubmit({
                url: $(form).attr("action"),
                success: function (responseText) {
                    $(container).html(CleanupResponseText(responseText, form.id));
                    $("form", container).css("margin-top", "0").css("padding-top", "0");
                    //HideLoading(container);
                }
            });
        }

    Am I way off base in thinking that this is less than optimal? I mean, how does a browser take out the html and head and other tags that don't have anything to do with what you are really trying to drop into that div? Also, he's returning things like asp:GridView controls and the associated viewstate, which can be quite large if his dataset is big. Has anyone seen this before?
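
    For comparison, a minimal sketch of the more conventional pattern the question hints at (the /partials/orders URL and element id are assumptions): have the server return only the fragment to be displayed, so nothing has to be stripped out on the client:

        // Hedged sketch: load a server-rendered fragment straight into a container.
        // The endpoint is hypothetical; the point is that it returns only the panel's
        // markup, not a complete page with <html>, <head> and viewstate to clean up.
        async function loadFragment(container: HTMLElement, url: string): Promise<void> {
            const response = await fetch(url);
            if (!response.ok) {
                throw new Error("Fragment request failed: " + response.status);
            }
            container.innerHTML = await response.text();
        }

        // Usage: loadFragment(document.getElementById("orders-panel")!, "/partials/orders");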

    Read the article

  • Why doesn't this external JavaScript load?

    - by Ben Herila
    I've been futzing with this for hours trying to figure out why codemirror.js won't load in any browser other than Firefox. Any ideas?

    index.html:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
        <head>
            <title></title>
            <script src="CodeMirror/js/codemirror.js"></script>
            <link href="Styles/Style.css" rel="stylesheet" type="text/css" />
            <link href="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8/themes/base/jquery-ui.css" rel="stylesheet" type="text/css" />
            <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js"></script>
            <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8/jquery-ui.min.js"></script>
            <script type="text/javascript">
                $(document).ready(function () {
                    $('#container-1').tabs();
                });
            </script>
            <style type="text/css">
                /* (some css) */
            </style>
        </head>
        <body>
            <!-- (some stuff) -->
        </body>
        </html>

    CodeMirror/js/codemirror.js:

        alert("LOL");
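
    One way to narrow this down (a diagnostic sketch, assuming the path is relative to index.html): load the file programmatically with load/error handlers, which tells you whether the browser ever fetched it at all:

        // Hedged diagnostic sketch: report whether CodeMirror/js/codemirror.js is
        // actually retrieved. An "error" event points at a path or letter-case
        // problem on a case-sensitive server; a successful load that still does
        // nothing points at a parse or execution problem instead.
        const probe = document.createElement("script");
        probe.src = "CodeMirror/js/codemirror.js";
        probe.onload = () => console.log("codemirror.js loaded");
        probe.onerror = () => console.error("codemirror.js failed to load - check the path and case");
        document.head.appendChild(probe);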

    Read the article

  • HTML5 audio with PHP script does not work on iPad/Iphone

    - by saulob
    OK, I'm trying to play audio with an HTML5 audio element on the iPad, but it does not work. I created a PHP script to serve the MP3 requested by the HTML5 audio element: mp3_file_player.php?n=mp3file.mp3. The player is here: http://www.avault.com/news/podcast-news/john-romero-podcast-episode-80/ You will see that it works in every HTML5-capable browser, even on my iPod Touch. But it does not work on iPad/iPhone, or in Safari on Mac OS X (I tried Safari on Windows, and that worked fine). This is my PHP code:

        header("X-Powered-By: ");
        header("Accept-Ranges: bytes");
        header("Content-Length: " . (string)(filesize($episode_filename)) . "");
        header("Content-type: audio/mpeg");
        readfile($episode_filename);
        exit();

    Everything works fine; the MP3 is served with the same headers as when reading the MP3 directly.

    HTTP headers from direct file access:

        (Status-Line)   HTTP/1.1 200 OK
        Date            Mon, 31 May 2010 20:27:31 GMT
        Server          Apache/2.2.9
        Last-Modified   Wed, 26 May 2010 13:39:19 GMT
        Etag            "dac0039-41d91f8-4877f669cefc0"
        Accept-Ranges   bytes
        Content-Length  50656162
        Content-Range   bytes 18390614-69046775/69046776
        Keep-Alive      timeout=15, max=100
        Connection      Keep-Alive
        Content-Type    audio/mpeg

    HTTP headers from my PHP script:

        (Status-Line)   HTTP/1.1 200 OK
        Date            Mon, 31 May 2010 20:27:08 GMT
        Server          Apache/2.2.9
        Accept-Ranges   bytes
        Content-Length  69046776
        Keep-Alive      timeout=15, max=100
        Connection      Keep-Alive
        Content-Type    audio/mpeg

    The only thing different is the Content-Range. I even tried to add it, but if I use it the player does not work on my iPod Touch, so I removed it. Thank you very much.

    Read the article

  • .NET MVC: How to fix Visual Studio's lack of awareness of CSS classes in partial views?

    - by Mega Matt
    Hi all, This has been sort of an annoyance for me for a while. I make pretty heavy use of partial views in MVC, and am using Visual Studio 2008 to develop. The problem is that when I give html elements a class in a partial view (<div class="someClass">), it will underline them in green like it doesn't know what they are. I realize this is because I'm in a partial view, and haven't put link tags anywhere in that file for it to know where the CSS is (the link tags are in the main view that renders the partial view). The CSS still works fine on my site because the browser will render all views as one long html page anyway, but it's really annoying to look through my partial views and see all of my classes underlined in green. Is there a way that I can still tell Visual Studio that those classes exist somewhere, from the partial view? I figured there has to be a way to let it know, but am not sure what it is. Maybe a way to import the stylesheets from the parent view? Thanks for your help.

    Read the article

  • Rotating png images using css in IE

    - by Ernest Shulikovski
    Here is a mockup for something called "Diversity Disc": http://diversity.iest.pl/ It is just three disks that you can rotate and read "results". I am using just four PNG images there, and rotate them using jQuery, for instance:

        $.fn.rleft = function() {
            return this.animate({ rotate: '-=45deg' });
        };

    It works reasonably well in most modern browsers. But in all versions of IE things go terribly wrong. There is a problem with the rotation, and with the PNGs: after rotation, something very ugly happens to the alpha transparency of those images. So my question is: is it possible to make this work in IE 8 and 7 (and, more or less, IE 6)? If not, I will be forced to commission it in Flash. But I would first like to try to do it using just CSS and JavaScript (SVG?). So what am I doing wrong? Do you have any tips for using a different technology or JS library? Thank you in advance for any answers.
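
    For reference, a hedged sketch of the usual IE 6-8 fallback: those browsers have no CSS rotation, but the proprietary DXImageTransform.Microsoft.Matrix filter can approximate it. Note that this filter family is itself a common cause of broken PNG alpha, so treat this as a starting point rather than a fix; the element id in the usage line is hypothetical.

        // Hedged sketch: rotate an element in IE 6-8 with the proprietary Matrix filter.
        function rotateForIE(el: HTMLElement, degrees: number): void {
            const rad = degrees * Math.PI / 180;
            const cos = Math.cos(rad);
            const sin = Math.sin(rad);
            (el.style as any).filter =
                "progid:DXImageTransform.Microsoft.Matrix(" +
                "M11=" + cos + ", M12=" + (-sin) + ", M21=" + sin + ", M22=" + cos +
                ", sizingMethod='auto expand')";
        }

        // Usage: rotateForIE(document.getElementById("disc1") as HTMLElement, -45);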

    Read the article

  • Increment the number of times an article has been read

    - by r.sendecky
    I have a situation where I need to increase the number of times an article has been read. Once someone opens an article, this should be reflected in the database by incrementing the number of reads by one. Simple. Sending a POST request to the server increments the number of reads by one. The article in question is supplied via a URL parameter. Doing it manually by typing the URL in a browser works as expected, so the server side is not at fault. My problems start with the JavaScript side of it, or rather jQuery. I hook the event to the article link, so every time a user clicks on the article link it increments the number of reads, like so:

        $('#list-articles .article-link').click(function(e){
            var oid = $(this).parent().parent().attr('data-oid').toString(); // get the article id
            $.post("/articles/viewed/" + oid);
        });

    Now this does not work! The number is not increased. I don't prevent the default action, since I need the link to actually open and display the article. Now if I put an alert right after the post, like this:

        $('#list-articles .article-link').click(function(e){
            var oid = $(this).parent().parent().attr('data-oid').toString(); // get the article id
            $.post("/articles/viewed/" + oid);
            alert(oid);
        });

    this variant works. After I dismiss the alert window, the number is incremented. Why is this so? How can I fix this so it actually works without the alert present?
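
    A hedged explanation: following the link unloads the page, which can cancel the in-flight POST; the alert merely delays navigation long enough for the request to reach the server. A sketch of one workaround along those lines (jQuery assumed loaded, selectors copied from the question, and .always requires jQuery 1.6 or later; a modern alternative would be navigator.sendBeacon, which is designed to survive navigation):

        // Hedged sketch: hold navigation until the counter request has completed,
        // then follow the link manually.
        declare const $: any;   // jQuery, as used in the question

        $('#list-articles .article-link').click(function (this: HTMLAnchorElement, e: Event) {
            e.preventDefault();                      // keep the page alive for now
            const href = this.href;
            const oid = $(this).parent().parent().attr('data-oid');
            $.post('/articles/viewed/' + oid).always(function () {
                window.location.href = href;         // navigate once the POST has finished
            });
        });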

    Read the article

  • Creating collaborative whiteboard drawing application

    - by Steven Sproat
    I have my own drawing program in place, with a variety of "drawing tools" such as Pen, Eraser, Rectangle, Circle, Select, Text etc. It's made with Python and wxPython. Each tool mentioned above is a class, and they all have polymorphic methods, such as left_down(), mouse_motion(), hit_test() etc. The program manages a list of all drawn shapes: when a user has drawn a shape, it's added to the list. This is used to manage undo/redo operations too. So, I have a decent codebase that I can hook collaborative drawing into. Each shape could be changed to know its owner (the user who drew it) and to only allow delete/move/rescale operations to be performed on shapes owned by one person. I'm just wondering the best way to develop this. One person in the "session" will have to act as the server; I have no money to offer free central servers. Somehow users will need a way to connect to servers, meaning some kind of "discover servers" browser... or something. How do I broadcast changes made to the application? Drawing in real time and broadcasting a message on each mouse-motion event would be costly in terms of performance, and things get worse the more users there are at a given time. Any ideas are welcome; I'm not too sure where to begin with developing this (or even how to test it).

    Read the article

  • I'm confused, how do I control cache so my clients can see website edits.

    - by Jared Christensen
    I host about 10 websites for clients. Every so often a client will ask for an update to their website. It may be a simple image change, a new PDF or a simple text change. I make the change and then send them a link to the web page with the update. About an hour later I will get an email back from the client telling me they still see the old page. I then explain to them how to empty their browser's cache. What I'm trying to figure out is whether there is a way I can tell their browser that I made an update to the website and that it should reload the page and update the cache. I thought about trying a meta tag, but I read that they are not very reliable. Also, I would still like the page to cache; I just want to be able to clear it when I make an update. Is this possible? I'm an advanced front-end web developer (HTML, CSS, JavaScript) and know some PHP. Caching is just one of those things I don't really understand that well.
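
    A common pattern that fits this situation (a sketch, not the only answer; the version constant and the data-src convention below are assumptions) is to change an asset's URL whenever the asset changes, so browsers keep caching aggressively but fetch a fresh copy the moment the version token is bumped:

        // Hedged sketch: append a version token to asset URLs. Unchanged assets stay
        // cached; bumping ASSET_VERSION after an edit makes browsers treat the asset
        // as new and download it again.
        const ASSET_VERSION = "2013-06-01";   // bump on each published update (value is arbitrary)

        function versioned(url: string): string {
            return url + (url.indexOf("?") >= 0 ? "&" : "?") + "v=" + ASSET_VERSION;
        }

        // Example: images declared with a hypothetical data-src attribute.
        document.querySelectorAll<HTMLImageElement>("img[data-src]").forEach(img => {
            img.src = versioned(img.getAttribute("data-src") || "");
        });

    For the HTML page itself, the usual companion is a short max-age or must-revalidate Cache-Control header on the server, so page edits show up quickly while images, CSS and scripts keep long cache lifetimes.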

    Read the article

  • Simple problem with mod_rewrite in the Fat Free Framework

    - by ian
    I am trying to set up and learn the Fat Free Framework for PHP: http://fatfree.sourceforge.net/ It is fairly simple to set up, and I am running it on my machine using MAMP. I was able to get the 'hello world' example running just fine:

        require_once 'path/to/F3.php';

        F3::route('GET /','home');
        function home() {
            echo 'Hello, world!';
        }
        F3::run();

    But when I try to add in the second part, which has two routes:

        require_once 'F3/F3.php';

        F3::route('GET /','home');
        function home() {
            echo 'Hello, world!';
        }

        F3::route('GET /about','about');
        function about() {
            echo 'About Us.';
        }

        F3::run();

    I get a 404 error if I try the second URL: /about. I'm not sure why one of the mod_rewrite commands would be working and not the other. Below is my .htaccess file:

        # Enable rewrite engine and route requests to framework
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-l
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule .* index.php [L,QSA]

        # Disable ETags
        Header Unset ETag
        FileETag none

        # Default expires header if none specified (stay in browser cache for 7 days)
        <IfModule mod_expires.c>
            ExpiresActive On
            ExpiresDefault A604800
        </IfModule>

    Read the article

  • jQuery Manual Resizable DIV

    - by JC
    Hi, I'm trying to create a resizable div without using jQuery's interface (UI) library.

        var myY = 0;
        var mouseDown = false;
        var originalHeight = 0;

        function resize(e) {
            if (mouseDown == true) {
                $("#cooldiv").height(originalHeight + e.pageY - myY);
            }
        }

        $(document).ready(function() {
            $().mouseup(function(e) {
                myY = 0;
                mouseDown = false;
                originalHeight = 0;
                $().unbind("mousemove", resize);
            });

            $("#resizeBar").mousedown(function(e) {
                myY = e.pageY;
                originalHeight = $("#cooldiv").height();
                mouseDown = true;
                $().bind("mousemove", resize);
            });
        });

        ...

        <div id="cooldiv" style="width: 500px; height: 300px; background-color: #cccccc; position: relative;">
            <div id="resizeBar" style="height: 10px; width: 500px; background-color: #aaaaaa; position: absolute; bottom: 0;"></div>
        </div>

    The first resize works fine (i.e. mousedown, mousemove, then mouseup), but on subsequent (mousedown + mousemove)s, the browser attempts to drag the whole resizeBar div instead of properly resizing its parent container. On mouseup, the div then starts resizing "cooldiv" on mousemove without any mousedown required, until a further click of the mouse. These problems don't show up in Internet Explorer.
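
    A hedged guess at the cause, with a sketch of the usual fix: the mouseup/mousemove handlers are bound to an empty jQuery set ($() with no argument) rather than to the document, and the mousedown's default behaviour is not prevented, so the browser is free to start a native drag/selection on later attempts. Something along these lines (selectors copied from the question, jQuery assumed loaded):

        // Hedged sketch: bind the move/up handlers on the document, and prevent the
        // default mousedown behaviour so the browser cannot begin a native drag.
        declare const $: any;   // jQuery, as in the question

        let startY = 0;
        let startHeight = 0;

        function resize(e: any): void {
            $("#cooldiv").height(startHeight + e.pageY - startY);
        }

        $("#resizeBar").mousedown(function (e: any) {
            e.preventDefault();                       // stop native drag/text selection
            startY = e.pageY;
            startHeight = $("#cooldiv").height();
            $(document).bind("mousemove", resize);    // bind on the document, not on $()
        });

        $(document).mouseup(function () {
            $(document).unbind("mousemove", resize);  // always release the handler
        });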

    Read the article

  • How to provide a temporary URL for custom domain in Wordpress multisite install?

    - by Milan Babuškov
    I have a website with a WordPress 3.0.4 installation, set up as a multisite install. Some users register their blogs as something.mydomain.com, and that works automatically. However, some users prefer to use their own domain names, like something.com. This also works fine once they set up the CNAME record to point to my server. However, it takes 24-48 hours for that change to take effect. I'd like to be able to offer the user a temporary URL that works out of the box until the DNS changes are propagated, but I have no idea how to do it. For example, something.com should also be accessible as something.tempdomain.com. I have control over the tempdomain DNS setup. I thought about replacing $_SERVER variables in index.php or the .htaccess file when the temporary domain is accessed, and this works for the first page load. However, all the links in the generated page point to the original domain, which is not yet ready.

    UPDATE: I managed to get it working for the site itself by manipulating $_SERVER variables so WordPress thinks it's creating a page for a different site. I did this in index.php, before any WP code runs; I'm using ob_start, and ob_get_contents later, to get the page generated by WordPress and then str_replace the links back to the temporary domain. The problem I still have is the admin page. Even though the link says http://site1.tempdomain.com/wp-admin, when opened in a browser it redirects to maindomain.com/wp-signup.php?new=site1.tempdomain. I don't understand how WP detects that I supplied a "fake" domain when the $_SERVER vars are changed?

    Read the article

  • What do you call the highlight as you hover over an OPTION in a SELECT, and how can I "un"highlight

    - by devils-avacado
    I'm working on dropdowns in IE6 (a SELECT with OPTIONs as children). I'm experiencing the problem where, in IE6, dropdown menus are truncated to the width of the SELECT element. I used this jQuery to fix it:

        var css_expanded = {
            'width'    : 'auto',
            'position' : 'absolute',
            'left'     : '564px',
            'top'      : '393px'
        };

        var css_off = {
            'width'    : '190px',
            'position' : 'relative',
            'left'     : '0',
            'top'      : '0'
        };

        if ($.browser.msie) {
            $('#dropdown')
                .bind('mouseover', function() {
                    $(this).css(css_expanded);
                })
                .bind('click', function() {
                    $(this).toggleClass('clicked');
                })
                .bind('mouseout', function() {
                    if (!$(this).hasClass('clicked')) {
                        $(this).css(css_off);
                    }
                })
                .bind('change blur focusout', function() {
                    $(this).removeClass('clicked')
                           .css(css_off);
                });
        }

    It works fine in IE7 and IE8, but in IE6, if the user clicks on a SELECT and does not click any OPTION, instead clicking somewhere else on the page outside the SELECT (blur), the SELECT still has a "highlight", the same as if they were hovering over an OPTION with the mouse (in my case, a blue color), and the width is not restored until the user clicks outside the SELECT a second time. In other browsers, blur causes every OPTION to lose its highlight. Using JS, how can I de-highlight an option after the dropdown has lost focus?

    Read the article

  • Cross domain javascript to access localhost. Possible?

    - by Earlz
    Hello, for one reason or another I need JavaScript to access a web server on localhost. This localhost web server is under our control, so we can have whatever software running on it. How would you do this? I've seen things like YQL, but that accesses another domain out on the internet, and this kind of access causes a lot of problems with firewalls and such. So I want to access the same computer that the browser is running on. How would you do this with JavaScript and whatever software is running on the localhost server? Also, the JavaScript is being run from an internet site, and the localhost server will not be running on the same port as the internet website. Is this possible to do? I know about the cross-domain restrictions, but I've also seen there are ways around them, such as YQL. How does something like YQL work? How would you reimplement it?
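
    For context, services such as YQL work by having their own server fetch the remote resource and hand it back as JSONP, which sidesteps the browser's same-origin policy. Since the localhost service here is under the asker's control, one hedged sketch (the port 8123 and the /status path are assumptions) is to have it answer with CORS headers so a page served from the internet domain can call it directly:

        // Hedged sketch: a tiny Node-style HTTP service on localhost that permits
        // cross-origin calls from the internet site by sending CORS headers.
        import * as http from "http";

        http.createServer((req, res) => {
            res.writeHead(200, {
                "Content-Type": "application/json",
                // In practice, restrict this to the calling site's origin instead of "*".
                "Access-Control-Allow-Origin": "*"
            });
            res.end(JSON.stringify({ ok: true }));
        }).listen(8123, "127.0.0.1");

        // From the page served by the internet site:
        //   fetch("http://127.0.0.1:8123/status").then(r => r.json()).then(console.log);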

    Read the article

  • How can I save an ASP.NET IFRAME from a custom entity's OnSave event, reliably?

    - by Forgotten Semicolon
    I have a custom ASP.NET solution deployed to the ISV directory of an MS Dynamics CRM 4.0 application. We have created a custom entity whose data entry requires more dynamism than is possible through the form builder within CRM. To accomplish this, we have created an ASP.NET form surfaced through an IFRAME on the entity's main form. Here is how the saving functionality is currently laid out:

    - There is an ASP.NET button control on the page that saves the entity when clicked. This button is hidden by CSS.
    - The button click event is triggered from the CRM OnSave JavaScript event.
    - The event fires and the entity is saved.

    Here are the issues:

    1. When the app pool is recycled, the first save (or first few) may:
       1.1 not save
       1.2 be delayed (i.e. the interface doesn't show an update until the page has been refreshed after a few seconds)
    2. Save and Close/New may not save.

    For issues 1.1 and 2, what seems to be happening is that while the save event is fired for the custom ASP.NET page, the browser has moved on/refreshed the page, invalidating the request to the server. This results in the save not actually completing. Right now, this is mitigated with a kludge: a JavaScript delay loop that runs for a few seconds after calling the button save event, inside the entity OnSave event. Is there any way to have the OnSave event wait for a callback from the IFRAME?
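
    One hedged idea, offered as a sketch rather than a known CRM 4.0 recipe (every name below is hypothetical): have the page inside the IFRAME expose a save function that posts synchronously, and call it from OnSave, so the handler does not return until the child page has persisted its data:

        // Hedged sketch: a synchronous save exposed by the IFRAME's page. A
        // synchronous XMLHttpRequest blocks the OnSave handler until the server
        // responds, at the cost of briefly freezing the UI.
        function collectFormData(): string {
            // Hypothetical: serialize whatever fields the custom page owns.
            const notes = (document.getElementById("notesField") as HTMLInputElement | null)?.value ?? "";
            return "notes=" + encodeURIComponent(notes);
        }

        function saveEntityDetails(): boolean {
            const xhr = new XMLHttpRequest();
            xhr.open("POST", "CustomEntityForm.aspx?action=save", false);   // false = synchronous
            xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
            xhr.send(collectFormData());
            return xhr.status === 200;
        }

        // From the entity's OnSave handler (also a sketch):
        //   const frame = window.frames["IFRAME_CustomForm"] as any;
        //   if (!frame.saveEntityDetails()) { event.returnValue = false; }   // cancel the CRM save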

    Read the article

  • Rails: Obfuscating Image URLs on Amazon S3? (security concern)

    - by neezer
    To make a long explanation short, suffice it to say that my Rails app allows users to upload images to the app that they will want to keep in the app (meaning, no hotlinking). So I'm trying to come up with a way to obfuscate the image URLs so that the address of the image depends on whether or not that user is logged in to the site, so that if anyone tried hotlinking to the image, they would get a 401 access denied error. I was thinking that if I could route the request through a controller, I could reuse a lot of the authorization I've already built into my app, but I'm stuck there. What I'd like is for my images to be accessible through a URL to one of my controllers, like:

        http://railsapp.com/images/obfuscated?member_id=1234&pic_id=7890

    If the user were to right-click on the image displayed on the website, select "Copy Address", and then paste it in, it would be the SAME URL (as in, it wouldn't betray where the image is actually hosted). The actual image would live at a URL like this:

        http://s3.amazonaws.com/s3username/assets/member_id/pic_id.extension

    Is this possible to accomplish? Perhaps using Rails' render method? Or something else? I know it's possible for PHP to return the correct headers to make the browser think it's an image, but I don't know how to do this in Rails...

    UPDATE: I want all users of the app to be able to view the images if and ONLY if they are currently logged on to the site. If the user does not have a currently active session on the site, accessing the images directly should yield a generic image, or an error message.

    Read the article

  • How To Call JavaScript In An Ajax Response? I.e.: Close a form div upon success...

    - by B.Gordon
    I have a form that, when you submit it, sends the data for validation to another PHP script via Ajax. Validation errors are echoed back into a div in my form. A success message is also returned if validation passes. The problem is that the form is still displayed after submitting and passing validation. I want to hide the div after success. So, I wrote this simple CSS method, which works fine when called from the page the form is displayed on. The problem is that I cannot seem to call the hide script via the returned code. I can return HTML like:

        echo "<p>Thanks, your form passed validation and is being sent</p>";

    So I assumed I could simply echo another line after that,

        echo "window.onload=displayDiv()";

    inside script tags (which I cannot get to display here), and that it would hide the form div. It does not work. I am assuming that the problem is that the JavaScript is being returned incorrectly and not being interpreted by the browser. How can I invoke my 'hide' script on the page via data returned from my validation script? I can echo back text, but the script call is ineffective. Thanks!

    This is the script on the page with the form. I can call it to show/hide with something like onclick="displayDiv()" while on the form, but I don't want the user to invoke this; it has to be called as the result of a successful validation when I write the results back to the div:

        function displayDiv() {
            var divstyle = new String();
            divstyle = document.getElementById("myForm").style.display;
            if (divstyle.toLowerCase() == "block" || divstyle == "") {
                document.getElementById("myForm").style.display = "none";
            } else {
                document.getElementById("myForm").style.display = "block";
            }
        }

    PS: I am using the mootools.js library for the form validation, if this matters for the syntax.
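
    One hedged way around this, shown with the browser's fetch API rather than MooTools purely as an illustration (the "passed validation" marker is taken from the success message echoed above; a JSON flag would be more robust): decide on the client, in the Ajax success handler, whether validation passed, and hide the form there instead of echoing a script tag from PHP, since scripts injected via innerHTML are generally not executed.

        // Hedged sketch: submit the form, show the server's message, and hide the
        // form client-side when the response signals success.
        async function submitForValidation(form: HTMLFormElement, resultDiv: HTMLElement): Promise<void> {
            const response = await fetch(form.action, {
                method: "POST",
                body: new FormData(form)
            });
            const text = await response.text();
            resultDiv.innerHTML = text;                         // validation errors or success message
            if (text.indexOf("passed validation") !== -1) {     // success marker from the PHP script
                document.getElementById("myForm")!.style.display = "none";   // same effect as displayDiv()
            }
        }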

    Read the article

  • S.redirectTo leads always to a blank screen

    - by Jaime Ocampo
    I am now playing a little bit with Lift (2.8.0), and all the features in LiftRules work as intended. But I haven't been able to use S.redirectTo at all. I always end up with a blank screen, no matter what. No error messages at all! As an example, I have the following form:

        ...
        <lift:logIn.logInForm form="post">
            <p><login:name /></p>
            <p><login:password /></p>
            <p><login:submit /></p>
        </lift:logIn.logInForm>
        ...

    And the code is:

        object LogIn extends helper.LogHelper {
          ...
          def logInForm(in: NodeSeq): NodeSeq = {
            var name = ""
            var password = ""

            def login() = {
              logger.info("name: " + name)
              logger.info("password: " + password)
              if (name == "test1") S.redirectTo("/example")
              if (name == "test2") S.redirectTo("/example.html")
              if (name == "test3") S.redirectTo("example.html")
              S.redirectTo("/")
            }

            bind("login", in,
              "name" -> SHtml.text(name, name = _),
              "password" -> SHtml.password(password, password = _),
              "submit" -> SHtml.submit("Login", login))
          }
        }

    The method 'login' is invoked; I can check that in the log output. But as I said, no matter which name I enter, I always end up with a blank screen, although 'example.html' is available when accessed directly in the browser. How should I invoke S.redirectTo in order to navigate to 'example.html'? Also, why don't I receive an error message (I am logging at debug level)? I think all the configuration in Boot is correct, since all the LiftRules examples (statelessRewrite, dispatch, viewDispatch, snippets) work fine.

    Read the article
