Search Results

Search found 22308 results on 893 pages for 'floating point'.

Page 540/893

  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World
    In the previous post we discussed a “Hello World” example for OSCH, focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle’s DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post, have some end-to-end OSCH external tables working, and now want to ramp up the size of the loads.
    Using OSCH External Tables for Access and Loading
    OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL: SELECT * FROM my_hdfs_external_table; or to load a table in Oracle using the same SQL access: INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table; To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints: ALTER SESSION FORCE PARALLEL DML PARALLEL 8; ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8; INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table SELECT * FROM my_hdfs_external_table; There are various ways of hinting at what level of DOP you want to use. The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively you could embed additional parallel hints directly into the INSERT and SELECT clause respectively: /*+ parallel(my_oracle_table,8) */ and /*+ parallel(my_hdfs_external_table,8) */. Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead. The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table.
    Determine Your DOP
    It might feel natural to build your datasets in Hadoop and then figure out how to tune the OSCH external table definition afterwards, but you should start backwards. You should focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control. Oracle computes the maximum DOP that can be used by an Oracle user.
    The maximum value that can be assigned is an integer value typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances, and suppose that each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using the system maximum DOP can consume all system resources on Oracle and starve anything else that is executing. Obviously on a production system where resources need to be shared 24x7, this can’t be allowed to happen. The use cases for running OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be when you are first seeding tables in a new Oracle database, or when normal activity in the production database can safely be taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high-end machines (specifically Oracle Exadata and Oracle BDA cabled with Infiniband), this mode of operation can load up to 15TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible.
    Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system.
    Determining the Number of Location Files
    Let’s assume that the DBA told you that your maximum DOP was 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember that location files in OSCH are metadata lists of HDFS files and are created using OSCH’s External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of that location file).
    Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists.
    For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH’s External Table tool.
    Rule 3: The number of location files chosen should be a small multiple of the DOP.
    Each location file represents one workload for one PQ slave, so the goal is to keep all slaves busy and to give them equivalent workloads. Obviously if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do; the other three will have nothing to do and will quietly exit. If you run with 9 location files, the PQ slaves will pick up the first 8 location files and, assuming they have equal workloads, will finish up at about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for this DOP, using 8, 16, or 32 location files would be a good idea.
    Determining the Number of HDFS Files
    Let’s start with the next rule and then explain it:
    Rule 4: The number of HDFS files should be a multiple of the number of location files, and the files should be roughly the same size.
    In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load. It will generate N location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts it in some other location file, and so on.) The tool's ability to balance is reduced if the HDFS file sizes are grossly out of balance or there are too few files. For example, suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load it will be forced to put the singleton 900GB file into one location file, and put each of the 100GB files in the 7 remaining location files. The load balance skew is 9 to 1. One PQ slave will be working overtime, while the slacker PQ slaves are off enjoying happy hour. If however the total payload (1600 GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where the workload for each location file is relatively the same. Applying Rule 4 above to our DOP of 8, we could divide the workload into 160 files that were approximately 10 GB in size. For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file). As a rule, when the OSCH External Table tool has to deal with more and smaller files, it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with Infiniband), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files, where each HDFS file was about 8GB. I then did the same load with 12800 files, where each HDFS file was about 80MB in size. The end-to-end load time was virtually the same. However when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time. What happens if you break rules 3 or 4 above? Nothing draconian; everything will still function. You just won’t be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files of roughly the same size, derived from the DOP that you expect to use for loading.
    Next Steps
    So far we have talked about OLH and OSCH as alternative models for loading. That’s not quite the whole story.
    They can be used together in a way that provides for more efficient OSCH loads and allows one to be more flexible about scheduling load operations on a Hadoop cluster and an Oracle database. The next lesson will talk about Oracle Data Pump files generated by OLH and loaded using OSCH. It will also outline the pros and cons of the various load methods. This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.
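    Pulling the pieces of the load recipe above into one place, a minimal sketch might look like the following. The table names are the post's own placeholders, the column list is elided, and the DOP of 8 is just the running example's value, not a recommendation:

        -- Target table defined for parallel, direct-path loading
        CREATE TABLE my_oracle_table ( ... )   -- columns match the OSCH external table
          NOLOGGING
          PARALLEL;

        -- Assert the DOP for this session (requires the DBA to allow it)
        ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
        ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;

        -- Direct-path, parallel load from the OSCH external table
        INSERT /*+ append pq_distribute(my_oracle_table, none) */
          INTO my_oracle_table
          SELECT * FROM my_hdfs_external_table;

        COMMIT;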

    Read the article

  • How to solve JavaScript origin problem with an application and static file server

    - by recipriversexclusion
    In a system that I'm building I want to serve static files (static HTML pages and a lot of images) and dynamic XML generated by my servlet. The dynamic XML is generated from my database (through Hibernate) and I use Restlets to serve it in response to API calls. I want to create a static file server (e.g. Apache) so that this does not interfere with the dynamic server traffic. Currently both servers need to run on the same machine. I've never done something like this before, and this is where I'm stuck: the static HTML pages contain JavaScript that makes API calls to the dynamic server. However, since the two servers operate on different ports, I run into the same-origin problem. How can this be solved? As a bonus, if you can point me to any resources that explain how to create such a static/dynamic content serving system, I'll be happy. Thanks!

    Read the article

  • C# Compiler should give warning but doesn't?

    - by Cristi Diaconescu
    Someone on my team tried fixing a 'variable not used' warning in an empty catch clause. try { ... } catch (Exception ex) { } - gives a warning about ex not being used. So far, so good. The fix was something like this: try { ... } catch (Exception ex) { string s = ex.Message; } Seeing this, I thought "Just great, so now the compiler will complain about s not being used." But it doesn't! There are no warnings on that piece of code and I can't figure out why. Any ideas? PS. I know catch-all clauses that mute exceptions are a bad thing, but that's a different topic. I also know the initial warning is better removed by doing something like this, that's not the point either. try { ... } catch (Exception) { } or try { ... } catch { }

    Read the article

  • How to: Save order of tabs when customizing tabs in UITabBarController

    - by Canada Dev
    I am having trouble finding any information other than the docs on how to save the tab order for my UITabBarController, so that the user's customization is preserved for the next app launch. I have searched online, but have been unable to find any blog posts or articles that go through the proper code for doing this. I realize I have to use the delegate methods for the UITabBarController (didEndCustomizingViewControllers:), but I am not sure how best to approach persistence in terms of saving the order the user wants the tabs in. Can someone post some code, point me in the right direction, or share a link? :) Thanks

    Read the article

  • SQL Server 2008 full-text search doesn't find word in words?

    - by Martijn
    In the database I have a field with a .mht file. I want to use FTS to search in this document. I got this working, but I'm not satisfied with the result. For example (sorry, it's in Dutch, but I think you get my point) I will use 2 words: zieken and ziekenhuis. As you can see, the word 'zieken' is contained in the word 'ziekenhuis'. When I search on 'ziekenhuis' I get about 20 results. When I search on 'zieken' I get 7 results. How is this possible? I mean, why doesn't the FTS return at least the results which I get from 'ziekenhuis'? Here's the query I use: SELECT DISTINCT d.DocID 'Id', d.Titel, (SELECT afbeeldinglokatie FROM tbl_Afbeelding WHERE soort = 'beleid') as Pic, 'belDoc' as DocType FROM docs d JOIN kpl_Document_Lokatie dl ON d.DocID = dl.DocID JOIN HandboekLokaties hb ON dl.LokatieID = hb.LokatieID WHERE hb.InstellingID = @instellingId AND ( FREETEXT(d.Doel, @searchstring) OR FREETEXT(d.Toepassingsgebied, @searchstring) OR FREETEXT(d.HtmlDocument, @searchstring) OR FREETEXT (d.extraTabblad, @searchstring) ) AND d.StatusID NOT IN( 1, 5)
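    A note on why this behaves as it does: FREETEXT matches whole words and their inflectional forms rather than substrings, so a search on 'zieken' is not expected to match 'ziekenhuis'. One way to also match words that merely start with the search term is a prefix search with CONTAINS; a sketch against just one of the columns above (the wildcard literal would need to be built from the user's search term):

        -- Prefix search: matches 'zieken', 'ziekenhuis', 'ziekenhuizen', ...
        SELECT DISTINCT d.DocID
        FROM docs d
        WHERE CONTAINS(d.HtmlDocument, N'"zieken*"');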

    Read the article

  • Lablgtk2 Installation on Windows.

    - by Animesh
    Hi there people, I am currently relearning OCaml and am in need of a good editor. There is a new editor from OcamlForge: OCamlEditor http://ocamleditor.forge.ocamlcore.org/. A prerequisite for installation is Lablgtk2. Installing Lablgtk2 on Windows is not straightforward, and there are good instructions here: http://wwwfun.kurims.kyoto-u.ac.jp/soft/lsl/install-win32.txt I have completed the first two steps, and in the third step, as warned, it is failing on the native code version. This is where I am left stranded. How do I check to see if the assembler is on my path? What am I missing here? Please help me move forward from this point. Sincere thanks. Animesh

    Read the article

  • In plain English, what are Django generic views?

    - by allyourcode
    The first two paragraphs of this page explain that generic views are supposed to make my life easier, less monotonous, and make me more attractive to women (I made up that last one): http://docs.djangoproject.com/en/dev/topics/generic-views/#topics-generic-views I'm all for improving my life, but what do generic views actually do? It seems like lots of buzzwords are being thrown around, which confuse more than they explain. Are generic views similar to scaffolding in Ruby on Rails? The last bullet point in the intro seems to indicate this. Is that an accurate statement?

    Read the article

  • SQL query - how to construct

    - by Max Malmgren
    Hi. I am working to implement a data connection between my C# application and a SQL Express database. Please bear in mind I have not worked with SQL queries before. I have the following relevant tables: ArticlesCommon, ArticlesLocalized, CategoryCommon, CategoryLocalized, where ArticlesCommon holds language-independent information such as price, weight etc. This is the statement for now: SELECT * FROM ArticlesCommon INNER JOIN ArticlesLocalized ON ArticlesCommon.ID = ArticlesLocalized.ID WHERE ArticlesLocalized.Language = @language ORDER BY ArticlesCommon.DateAdded ArticlesCommon contains a category id for each row. Now, I want to use this to look up the localized information in CategoryLocalized and add it to the result, something like SELECT *, CategoryLocalized.Name as CategoryName. If I have gotten my point across, is this doable? Thank you.
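    This is doable with one more join. A sketch, assuming the category id column in ArticlesCommon is called CategoryID and that CategoryLocalized has CategoryID, Language and Name columns (the question does not give the exact column names):

        SELECT ac.*, al.*, cl.Name AS CategoryName
        FROM ArticlesCommon ac
        INNER JOIN ArticlesLocalized al ON al.ID = ac.ID
        INNER JOIN CategoryLocalized cl ON cl.CategoryID = ac.CategoryID
                                       AND cl.Language   = @language
        WHERE al.Language = @language
        ORDER BY ac.DateAdded

    Putting the language filter in the join condition for CategoryLocalized keeps one localized category row per article.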

    Read the article

  • Prolog to solve a grammar involving braces

    - by Abhilash Muthuraj
    I'm trying to solve a DCG grammar in Prolog and have succeeded up to a point, but I'm stuck on evaluating expressions involving braces like these: expr( T, ['(', 5, +, 4, ')', *, 7], []), expr(Z) --> num(Z). expr(Z) --> num(X), [+], expr(Y), {Z is X+Y}. expr(Z) --> num(X), [-], expr(Y), {Z is X-Y}. expr(Z) --> num(X), [*], expr(Y), {Z is X*Y}. num(D) --> [D], {number(D)}. eval(L, V, []) :- expr(V, L, []).

    Read the article

  • jQuery to check when someone starts typing into a field

    - by Drew
    $('a#next').click(function() { var tags = $('input[name=tags]'); if(tags.val()==''){ tags.addClass('hightlight'); return false; }else{ tags.removeClass('hightlight'); $('#formcont').fadeIn('slow'); $('#next').hide('slow'); return false; } }); I would like the above code to fire the fadeIn as soon as somebody starts typing into the tags input. Can somebody tell me the correct way to do this or point me in the right direction? Thanks in advance EDIT here is the code to do it: $('input#tags').keypress(function() { $('#formcont').fadeIn('slow'); $('#next').hide('slow'); }); The only problem I've found is that my cursor no longer shows up in the text box. What am I doing wrong?

    Read the article

  • Compiler error when casting to function pointer

    - by detly
    I'm writing a bootloader for the PIC32MX, using HiTech's PICC32 compiler (similar to C90). At some point I need to jump to the real main routine, so somewhere in the bootloader I have void (*user_main) (void); user_main = (void (*) (void)) 0x9D003000; user_main(); (Note that in the actual code, the function signature is typedef'd and the address is a macro.) I would rather calculate that (virtual) address from the physical address, and have something like: void (*user_main) (void); user_main = (void (*) (void)) (0x1D003000 | 0x80000000); user_main(); ...but when I try that I get a compiler error: Error #474: ; 0: no psect specified for function variable/argument allocation Have I tripped over some vagary of C syntax here? This error doesn't reference any particular line, but if I comment out the user_main() call, it goes away. (This might be the compiler removing a redundant code branch, but the HiTech PICC32 isn't particularly smart in Lite mode, so maybe not.)

    Read the article

  • Looking for information on porting Linux apps to Windows

    - by claws
    Today I've encountered a very good book: UNIX to Linux® Porting: A Comprehensive Reference by Alfredo Mendoza, Chakarat Skawratananond, Artis Walker. This reminded me of something I've always wanted to know: porting Linux apps to Windows. I mean porting native Linux apps to native Windows with no emulation platforms involved. Is there a good book which explains this topic? I've got a lot of amazing Linux command-line tools in mind which need a Windows port. Please point me to relevant articles/tutorials/books. PS: please don't tell me to use Linux emulation platforms like Cygwin.

    Read the article

  • Exposing APIs - third party or homegrown?

    - by amelvin
    Parts of the project I'm currently working on (I can't give details) will be exposed as an API at some point over the next few months, and I was wondering whether anyone could recommend a third-party API 'provider' (possibly Mashery or SO advertiser Webservius). And I mean recommend in the 'I've used these people and they are good' sense, because although I can google for an answer to this question, it is more difficult to get truly unbiased opinions. As an addendum, is there much mileage in creating a bespoke API solution, and has anyone had any joy going down that road? Thanks in anticipation.

    Read the article

  • What guides or standards do you use for version control in your team ?

    - by PaulHurleyuk
    I'm starting to do a small amount of development within my company. I'm intending to use Git for version control, and I'm interested to see what guidelines or standards people are using around version control in their groups, similar to the coding standards that are often written within a group, for the group. I'm assuming there will be things like: commit often (at least every day/week/meeting, etc.); release builds are always made from the master branch; prior to release, a new branch is created for testing and tagged as such, with only bug fixes from that point onwards; the final release is tagged as such and the bug fixes merged back into the trunk; each developer has a public repo; new features get their own branch. Obviously a lot of this will depend on what VCS you're using and how you've structured it. Similar questions: http://stackoverflow.com/questions/273695/git-branch-naming-best-practices http://stackoverflow.com/questions/2006265/is-there-an-standard-naming-convention-for-git-tags

    Read the article

  • What are best practices for securing the admin section of a website?

    - by UpTheCreek
    I'd like to know what people consider best practice for securing the admin sections of websites, specifically from an authentication/access point of view. Of course there are obvious things, such as using SSL and logging all access, but I'm wondering just where above these basic steps people consider the bar to be set. For example: Are you just relying on the same authentication mechanism that you use for normal users? If not, what? Are you running the admin section in the same 'application domain'? What steps do you take to make the admin section undiscoverable? (Or do you reject the whole 'obscurity' thing?)

    Read the article

  • CodeIgniter: A nice straightforward tutorial on how to build a reset password/forgotten password feature?

    - by Psychonetics
    I've built a full sign-up system with user account activation, login, validation, captcha etc. To complete this I now need to implement a forgot password/reset password feature. I have created one function that generates a random 8-character password, another method that takes that random password and applies SHA-1 hashing, and also one that takes that hashed password and stores it in a table in the database. I will keep these methods to one side as they might come in handy later on, but for now I would like to know if anyone can point me to a nice tutorial for creating a password reset feature for my website. Thanks in advance

    Read the article

  • With ASP.NET, how do I display a side submenu depending on what the user picked from the main menu?

    - by Ole Media
    Hello, I come from the PHP side, but I need to develop a site using ASP.NET. So far I have managed to create a master page. Now I'm dealing with menus and submenus. What I'm trying to do is: if the user picks "About Us" from the main menu, under the About Us page I want to display a submenu with the options "who we are, what we do, contact us". If the user picks "Our Products" from the main menu, then under the products page I want to display a submenu with the options "product 1, product 2, product 3". Is this possible with only one master page? I read something about the Menu control but I'm not sure that is what I need. So far I have found how to display a sitemap, but not specific sections of a sitemap, if this is the way to do things. Any links, sample code, or points of reference will be appreciated. Thanks

    Read the article

  • Using WCF HttpBindings on a LAN

    - by dcw
    We have a WCF-based client/server application that operates over a LAN. We've been getting along OK by using the NetTcpBinding, chosen because we couldn't get either HttpBinding to work between hosts. (Within a single host it works fine, but that is not useful for the production environment.) We're now back at the point where we want to explore using either BasicHttpBinding or WsHttpBinding, but we simply can't see the server from the client: even putting the path to the endpoint into IE fails to reach the server. Is there something simple we've overlooked? We're not specifying any security settings (or anything else, for that matter). Should we be doing so (e.g. explicitly setting security to None)?

    Read the article

  • Git tool to remove lines from staging if they consist only of changes in whitespace

    - by Max Howell
    The point of removing trailing whitespace is that if everyone always does it, then you end up with a diff that is minimal, i.e. it consists only of code changes and not whitespace changes. However, when working with other people who do not practice this, removing all trailing whitespace with your editor or a pre-commit hook results in an even worse diff: you are doing the opposite of your intention. So I am asking here if there is a tool that I can run manually before I commit that unstages lines that are only changes in whitespace. A bonus would be to also strip trailing whitespace from staged lines that do contain code changes. Another bonus would be to not do this to Markdown files (as trailing whitespace has meaning in Markdown). I am asking here as I fully intend to write this tool if it doesn't already exist.

    Read the article

  • Generating Running Sum of Ratings in SQL

    - by Koobz
    I have a rating table. It boils down to:
    rating_value  created
    +2            April 3rd
    -5            April 20th
    So, every time someone gets rated, I track that rating event in the database. I want to generate a rating history/time graph where the rating at each point in time is the sum of all ratings up to that point. I.e., a person's rating on April 5th might be: select sum(rating_value) from ratings where created <= 'April 5th' The only problem with this approach is that I have to run it day by day across the interval I'm interested in. Is there some trick to generating a running total using this sort of data? Otherwise, I'm thinking the best approach is to create a denormalized "rating history" table alongside the individual ratings.
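    The question doesn't name the database engine, but on engines that support window functions (e.g. PostgreSQL, Oracle, SQL Server 2012+, MySQL 8+) a running total can be computed in a single pass instead of one query per day. A sketch using the column names above:

        -- Cumulative rating after each rating event
        SELECT created,
               SUM(rating_value) OVER (ORDER BY created) AS rating_so_far
        FROM ratings
        ORDER BY created;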

    Read the article

  • Walking a hierarchy table with Linq

    - by Retrocoder
    I have a table with two columns, GroupId and ParentId (both are GUIDs). The table forms a hierarchy, so I can look for a value in the GroupId field; when I have found it I can look at its ParentId. This ParentId will also appear in the GroupId of a different record. I can use this to walk up the hierarchy tree from any point to the root (the root is an empty GUID). What I'd like to do is get a list of records when I know a GroupId. This would be the record with that GroupId and all of its parents back to the root record. Is this possible with Linq and if so, can anyone provide a code snippet?
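    Standard LINQ has no operator for following a self-referencing chain, so the usual options are a small loop in code that repeatedly fetches the record whose GroupId equals the current record's ParentId, or pushing the walk into the database. As a sketch of the database-side alternative, assuming SQL Server syntax, a table name of GroupHierarchy and a parameter @startGroupId (neither is given in the question), a recursive common table expression walks from the starting GroupId up to the root:

        WITH chain (GroupId, ParentId) AS (
            -- anchor: the record we start from
            SELECT GroupId, ParentId
            FROM GroupHierarchy
            WHERE GroupId = @startGroupId
            UNION ALL
            -- recursive step: the parent of the previous record
            SELECT g.GroupId, g.ParentId
            FROM GroupHierarchy g
            INNER JOIN chain c ON g.GroupId = c.ParentId
        )
        SELECT GroupId, ParentId FROM chain;

    The recursion stops by itself once a ParentId (the empty GUID at the root) matches no GroupId in the table.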

    Read the article

  • Revisions: algorithm and data structure

    - by SODA
    Hi, I need ideas for structuring and processing data with revisions. For example, I have a database of objects (e.g. cars). Each object has a number of properties, which can be arbitrary, so there's no set schema to describe these objects. These objects are probably saved as key-value pairs. Now I need to change a property of an object. I don't want to completely rewrite it - I want to be able to go back and see the history of changes to these properties, which is why I want to add a new property value and keep the old one (so I guess a timestamp would do the job of telling which value is the latest). At the same time I want to be able to get info about any object in a snap, with only the latest version of each of its properties. Any ideas what would be the best approach? At least please point me in the right direction. Thanks!
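    One way to make the append-only idea above concrete, sketched relationally (table and column names are illustrative, not from the question; a key-value or document store could follow the same shape):

        -- Every property change is a new row; nothing is ever overwritten.
        CREATE TABLE object_properties (
            object_id   CHAR(36)      NOT NULL,   -- the car's id, e.g. a GUID as text
            prop_name   VARCHAR(100)  NOT NULL,
            prop_value  VARCHAR(4000) NOT NULL,
            changed_at  TIMESTAMP     NOT NULL
        );

        -- Snapshot of one object: the newest row per property.
        SELECT p.object_id, p.prop_name, p.prop_value
        FROM object_properties p
        JOIN (
            SELECT object_id, prop_name, MAX(changed_at) AS latest
            FROM object_properties
            GROUP BY object_id, prop_name
        ) m ON m.object_id = p.object_id
           AND m.prop_name = p.prop_name
           AND m.latest = p.changed_at
        WHERE p.object_id = @objectId;

    The full history of any single property is then just an ORDER BY changed_at over the same table.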

    Read the article

  • Prototype mouseleave for two elements

    - by TenJack
    I have created a small navigation element that is positioned right on top of another element. It is only shown when a user mousenters/mouseovers the main element. I am having some trouble with the prototype. I would like this small nav element to be hidden when a user mouses out of the main box, but I would also like the small nav element to remain visible if a user mouses out of the main box but mouses into the small nav at the same time. This is my attempt so far with some pseudo-code to hopefully explain: $('main_box').observe('mouseenter', function(){ $('small_div').show() }) $('main_box').observe('mouseleave', function(){ if this element is $('small_div') then Event.stop() $('small_div').observe('mouseleave', function(){ if this element is $('main_box') Event.stop observe $('main_box') mouseleave else $('small_div').hide(); }) else $('small_div').hide(); }) The main thing I'm having trouble with is figuring out what element the mouse is over at a given point in time. Is there a way to do something like: on mouseleave do blah unless the mouse is over a specific element then do not do blah?

    Read the article

  • Can I bind multiple forms to a single model using the default model binder?

    - by MedicineMan
    I have a complex page with several forms on it. The page is divided into sections, and each section has a continue button on it. The page is bound to a pageViewModel; each section addresses a different set of properties on the model. The continue button makes an ajax call to the controller, and the model binder binds it to the appropriate section of the model. The section is refreshed appropriately. Finally, I would like to have a save button at the bottom of the page that takes all the forms and binds all of them to the model. At this point the model has all of its properties filled out and can be processed accordingly. Can I accomplish this with some ASP MVC magic?

    Read the article

  • How can multiple trailing slashes be removed from a URL in Ruby

    - by splintercell
    Hello, what I'm trying to achieve here: let's say we have two example URLs: url1 = "http://emy.dod.com/kaskaa/dkaiad/amaa//////////" & url2 = "http://www.example.com/". How can I extract the stripped-down URLs? url1 to "http://emy.dod.com/kaskaa/dkaiad/amaa" & url2 to "http://www.example.com"? URI.parse in Ruby sanitizes certain types of malformed URL but is ineffective in this case. If we use a regex then /^(.*)\/$/ removes a single slash (/) from url1 & is ineffective for url2. Is anybody aware of how to handle this type of URL parsing? The point here is I don't want my system to treat "http://www.example.com/" & "http://www.example.com" as two different URLs. And the same goes for "http://emy.dod.com/kaskaa/dkaiad/amaa////" & "http://emy.dod.com/kaskaa/dkaiad/amaa/" cheers, -dg

    Read the article
