Search Results

Search found 61580 results on 2464 pages for 'document based database'.


  • Defining an Entity Framework 1:1 association

    - by Craig Fisher
    I'm trying to define a 1:1 association between two entities (one maps to a table and the other to a view - using DefinedQuery) in an Entity Framework model. When trying to define the mapping for this in the designer, it makes me choose the (1) table or view to map the association to. What am I supposed to choose? I can choose either of the two tables but then I am forced to choose a column from that table (or view) for each end of the relationship. I would expect to be able to choose a column from one table for one end of the association and a column from the other table for the other end of the association, but there's no way to do this. Here I've chosen to map to the "DW_ WF_ClaimInfo" view and it is forcing me to choose two columns from that view - one for each end of the relationship. I've also tried defining the mapping manually in the XML as follows: <AssociationSetMapping Name="Entity1Entity2" TypeName="ClaimsModel.Entity1Entity2" StoreEntitySet="Entity1"> <EndProperty Name="Entity2"> <ScalarProperty Name="DOCUMENT" ColumnName="DOCUMENT" /> </EndProperty> <EndProperty Name="Entity1"> <ScalarProperty Name="PK_DocumentId" ColumnName="PK_DocumentId" /> </EndProperty> </AssociationSetMapping> But this gives: Error 2010: The Column 'DOCUMENT' specified as part of this MSL does not exist in MetadataWorkspace. Seems like it still expects both columns to come from the same table, which doesn't make sense to me. Furthermore, if I select the same key for each end, e.g.: <AssociationSetMapping Name="Entity1Entity2" TypeName="ClaimsModel.Entity1Entity2" StoreEntitySet="Entity1"> <EndProperty Name="Entity2"> <ScalarProperty Name="DOCUMENT" ColumnName="PK_DocumentId" /> </EndProperty> <EndProperty Name="Entity1"> <ScalarProperty Name="PK_DocumentId" ColumnName="PK_DocumentId" /> </EndProperty> </AssociationSetMapping> I then get: Error 3021: Problem in Mapping Fragment starting at line 675: Each of the following columns in table AssignedClaims is mapped to multiple conceptual side properties: AssignedClaims.PK_DocumentId is mapped to <AssignedClaimDW_WF_ClaimInfo.DW_WF_ClaimInfo.DOCUMENT, AssignedClaimDW_WF_ClaimInfo.AssignedClaim.PK_DocumentId> What am I not getting?

    Read the article

  • javascript arrays and type conversion inconsistencies

    - by ForYourOwnGood
    I have been playing with javascript arrays and I have run into what I feel are some inconsistencies; I hope someone can explain them for me. Let's start with this: var myArray = [1, 2, 3, 4, 5]; document.write("Length: " + myArray.length + "<br />"); for( var i in myArray){ document.write( "myArray[" + i + "] = " + myArray[i] + "<br />"); } document.write(myArray.join(", ") + "<br /><br />"); Length: 5 myArray[0] = 1 myArray[1] = 2 myArray[2] = 3 myArray[3] = 4 myArray[4] = 5 1, 2, 3, 4, 5 There is nothing special about this code, but I understand that a javascript array is an object, so properties may be added to the array; the way these properties are added to an array seems inconsistent to me. Before continuing, let me note how string values are to be converted to number values in javascript. Nonempty string - Numeric value of string or NaN Empty string - 0 So since a javascript array is an object the following is legal: myArray["someThing"] = "someThing"; myArray[""] = "Empty String"; myArray["4"] = "four"; for( var i in myArray){ document.write( "myArray[" + i + "] = " + myArray[i] + "<br />"); } document.write(myArray.join(", ") + "<br /><br />"); Length: 5 myArray[0] = 1 myArray[1] = 2 myArray[2] = 3 myArray[3] = 4 myArray[4] = four myArray[someThing] = someThing myArray[] = Empty String 1, 2, 3, 4, four The output is unexpected. The non-empty string "4" is converted into its numeric value when setting the property myArray["4"], which seems right. However, the empty string "" is not converted into its numeric value, 0; it is treated as an empty string. Also, the non-empty string "something" is not converted to its numeric value, NaN; it is treated as a string. So which is it? Is the expression inside myArray[] evaluated in a numeric or a string context? Also, why are the two non-numeric properties of myArray not included in myArray.length and myArray.join(", ")?
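
    A minimal sketch (not from the original post) of one way to see what is going on: property names are always treated as strings, and an array only counts keys that are valid numeric indexes towards length and join().

        var a = [1, 2, 3];
        a["1"] = "one";            // "1" is a valid array index, so this is the same slot as a[1]
        a[""] = "empty";           // plain string key, not an index
        a["someThing"] = "named";  // plain string key, not an index
        document.write(a.length);        // 3 - only the numeric indexes 0..2 are counted
        document.write(a.join(", "));    // "1, one, 3" - join() also ignores non-index keys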

    Read the article

  • Problem: adding feature in MOSS 2007

    - by Anoop
    Hi All, I have added a menu item to the 'Actions' menu of a document library as follows (using features, as explained in the MSDN article How to: Add Actions to the User Interface: http://msdn.microsoft.com/en-us/library/ms473643.aspx). feature.xml is as follows: <?xml version="1.0" encoding="utf-8" ?> <Feature Id="51DEF381-F853-4991-B8DC-EBFB485BACDA" Title="Import From REP" Description="This example shows how you can customize various areas inside Windows SharePoint Services." Version="1.0.0.0" Scope="Site" xmlns="http://schemas.microsoft.com/sharepoint/"> <ElementManifests> <ElementManifest Location="ImportFromREP.xml" /> </ElementManifests> </Feature> ImportFromREP.xml is as follows: <?xml version="1.0" encoding="utf-8"?> <Elements xmlns="http://schemas.microsoft.com/sharepoint/"> <!-- Document Library Toolbar Actions Menu Dropdown --> <CustomAction Id="ImportFromREP" RegistrationType="List" RegistrationId = "101" GroupId="ActionsMenu" Rights="AddListItems" Location="Microsoft.SharePoint.StandardMenu" Sequence="1000" Title="Import From REP"> <UrlAction Url="/_layouts/ImportFromREP.aspx?ActionsMenu"/> </CustomAction> </Elements> I have successfully installed and activated the feature. Normal case: Now if I log in as a user having at least 'Contribute' permission on the document library, I am able to see the menu item 'Import From REP' (in the 'Actions' menu) in the root folder as well as in all subfolders. Problem case: If the user has view rights on the document library but add/delete rights on a particular subfolder (say 'subfolder') inside the document library: the menu item 'Import From REP' is not visible at the root folder level, and the 'Upload' menu is not visible either, which is OK because the user does not have the AddListItems right on the root folder. However, 'Import From REP' is also not visible in 'subfolder', while the 'Upload' menu is visible there because the user has add/delete rights on the 'subfolder'. Did I specify a wrong value for the Rights attribute (Rights="AddListItems") in the XML file? If so, what should the correct value be? What should I do to mirror the behaviour of the 'Upload' menu (i.e. when the 'Upload' menu is visible, the 'Import From REP' menu is visible, and when the 'Upload' menu is not visible, the 'Import From REP' menu is also not visible)? Thanks in advance, Anoop

    Read the article

  • jquery - establishing truths when loading inline javascript via AJAX

    - by yaya3
    I have thrown together a quick prototype to try and establish a few very basic truths regarding what inline JavaScript can do when it is loaded with AJAX: index.html <html> <head> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js"></script> </head> <body> <script type="text/javascript"> $('p').css('color','white'); alert($('p').css('color')); // DISPLAYS FIRST but is "undefined" $(document).ready(function(){ $('#ajax-loaded-content-wrapper').load('loaded-by-ajax.html', function(){ $('p').css('color','grey'); alert($('p').css('color')); // DISPLAYS LAST (as expected) }); $('p').css('color','purple'); alert($('p').css('color')); // DISPLAYS SECOND }); </script> <p>Content not loaded by ajax</p> <div id="ajax-loaded-content-wrapper"> </div> </body> </html> loaded-by-ajax.html <p>Some content loaded by ajax</p> <script type="text/javascript"> $('p').css('color','yellow'); alert($('p').css('color')); // DISPLAYS THIRD $(document).ready(function(){ $('p').css('color','pink'); alert($('p').css('color')); // DISPLAYS FOURTH }); </script> <p>Some content loaded by ajax</p> <script type="text/javascript"> $(document).ready(function(){ $('p').css('color','blue'); alert($('p').css('color')); // DISPLAYS FIFTH }); $('p').css('color','green'); alert($('p').css('color')); // DISPLAYS SIXTH </script> <p>Some content loaded by ajax</p> Notes: a) All of the above (except the first) successfully change the colour of all the paragraphs (in Firefox 3.6.3). b) I've used alert instead of console.log as console is undefined when called in the 'loaded' HTML. Truths(?): $(document).ready() does not treat the 'loaded' HTML as a new document, or reread the entire DOM tree including the loaded HTML. JavaScript that is contained inside 'loaded' HTML can affect the style of existing DOM nodes. One can successfully use the jQuery library inside 'loaded' HTML to affect the style of existing DOM nodes. One cannot use the Firebug console inside 'loaded' HTML, even though scripts there can affect the existing DOM (see Notes a and b). Am I correct in deriving these 'truths' from my tests (test validity)? If not, how would you test for these?
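
    A small sketch of one way to work around Note b (an assumption, not part of the original prototype): wrap the logging so the same call works whether or not a console object is visible to the loaded script.

        // Log to the console when it exists, otherwise fall back to alert(),
        // so the same helper can be called from index.html and loaded-by-ajax.html.
        function safeLog(msg) {
            if (window.console && window.console.log) {
                window.console.log(msg);
            } else {
                alert(msg);
            }
        }
        safeLog($('p').css('color'));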

    Read the article

  • Java Thread Management and Application Flow

    - by user119179
    I have a Java application that downloads information (Entities) from our server. I use a Download thread to download the data. The flow of the download process is as follows: Log in - The user entity is downloaded Based on the User Entity, download a 'Community' entities List and Display in drop down Based on Community drop down selection, Download and show 'Org Tree' in a JTree Based on Node selection, download Category entities and display in drop down Based on Category selection, download Sub Category entities and display in drop down Based on Sub Category selection download a large data set and save it The download occurs in a thread so the GUI does not 'freeze'. It also allows me to update a Progress Bar. I need help with managing this process. The main problem is when I download entity data I have to find a way to wait for the thread to finish before attempting to get the entity and move to the next step in the app flow. So far I have used a modal dialog to control flow. I start the thread, pop up a modal and then dispose of the modal when the thread is finished. The modal/thread are Observer/Observable the thread does a set changed when it is finished and the dialog disposes. Displaying a modal effectively stops the flow of the application so it can wait for the download to finish. I also tried just moving all the work flow to Observers. All relevant GUI in the process are Observers. Each update method waits for the download to finish and then calls the next piece of GUI which does its own downloading. So far I found these two methods produce code that is hard to follow. I would like to 'centralize' this work flow so other developers are not pulling out their hair when they try to follow it. My Question is: Do you have any suggestions/examples where a work flow such as this can be managed in a way that produces code that is easy to follow? I know 'easy' is a relative term and I know my both my options already work but I would like to get some ideas from other coders while I still have time to change it. Thank you very much.

    Read the article

  • How can I read/write data from a file?

    - by samy
    I'm writing a simple chrome extension. I need to create the ability to add site URLs to a list, or read from the list. I use the list to open the sites in new tabs. I'm looking for a way to have a data file I can write to, and read from. I was thinking of XML. I read there is a problem changing the content of files with Javascript. Is XML the right choice for this kind of thing? I should add that there is no web server, and the app will run locally, so maybe the problems websites are having are not the same as this. Before I wrote this question, I tried one thing, and started to feel unsure because it didn't work. I made an XML file called Site.xml: <?xml version="1.0" encoding="utf-8" ?> <Sites> <site> <url> http://www.sulamacademy.com/AddMsgForum.asp?FType=273171&SBLang=0&WSUAccess=0&LocSBID=20375 </url> </site> <site> <url> http://www.wow.co.il </url> </site> <site> <url> http://www.Google.co.il </url> </site> I made this script to read the data from it, and put it on the HTML page. function LoadXML() { var ajaxObj = new XMLHttpRequest(); ajaxObj.open('GET', 'Sites.xml', false); ajaxObj.send(); var myXML = ajaxObj.responseXML; document.write('<table border="2">'); var prs = myXML.getElementsByTagName("site"); for (i = 0; i < prs.length; i++) { document.write("<tr><td>"); document.write(prs[i].getElementsByName("url")[0].childNode[0].nodeValue); document.write("</td></tr>"); } document.write("</table"); }
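
    A minimal sketch of an alternative that avoids writing to files at all: keep the URL list as JSON in localStorage, which an extension page can read and write freely. The "siteList" key name is an assumption, not part of the original code.

        function loadSites() {
            var raw = localStorage.getItem('siteList');   // null on first run
            return raw ? JSON.parse(raw) : [];
        }
        function saveSites(sites) {
            localStorage.setItem('siteList', JSON.stringify(sites));
        }

        // usage: add a URL and persist the list
        var sites = loadSites();
        sites.push('http://www.wow.co.il');
        saveSites(sites);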

    Read the article

  • javascript problems when generating html reports from within java

    - by posdef
    Hi, I have been working on a Java project in which the reports will be generated in HTML, so I am implementing methods for creating these reports. One important functionality is to be able to have as much info as possible in the tables, but still not clutter them too much. In other words, the details should be available if the user wishes to take a look at them, but not necessarily visible by default. I have done some searching and testing and found an interesting template for hiding/showing content with the use of CSS and JavaScript; the problem is that when I try the resulting HTML page, the scripts don't work. I am not sure if it's due to a problem in Java or in the JavaScript itself. I have compared the HTML code that Java produces to the source where I found the template, and they seem to match pretty well. Below are the bits of my Java code that generate the JavaScript and the content; I would greatly appreciate it if anyone could point out possible reasons for this problem: //goes to head private void addShowHideScript() throws IOException{ StringBuilder sb = new StringBuilder(); sb.append("<script type=\"text/javascript\" language=\"JavaScript\">\n"); sb.append("<!--function HideContent(d) {\n"); sb.append("document.getElementById(d).style.display=\"none\";}\n"); sb.append("function ShowContent(d) {\n"); sb.append("document.getElementById(d).style.display=\"block\";}\n"); sb.append("function ReverseDisplay(d) {\n"); sb.append("if(document.getElementById(d).style.display==\"none\")\n"); sb.append("{ document.getElementById(d).style.display=\"block\"; }\n"); sb.append("else { document.getElementById(d).style.display=\"none\"; }\n}\n"); sb.append("//--></script>\n"); out.write(sb.toString()); out.newLine(); } // body private String linkShowHideContent(String pathname, String divname){ StringBuilder sb = new StringBuilder(); sb.append("<a href=\"javascript:ReverseDisplay('"); sb.append(divname); sb.append("')\">"); sb.append(pathname); sb.append("</a>"); return sb.toString(); } // content out.write(linkShowHideContent("hidden content", "ex")); out.write("<div id=\"ex\" style=\"display:block;\">"); out.write("<p>Content goes here.</p></div>");
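
    One thing worth checking (an assumption, not a confirmed diagnosis): most browser engines treat <!-- at the start of a script line as a single-line comment, so emitting "<!--function HideContent(d) {" on one line hides the first function declaration. A sketch of the script block the Java code is meant to produce, with the comment markers on their own lines:

        <script type="text/javascript">
        <!--
        function HideContent(d) { document.getElementById(d).style.display = "none"; }
        function ShowContent(d) { document.getElementById(d).style.display = "block"; }
        function ReverseDisplay(d) {
            var el = document.getElementById(d);
            el.style.display = (el.style.display == "none") ? "block" : "none";
        }
        //-->
        </script>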

    Read the article

  • How to Pass a JS Variable to a PHP Variable

    - by dmullins
    Hello: I am working on a web application that currently provides a list of documents in a MySql database. Each document in the list has an onclick event that is supposed to open the specific document; however, I am unable to do this. My list gets populated using Ajax. Here is the code excerpt (JS): function stateChanged() { if (xmlhttp.readyState==4) { //All documents// document.getElementById("screenRef").innerHTML="<font id='maintxt'; name='title'; onclick='fileopen()'; color='#666666';>" + xmlhttp.responseText + "</font>"; } } } The onclick=fileopen event above triggers the next code excerpt, which downloads the file (JS): function stateChanged() { if (xmlhttp.readyState==4) { //Open file var elemIF = document.createElement("iframe"); elemIF.src = url; elemIF.style.display = "none"; document.body.appendChild(elemIF); } } } Lastly, the openfile onclick event triggers the following PHP code to find the file for downloading (PHP): $con = mysql_connect("localhost", "root", "password"); if (!$con) { die('Could not connect: ' . mysql_error()); } mysql_select_db("Documents", $con); $query = mysql_query("SELECT name, type, size, content FROM upload WHERE name = NEED JS VARIABLE HERE"); $row = mysql_fetch_array($query); header ("Content-type: ". $row['type']); header ("Content-length: ". $row['size']); header ("Content-Disposition: attachement; filename=". $row['name']); echo $row['content']; This code works well for downloading a file. If I omit the WHERE portion of the SQL query, the onclick event will download the first file. Somehow, I need to get the screenRef element text and change it into a variable that my PHP file can read. Does anyone know how to do this? Also, without any page refreshes. I really appreciate everyone's feedback and thank you in advance. DFM
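
    A rough sketch of one way to hand the value over (names such as download.php are assumptions, not the original code): pass the clicked title as a query-string parameter, which the PHP script could then read from $_GET['name'] (escaped or parameterized) and bind into the WHERE clause. The generated markup would use onclick='fileopen(this)' so the clicked element is available.

        function fileopen(el) {
            var docName = el.textContent || el.innerText;   // the visible document name
            var elemIF = document.createElement("iframe");
            elemIF.src = "download.php?name=" + encodeURIComponent(docName);
            elemIF.style.display = "none";
            document.body.appendChild(elemIF);              // triggers the download, no page refresh
        }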

    Read the article

  • Stumbleupon type query...

    - by Chris Denman
    Wow, makes your head spin! I am about to start a project, and although my mySql is OK, I can't get my head around what's required for this: I have a table of web addresses. id,url 1,http://www.url1.com 2,http://www.url2.com 3,http://www.url3.com 4,http://www.url4.com I have a table of users. id,name 1,fred bloggs 2,john bloggs 3,amy bloggs I have a table of categories. id,name 1,science 2,tech 3,adult 4,stackoverflow I have a table of the categories a user likes, as numerical refs relating to the category's unique ref. For example: user,category 1,4 1,6 1,7 1,10 2,3 2,4 3,5 . . . I have a table of scores relating to each website address. When a user visits one of these sites and says they like it, it's stored like so: url_ref,category 4,2 4,3 4,6 4,2 4,3 5,2 5,3 . . . So based on the above data, URL 4 would score (in its own right) as follows: 2=2 3=2 6=1 What I was hoping to do was pick out a random URL from over 2,000,000 records based on the current user's interests. So if the logged-in user likes categories 1, 2, 3 then I would like to ORDER BY a score generated based on their interests. If the logged-in user likes categories 2, 3 and 6 then the total score would be 5. However, if the current logged-in user only likes categories 2 and 6, the URL score would be 3. So the ORDER BY would be in the context of the logged-in user's interests. Think of StumbleUpon. I was thinking of using a set of VIEWS to help with sub queries. I'm guessing that all 2,000,000 records will need to be looked at, and based on the id of the URL it will look to see what scores it has for each selected category of the current user. So we need to know the user ID, and this gets passed into the query as a constant from the start. Ain't got a clue! Chris Denman

    Read the article

  • Managing multiple AJAX calls to PHP scripts

    - by relativelycoded
    I have a set of 5 HTML dropdowns that act as filters for narrowing results returned from a mySQL database. The three pertinent filters are for "Province", "Region", and "City" selection, respectively. I have three functions: findSchools(), which runs when any of the filters (marked with CSS class .filter) are changed, and fetches the results via AJAX from a PHP script. Once that is done, two other functions are called... changeRegionOptions(), which, upon changing the "Province" filter, and updates the available options using the same method as the first function, but posting to a different script. changeCityOptions(), which runs if the "Region" filter was changed, and updates options, again using the same method. The problem is that since I want these AJAX functions to run simultaneously, and they by nature run asynchronously, I've tried using $.when to control the execution of the functions, but it doesn't fix the problem. The functions run, but the Region and City filters return blank (no options); the FireBug report shows absolutely no output, even though the POST request went through. The posted parameter for filter_province gets sent normally, but the one for region gets cut off at the end -- it sends as filter_region=, with no value passed. So I'm presuming my logic is wrong somewhere. The code is below: // When any of the filters are changed, let's query the database... $("select.filter").change(function() { findSchools(); }); // First, we see if there are any results... function findSchools() { var sch_province = document.filterform.filter_province.value; var sch_region = document.filterform.filter_region.value; var sch_city = document.filterform.filter_city.value; var sch_cat = document.filterform.filter_category.value; var sch_type = document.filterform.filter_type.value; $.post("fetch_results.php", { filter_province : sch_province, filter_region : sch_region, filter_city : sch_city, filter_category : sch_cat, filter_type : sch_type }, function(data) { $("#results").html(""); $("#results").hide(); $("#results").html(data); $("#results").fadeIn(600); } ); // Once the results are fetched, we want to see if the filter they changed was the one for Province, and if so, update the Region and City options to match that selection... $("#filter_province").change(function() { $.when(findSchools()) .done(changeRegionOptions()); $.when(changeRegionOptions()) .done(changeCityOptions()); }); }; This is just one of the ways I've tried to solve it; I've tried using an IF statement, and tried calling the functions directly inside the general select.filter.change() function (after findSchools(); ), but they all return the same result. Any help with this would be great!
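
    A minimal sketch of one way to sequence the calls, assuming each function returns the jqXHR from its $.post call so the next step only fires once the previous response has arrived (function names follow the original post; the others would need to return their jqXHR too):

        function findSchools() {
            return $.post("fetch_results.php", { /* ...filter values... */ }, function (data) {
                $("#results").hide().html(data).fadeIn(600);
            });
        }

        $("#filter_province").change(function () {
            findSchools().done(function () {
                changeRegionOptions().done(function () {   // assumed to also return its jqXHR
                    changeCityOptions();
                });
            });
        });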

    Read the article

  • What is the best way to read and write cXML documents in C#?

    - by tetranz
    I know this is a vague, open-ended question. I'm hoping to get some general direction. I need to add cXML punchout to an ASP.NET C# site / application. This is replacing something that I wrote years ago in ColdFusion. I'm a reasonably experienced C# developer but I haven't done much with XML. There seem to be lots of different options for processing XML in .NET. Here's the open-ended question: Assuming that I have an XML document in some form, e.g. a file or a string, what is the best way to read it into my code? I want to get the data and then query databases etc. The cXML document size and our traffic volumes are easily small enough that loading a cXML document into memory is not a problem. Should I: 1) Manually build classes based on the dtd and use the XML Serializer? 2) Use a tool to generate classes. There are sample cXML files downloadable from Ariba.com. I tried xsd.exe to generate an xsd and then xsd.exe /c to generate classes. When I try to deserialize I get errors because there seems to be "confusion" around whether some elements should be single values or arrays. I tried the CodeXS online tool but that gives errors in its log and errors if I try to deserialize a sample document. 3) Create a dataset and ReadXml()? 4) Create a typed dataset and ReadXml()? 5) Use Linq to XML. I often use Linq to Objects so I'm familiar with Linq in general but I'm struggling to see what it gives me in this situation. 6) Some other means. I guess I need to improve my understanding of XML in general but even so ... am I missing some obvious way of doing this? In the old ColdFusion site I found a free component ("tag") which basically ignored any schema and read the XML into a "structure", essentially a series of nested hash tables, which was then easy to read in code. That was probably quite sloppy but it worked. I also need to generate XML files from my C# objects. Maybe Linq to XML will be good for that. I could start with a default "template" document and manipulate it before saving. Thanks for any pointers ...

    Read the article

  • XSL file handling through javascript

    - by Zaid Iqbal
    I want to handle my XSL file through my JavaScript code. I made my XSL file, but I want to change it dynamically at run time, e.g. add more attributes to the header or the data. My JavaScript code is as follows: <script> function loadXMLDoc(dname) { if (window.XMLHttpRequest) { xhttp=new XMLHttpRequest(); } else { xhttp=new ActiveXObject("Microsoft.XMLHTTP"); } xhttp.open("GET",dname,false); xhttp.send(""); return xhttp.responseXML; } function displayResult() { xml=loadXMLDoc("cdcatalog.xml"); xsl=loadXMLDoc("cdcatalog.xsl"); // code for IE if (window.ActiveXObject) { ex=xml.transformNode(xsl); document.getElementById("example").innerHTML=ex; } // code for Mozilla, Firefox, Opera, etc. else if (document.implementation && document.implementation.createDocument) { xsltProcessor=new XSLTProcessor(); xsltProcessor.importStylesheet(xsl); resultDocument = xsltProcessor.transformToFragment(xml,document); document.getElementById("example").appendChild(resultDocument); } } </script> My XSL file code: <?xml version="1.0" encoding="ISO-8859-1"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="/"> <html> <body> <h2>My CD Collection</h2> <table border="1"> <tr bgcolor="#9acd32"> <th align="left">Title</th> <th align="left">Artist</th> <th align="left">Country</th> </tr> <xsl:for-each select="catalog/cd"> <tr> <td><xsl:value-of select="title" /></td> <td><xsl:value-of select="artist" /></td> <td><xsl:value-of select="country" /></td> </tr> </xsl:for-each> </table> </body> </html> </xsl:template> </xsl:stylesheet>
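
    A small sketch of one way to vary the output at run time without editing the XSL file, for the XSLTProcessor branch: push values in as stylesheet parameters. It assumes the stylesheet declares a matching <xsl:param name="tableTitle"/> and uses $tableTitle where the heading goes (a hypothetical parameter, not in the original stylesheet).

        xsltProcessor = new XSLTProcessor();
        xsltProcessor.importStylesheet(xsl);
        // setParameter(namespaceURI, localName, value) - overrides the xsl:param default
        xsltProcessor.setParameter(null, "tableTitle", "My CD Collection");
        resultDocument = xsltProcessor.transformToFragment(xml, document);
        document.getElementById("example").appendChild(resultDocument);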

    Read the article

  • Get latlng and draw a polyline between those two latlng

    - by anup sharma
    I have an issue in my code. First I want to get the latlng of the given address/city names from the two text boxes, then convert them into the 'from' position's latlng and the 'to' position's latlng, and at last draw a polyline between these points, with markers on both points as well. For now I am just trying to draw a line between the points, but it is still not working, and there is also no error in the console. The code is here for your help. function getRoute(){ var from_text = document.getElementById("travelFrom").value; var to_text = document.getElementById("travelTo").value; if(from_text == ""){ alert("Enter travel from field") document.getElementById("travelFrom").focus(); } else if(to_text == ""){ alert("Enter travel to field"); document.getElementById("travelTo").focus(); } else{ //google.maps.event.addListener(map, "", function (e) { var myLatLng = new google.maps.LatLng(28.6667, 77.2167); var mapOptions = { zoom: 3, center: myLatLng, mapTypeId: google.maps.MapTypeId.TERRAIN }; var map = new google.maps.Map(document.getElementById('map-canvas'), mapOptions); var geocoder = new google.maps.Geocoder(); var address1 = from_text; var address2 = to_text; var from_latlng,to_latlng; //var prepath = path; //if(prepath){ // prepath.setMap(null); //} geocoder.geocode( { 'address': address1}, function(results, status) { if (status == google.maps.GeocoderStatus.OK) { // do something with the geocoded result // alert(results[0].geometry.location); from_latlng = results[0].geometry.location; // from_lat = results[0].geometry.location.latitude; // from_lng = results[0].geometry.location.longitude; // alert(from_latlng); } }); geocoder.geocode( { 'address': address2}, function(results, status) { if (status == google.maps.GeocoderStatus.OK) { // do something with the geocoded result to_latlng = results[0].geometry.location; // to_lat = results[0].geometry.location.latitude; // to_lng = results[0].geometry.location.longitude; // results[0].geometry.location.longitude // alert(to_latlng) } }); setTimeout(function(){ var flightPlanCoordinates = [ new google.maps.LatLng(from_latlng), new google.maps.LatLng(to_latlng) ]; //alert("123") var polyline; polyline = new google.maps.Polyline({ path: flightPlanCoordinates, strokeColor: "#FF0000", strokeOpacity: 1.0, strokeWeight: 2 }); polyline.setMap(map); // assign to global var path // path = polyline; },4000); // }); } }
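
    A sketch of one way to avoid the setTimeout and the double LatLng wrapping: count the completed geocoder callbacks and draw only when both results are in. Variable names follow the original code; the geocoder already returns google.maps.LatLng objects, so they can be used directly in the path.

        var pending = 2;
        function drawWhenReady() {
            if (--pending > 0) return;                       // wait for both callbacks
            var polyline = new google.maps.Polyline({
                path: [from_latlng, to_latlng],              // no new google.maps.LatLng(...) wrapper needed
                strokeColor: "#FF0000",
                strokeOpacity: 1.0,
                strokeWeight: 2
            });
            polyline.setMap(map);
        }
        geocoder.geocode({ address: address1 }, function (results, status) {
            if (status == google.maps.GeocoderStatus.OK) { from_latlng = results[0].geometry.location; drawWhenReady(); }
        });
        geocoder.geocode({ address: address2 }, function (results, status) {
            if (status == google.maps.GeocoderStatus.OK) { to_latlng = results[0].geometry.location; drawWhenReady(); }
        });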

    Read the article

  • Accessing contents of NativeWindow in a HTML AIR application?

    - by Dan Scotton
    I'm currently building a HTML/JS AIR application. The application needs to display to the user a different 'window' - dependant on whether this is the first time they've launched the application or not. This part is actually fine and I have the code below to do that: if(!startUp()) { // this simply returns a boolean from a local preferences database that gets shipped with the .air // do first time stuff var windowOptions = new air.NativeWindowInitOptions(); windowOptions.systemChrome = 'none'; windowOptions.type = 'lightweight'; windowOptions.transparent = 'true'; windowOptions.resizable = 'false'; var windowBounds = new air.Rectangle(300, 300, 596, 490); var newHtmlLoader = air.HTMLLoader.createRootWindow(true, windowOptions, true, windowBounds); newHtmlLoader.load(new air.URLRequest('cover.html')); } else { // display default window // just set nativeWindow.visible = true (loaded from application.xml) } However, what I want to be able to do is manipulate the html content from within cover.html after it has loaded up. There seems to be plenty of tutorials online of how to move, resize, etc. the NativeWindow, but I simply want access to the NativeWindow's HTML content. For example, how would I add a new paragraph to that page? I've tried the following: newHtmlLoader.window.opener = window; var doc = newHtmlLoader.window.opener.document.documentElement; Using AIR's Introspector console, ....log(doc) returns [object HTMLHtmlElement]. Hmm, seems promising right? I then go on to try: var p = document.createElement('p'); var t = document.createTextNode('Insert Me'); p.appendChild(t); doc.appendChild(p); ...but nothing gets inserted. I've also tried the following replacements for doc: var doc = newHtmlLoader.window.opener.document.body; // .log(doc) -> [object HTMLBodyElement] var doc = newHtmlLoader.window.opener.document; // .log(doc) -> Error: HIERARCHY_REQUEST_ERR: DOM Exception 3 ...as well as the following with jQuery: $(doc).append('<p>Insert Me</p>'); // again, nothing So, anyone had any experience in accessing a NativeWindow's inner content programmatically? Any help will be greatly appreciated.
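
    A sketch of one possible approach, under two assumptions: the loader has to finish before its DOM exists, and nodes should be created with the loaded page's own document (creating them with the parent window's document is a common source of HIERARCHY_REQUEST_ERR). It also assumes AIRAliases.js is included so air.Event is available.

        newHtmlLoader.addEventListener(air.Event.COMPLETE, function () {
            var doc = newHtmlLoader.window.document;          // the document inside cover.html
            var p = doc.createElement('p');
            p.appendChild(doc.createTextNode('Insert Me'));
            doc.body.appendChild(p);
        });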

    Read the article

  • impress.js Navigation with active class

    - by ArtGoddess
    I would like to use impress.js in order to make a website with a navigation bar. For each slide, the corresponding anchor or div must have an "active" class in order to give the link a different appearance. I have followed all the instructions provided here: https://github.com/Ralt/impress.js/commit/f88feb8cae704ce27bd2168bdb77768f516ac2da#L2R605 but the "active" class still has to be added to the correct menu item. This is the code that generates the menu items. How can I add, for example, the "active" class to the first link? /* ************************ MENU ***************************** */ // Capitalize first letter of string // @see http://stackoverflow.com/a/1026087/851498 var capitalize = function( str ) { return str.charAt(0).toUpperCase() + str.slice(1); } // `showMenu` API function creates the menu // It defines the names of each entry by the id capitalized. var showMenu = function() { // Create the menu wrapper and the element that will be cloned // for each entry. var menu = document.createElement('div'), el = document.createElement('div'); // Apply some classes menu.className = 'menu'; el.className = 'menu-item'; // Create an element that will be the "button" and append it // to the menu var button = document.createElement('div'); button.className = 'menu-button'; button.textContent = 'Menu'; menu.appendChild(button); // Now, for each div in #impress, add an entry to the menu [].forEach.call(byId('impress').firstElementChild.children, function( child, index ) { var newEl = el.cloneNode(), i = index + 1, text = i + '. ' + capitalize(child.id); // Set the text of the new element newEl.textContent = text; // Add an onclick event to the new element // We need to use a closure to make sure the index is correct (function( index ) { newEl.addEventListener('click', function() { goto(index); }); }( index )); // And append the new element to the menu menu.appendChild(newEl); }); // And append the menu to #impress document.body.appendChild(menu); }; Then, on each click, the active class must be removed from the current active element and added to the clicked element. // Remove the active class from the current active document.querySelector( '.active' )[ 0 ].classList.remove( 'active' ); // And add it to the clicked index byId('impress').firstElementChild.children[ index ].classList.add( 'active' ); Where do I have to apply this code? Thank you!
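
    A sketch of where the class swap could live (an assumption, not a tested patch): inside the per-entry click handler that already calls goto(index), using the "menu-item" and "active" class names from the post. Note that document.querySelector returns a single element, not a list.

        (function (index) {
            newEl.addEventListener('click', function () {
                var current = document.querySelector('.menu-item.active');
                if (current) current.classList.remove('active');   // clear the previous selection
                this.classList.add('active');                       // highlight the clicked entry
                goto(index);
            });
        }(index));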

    Read the article

  • Remove links with Javascript

    - by Arlen Beiler
    How do I remove links from a webpage with Javascript. I am using Google Chrome. The code I tried is: function removehyperlinks() { try { alert(document.anchors.length); alert(document.getElementsByTagName('a')); for(i=0;i=document.anchors.length;i++) { var a = document.anchors[i]; a.outerHTML = a.innerHTML; var b = document.getElementsByTagName('a'); b[i].outerHTML = b[i].innerHTML; } } catch(e) { alert (e);} alert('done'); } Of course, this is test code, which is why I have the alerts and 2 things trying at the same time. The first alert returns "0" the second [Object NodeList] and the third returns "done". My html body looks like this: <body onload="removehyperlinks()"> <ol style="text-align:left;" class="messagelist"> <li class="accesscode"><a href="#">General information, Updates, &amp; Meetings<span class="extnumber">141133#</span></a> <ol> <li><a href="#">...</a></li> <li><a href="#">...</a></li> <li><a href="#">...</a></li> <li><a href="#">...</a></li> <li><a href="#">...</a></li> <li><a href="#">...</a></li> <li><a href="#">...</a></li> <li><a href="#">...</a></li> <li start="77"><a href="#"">...</a></li> <li start="88"><a href="#">...</a></li> <li start="99"><a href="#">...</a></li> </ol> </li> </ol> </body>
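
    A minimal sketch of one way this could be done (not a fix applied to the code above): getElementsByTagName returns a live collection that shrinks as links are rewritten, so copy it into an array first and then unwrap each anchor.

        function removehyperlinks() {
            // snapshot the live HTMLCollection so rewriting outerHTML doesn't skip elements
            var anchors = Array.prototype.slice.call(document.getElementsByTagName('a'));
            for (var i = 0; i < anchors.length; i++) {
                anchors[i].outerHTML = anchors[i].innerHTML;   // replace each link with its contents
            }
        }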

    Read the article

  • Just a small problem regarding a javascript BOM question

    - by caramel1991
    The question is this: Create a page with a number of links. Then write code that fires on the window onload event, displaying the href of each of the links on the page. And this is my solution: <html> <body language="Javascript" onload="displayLink()"> <a href="http://www.google.com/">First link</a> <a href="http://www.yahoo.com/">Second link</a> <a href="http://www.msn.com/">Third link</a> <script type="text/javascript" language="Javascript"> function displayLink() { for(var i = 0;document.links[i];i++) { alert(document.links[i].href); } } </script> </body> </html> This is the answer provided by the book: <html> <head> <script language="JavaScript" type="text/javascript"> function displayLinks() { var linksCounter; for (linksCounter = 0; linksCounter < document.links.length; linksCounter++) { alert(document.links[linksCounter].href); } } </script> </head> <body onload="displayLinks()"> <A href="link0.htm" >Link 0</A> <A href="link1.htm">Link 2</A> <A href="link2.htm">Link 2</A> </body> </html> Before I got to the JavaScript tutorial section on how to check the user's browser version or model, I was using the same method as the example, accessing the length property of the links array in the loop. After reading through the tutorial, I found out that I can also use this alternative way, where the test condition evaluates to true only if document.links[i] returns a valid value. So is my code written using a valid method? If it's not, any comments on how to write better code? Correct me if I'm wrong, but I have heard some people say "good code is not evaluated solely on whether it works or not, but also on speed, how easy the code is to comprehend, and whether others can understand it easily". Is this true?

    Read the article

  • javascript problem

    - by Gourav
    I have created a dynamic table whose rows get appended on each click of the "Add" button. I want to prevent the user from submitting the page if any row of the table has no value entered. How do I achieve this? The code is: <html> <head> <script type="text/javascript"> function addRowToTable() { var tbl = document.getElementById('tblSample'); var lastRow = tbl.rows.length; var iteration = lastRow+1; var row = tbl.insertRow(lastRow); var cellLeft = row.insertCell(0); var textNode = document.createTextNode(iteration); cellLeft.appendChild(textNode); var cellRight = row.insertCell(1); var el = document.createElement('input'); el.type = 'text'; el.name = 'txtRow' + iteration; el.id = 'txtRow' + iteration; el.size = 40; cellRight.appendChild(el); } function validation() { var a=document.getElementById('tblSample').rows.length; for(i=0;i<a;i++) { alert(document.getElementById('tblSample').txtRow[i].value);//this doesnt work } return true; } </script> <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"> <title>Insert title here</title> </head> <body> <form name ='qqq' action="sample.html"> <p> <input type="button" value="Add" onclick="addRowToTable();" /> <input type="button" value="Submit" onclick="return validation();" /> </p> <p> <table border="1" id="tblSample"> <tr> <td>1</td> <td>The 1st row</td> </tr> </table> </p> </form> </body> </html> Please suggest.
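
    A sketch of one way the validation function could walk the generated textboxes (not the original code): collect the text inputs inside the table rather than addressing them through the rows collection.

        function validation() {
            var inputs = document.getElementById('tblSample').getElementsByTagName('input');
            for (var i = 0; i < inputs.length; i++) {
                if (inputs[i].type == 'text' && inputs[i].value == '') {
                    alert('Please fill in row ' + (i + 2));   // the first table row has no textbox
                    return false;                             // block the submit
                }
            }
            return true;
        }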

    Read the article

  • It is only uploading the first row's file input

    - by user1304328
    I have a application which you can access here. If you open the application please click on the "Add" button a couple of times. This will add a new row into a table below. In each table row there is an AJAX file uploader. Now the problem is that if I click on the "Upload" button in any row except the first row, then the uploading only happens in the first row so it is only uploading the first file input only. Why is it doing this and how can I get it so that when then the user clicks the "Upload" button, the file input within that row of the "Upload" button is uploaded and not the first row being uploaded? Below is the full code where it appends the file AJAX file uploaded in each table row: function insertQuestion(form) { var $tbody = $('#qandatbl > tbody'); var $tr = $("<tr class='optionAndAnswer' align='center'></tr>"); var $image = $("<td class='image'></td>"); var $fileImage = $("<form action='upload.php' method='post' enctype='multipart/form-data' target='upload_target' onsubmit='startUpload();' >" + "<p id='f1_upload_process' align='center'>Loading...<br/><img src='Images/loader.gif' /><br/></p><p id='f1_upload_form' align='center'><br/><label>" + "File: <input name='fileImage' type='file' class='fileImage' /></label><br/><label><input type='submit' name='submitBtn' class='sbtn' value='Upload' /></label>" + "</p> <iframe id='upload_target' name='upload_target' src='#' style='width:0;height:0;border:0px solid #fff;'></iframe></form>"); $image.append($fileImage); $tr.append($image); $tbody.append($tr); } function startUpload(){ document.getElementById('f1_upload_process').style.visibility = 'visible'; document.getElementById('f1_upload_form').style.visibility = 'hidden'; return true; } function stopUpload(success){ var result = ''; if (success == 1){ result = '<span class="msg">The file was uploaded successfully!<\/span><br/><br/>'; } else { result = '<span class="emsg">There was an error during file upload!<\/span><br/><br/>'; } document.getElementById('f1_upload_process').style.visibility = 'hidden'; document.getElementById('f1_upload_form').innerHTML = result + '<label>File: <input name="fileImage" type="file"/><\/label><label><input type="submit" name="submitBtn" class="sbtn" value="Upload" /><\/label>'; document.getElementById('f1_upload_form').style.visibility = 'visible'; return true; }
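
    A sketch of one possible direction (class names here are hypothetical replacements for the duplicated f1_upload_process / f1_upload_form ids, which only ever resolve to the first row): scope the handlers to the form that was actually submitted instead of looking elements up by id.

        // generated markup would use: onsubmit='return startUpload(this);'
        function startUpload(form) {
            form.querySelector('.upload_process').style.visibility = 'visible';
            form.querySelector('.upload_form').style.visibility = 'hidden';
            return true;
        }
        // stopUpload would need the same treatment, e.g. by remembering which form
        // started the upload and updating only that form's elements.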

    Read the article

  • Boxy Submit Form

    - by jornbjorndalen
    I am using the boxy jQuery plugin in my page to display a form on a clickEvent for the fullCalendar plugin. It is working all right , the only problem I have is that the form in boxy brings up the confirmation dialog the first time the dialog is opened and when the user clicks "Ok" it submits the form a second time which generates 2 events on my calendar and 2 entries in my database. My code looks like this inside fullCalendar: dayClick: function(date, allDay, jsEvent, view) { var day=""+date.getDate(); if(day.length==1){ day="0"+day; } var year=date.getFullYear(); var month=date.getMonth()+1; var month=""+month; if(month.length==1){ month="0"+month; } var defaultdate=""+year+"-"+month+"-"+day+" 00:00:00"; var ele = document.getElementById("myform"); new Boxy(ele,{title: "Add Task", modal: true}); document.getElementById("title").value=""; document.getElementById("description").value=""; document.getElementById("startdate").value=""+defaultdate; document.getElementById("enddate").value=""+defaultdate; } I also use validators on the forms fields: $.validator.addMethod( "datetime", function(value, element) { // put your own logic here, this is just a (crappy) example return value.match(/^([0-9]{4})-([0-1][0-9])-([0-3][0-9])\s([0-1][0-9]|[2][0-3]):([0-5][0-9]):([0-5][0-9])$/); }, "Please enter a date in the format YYYY-mm-dd hh:MM:ss" ); var validator=$("#myform").validate({ onsubmit:true, rules: { title: { required: true }, startdate: { required: true, datetime: true }, enddate: { required: true, datetime: true } }, submitHandler: function(form) { //this function renders a new event and makes a call to a php script that inserts it into the db addTask(form); } }); And the form looks like this: <form id ='myform'> <table border='1' width='100%'> <tr><td align='right'>Title:</td><td align='left'><input id='title' name='title' size='30'/></td></tr> <tr><td align='right'>Description:</td><td align='left'><textarea id='description' name='description' rows='4' cols='30'></textarea></td></tr> <tr><td align='right'>Start Date:</td><td align='left'><input id='startdate' name='startdate' size='30'/></td></tr> <tr><td align='right'>End Date:</td><td align='left'><input id='enddate' name='enddate' size='30' /></td></tr> <tr><td colspan='2' align='right'><input type='submit' value='Add' /></td></tr> </table> </form>

    Read the article

  • Top things web developers should know about the Visual Studio 2013 release

    - by Jon Galloway
ASP.NET and Web Tools for Visual Studio 2013 Release NotesSummary for lazy readers: Visual Studio 2013 is now available for download on the Visual Studio site and on MSDN subscriber downloads Visual Studio 2013 installs side by side with Visual Studio 2012 and supports round-tripping between Visual Studio versions, so you can try it out without committing to a switch Visual Studio 2013 ships with the new version of ASP.NET, which includes ASP.NET MVC 5, ASP.NET Web API 2, Razor 3, Entity Framework 6 and SignalR 2.0 The new release of ASP.NET focuses on One ASP.NET, so core features and web tools work the same across the platform (e.g. adding ASP.NET MVC controllers to a Web Forms application) New core features include new templates based on Bootstrap, a new scaffolding system, and a new identity system Visual Studio 2013 is an incredible editor for web files, including HTML, CSS, JavaScript, Markdown, LESS, Coffeescript, Handlebars, Angular, Ember, Knockout, etc. Top links: Visual Studio 2013 content on the ASP.NET site is in the standard new releases area: http://www.asp.net/vnext ASP.NET and Web Tools for Visual Studio 2013 Release Notes Short intro videos on the new Visual Studio web editor features from Scott Hanselman and Mads Kristensen Announcing release of ASP.NET and Web Tools for Visual Studio 2013 post on the official .NET Web Development and Tools Blog Scott Guthrie's post: Announcing the Release of Visual Studio 2013 and Great Improvements to ASP.NET and Entity Framework Okay, for those of you who are still with me, let's dig in a bit. Quick web dev notes on downloading and installing Visual Studio 2013 I found Visual Studio 2013 to be a pretty fast install. According to Brian Harry's release post, installing over pre-release versions of Visual Studio is supported. I've installed the release version over pre-release versions, and it worked fine. If you're only going to be doing web development, you can speed up the install if you just select Web Developer tools. Of course, as a good Microsoft employee, I'll mention that you might also want to install some of those other features, like the Store apps for Windows 8 and the Windows Phone 8.0 SDK, but they do download and install a lot of other stuff (e.g. the Windows Phone SDK sets up Hyper-V and downloads several GB's of VM's). So if you're planning just to do web development for now, you can pick just the Web Developer Tools and install the other stuff later. If you've got a fast internet connection, I recommend using the web installer instead of downloading the ISO. The ISO includes all the features, whereas the web installer just downloads what you're installing. Visual Studio 2013 development settings and color theme When you start up Visual Studio, it'll prompt you to pick some defaults. These are totally up to you - whatever suits your development style - and you can change them later. As I said, these are completely up to you. I recommend either the Web Development or Web Development (Code Only) settings. The only real difference is that Code Only hides the toolbars, and you can switch between them using Tools / Import and Export Settings / Reset. Web Development settings Web Development (code only) settings Usually I've just gone with Web Development (code only) in the past because I just want to focus on the code, although the Standard toolbar does make it easier to switch default web browsers. More on that later. Color theme Sigh. 
Okay, everyone's got their favorite colors. I alternate between Light and Dark depending on my mood, and I personally like how the low contrast on the window chrome in those themes puts the emphasis on my code rather than the tabs and toolbars. I know some people got pretty worked up over that, though, and wanted the blue theme back. I personally don't like it - it reminds me of ancient versions of Visual Studio that I don't want to think about anymore. So here's the thing: if you install Visual Studio Ultimate, it defaults to Blue. The other versions default to Light. If you use Blue, I won't criticize you - out loud, that is. You can change themes really easily - either Tools / Options / Environment / General, or the smart way: ctrl+q for quick launch, then type Theme and hit enter. Signing in During the first run, you'll be prompted to sign in. You don't have to - you can click the "Not now, maybe later" link at the bottom of that dialog. I recommend signing in, though. It's not hooked in with licensing or tracking the kind of code you write to sell you components. It is doing good things, like  syncing your Visual Studio settings between computers. More about that here. So, you don't have to, but I sure do. Overview of shiny new things in ASP.NET land There are a lot of good new things in ASP.NET. I'll list some of my favorite here, but you can read more on the ASP.NET site. One ASP.NET You've heard us talk about this for a while. The idea is that options are good, but choice can be a burden. When you start a new ASP.NET project, why should you have to make a tough decision - with long-term consequences - about how your application will work? If you want to use ASP.NET Web Forms, but have the option of adding in ASP.NET MVC later, why should that be hard? It's all ASP.NET, right? Ideally, you'd just decide that you want to use ASP.NET to build sites and services, and you could use the appropriate tools (the green blocks below) as you needed them. So, here it is. When you create a new ASP.NET application, you just create an ASP.NET application. Next, you can pick from some templates to get you started... but these are different. They're not "painful decision" templates, they're just some starting pieces. And, most importantly, you can mix and match. I can pick a "mostly" Web Forms template, but include MVC and Web API folders and core references. If you've tried to mix and match in the past, you're probably aware that it was possible, but not pleasant. ASP.NET MVC project files contained special project type GUIDs, so you'd only get controller scaffolding support in a Web Forms project if you manually edited the csproj file. Features in one stack didn't work in others. Project templates were painful choices. That's no longer the case. Hooray! I just did a demo in a presentation last week where I created a new Web Forms + MVC + Web API site, built a model, scaffolded MVC and Web API controllers with EF Code First, add data in the MVC view, viewed it in Web API, then added a GridView to the Web Forms Default.aspx page and bound it to the Model. In about 5 minutes. Sure, it's a simple example, but it's great to be able to share code and features across the whole ASP.NET family. Authentication In the past, authentication was built into the templates. So, for instance, there was an ASP.NET MVC 4 Intranet Project template which created a new ASP.NET MVC 4 application that was preconfigured for Windows Authentication. 
All of that authentication stuff was built into each template, so they varied between the stacks, and you couldn't reuse them. You didn't see a lot of changes to the authentication options, since they required big changes to a bunch of project templates. Now, the new project dialog includes a common authentication experience. When you hit the Change Authentication button, you get some common options that work the same way regardless of the template or reference settings you've made. These options work on all ASP.NET frameworks, and all hosting environments (IIS, IIS Express, or OWIN for self-host) The default is Individual User Accounts: This is the standard "create a local account, using username / password or OAuth" thing; however, it's all built on the new Identity system. More on that in a second. The one setting that has some configuration to it is Organizational Accounts, which lets you configure authentication using Active Directory, Windows Azure Active Directory, or Office 365. Identity There's a new identity system. We've taken the best parts of the previous ASP.NET Membership and Simple Identity systems, rolled in a lot of feedback and made big enhancements to support important developer concerns like unit testing and extensiblity. I've written long posts about ASP.NET identity, and I'll do it again. Soon. This is not that post. The short version is that I think we've finally got just the right Identity system. Some of my favorite features: There are simple, sensible defaults that work well - you can File / New / Run / Register / Login, and everything works. It supports standard username / password as well as external authentication (OAuth, etc.). It's easy to customize without having to re-implement an entire provider. It's built using pluggable pieces, rather than one large monolithic system. It's built using interfaces like IUser and IRole that allow for unit testing, dependency injection, etc. You can easily add user profile data (e.g. URL, twitter handle, birthday). You just add properties to your ApplicationUser model and they'll automatically be persisted. Complete control over how the identity data is persisted. By default, everything works with Entity Framework Code First, but it's built to support changes from small (modify the schema) to big (use another ORM, store your data in a document database or in the cloud or in XML or in the EXIF data of your desktop background or whatever). It's configured via OWIN. More on OWIN and Katana later, but the fact that it's built using OWIN means it's portable. You can find out more in the Authentication and Identity section of the ASP.NET site (and lots more content will be going up there soon). New Bootstrap based project templates The new project templates are built using Bootstrap 3. Bootstrap (formerly Twitter Bootstrap) is a front-end framework that brings a lot of nice benefits: It's responsive, so your projects will automatically scale to device width using CSS media queries. For example, menus are full size on a desktop browser, but on narrower screens you automatically get a mobile-friendly menu. The built-in Bootstrap styles make your standard page elements (headers, footers, buttons, form inputs, tables etc.) look nice and modern. Bootstrap is themeable, so you can reskin your whole site by dropping in a new Bootstrap theme. Since Bootstrap is pretty popular across the web development community, this gives you a large and rapidly growing variety of templates (free and paid) to choose from. 
Bootstrap also includes a lot of very useful things: components (like progress bars and badges), useful glyphicons, and some jQuery plugins for tooltips, dropdowns, carousels, etc.). Here's a look at how the responsive part works. When the page is full screen, the menu and header are optimized for a wide screen display: When I shrink the page down (this is all based on page width, not useragent sniffing) the menu turns into a nice mobile-friendly dropdown: For a quick example, I grabbed a new free theme off bootswatch.com. For simple themes, you just need to download the boostrap.css file and replace the /content/bootstrap.css file in your project. Now when I refresh the page, I've got a new theme: Scaffolding The big change in scaffolding is that it's one system that works across ASP.NET. You can create a new Empty Web project or Web Forms project and you'll get the Scaffold context menus. For release, we've got MVC 5 and Web API 2 controllers. We had a preview of Web Forms scaffolding in the preview releases, but they weren't fully baked for RTM. Look for them in a future update, expected pretty soon. This scaffolding system wasn't just changed to work across the ASP.NET frameworks, it's also built to enable future extensibility. That's not in this release, but should also hopefully be out soon. Project Readme page This is a small thing, but I really like it. When you create a new project, you get a Project_Readme.html page that's added to the root of your project and opens in the Visual Studio built-in browser. I love it. A long time ago, when you created a new project we just dumped it on you and left you scratching your head about what to do next. Not ideal. Then we started adding a bunch of Getting Started information to the new project templates. That told you what to do next, but you had to delete all of that stuff out of your website. It doesn't belong there. Not ideal. This is a simple HTML file that's not integrated into your project code at all. You can delete it if you want. But, it shows a lot of helpful links that are current for the project you just created. In the future, if we add new wacky project types, they can create readme docs with specific information on how to do appropriately wacky things. Side note: I really like that they used the internal browser in Visual Studio to show this content rather than popping open an HTML page in the default browser. I hate that. It's annoying. If you're doing that, I hope you'll stop. What if some unnamed person has 40 or 90 tabs saved in their browser session? When you pop open your "Thanks for installing my Visual Studio extension!" page, all eleventy billion tabs start up and I wish I'd never installed your thing. Be like these guys and pop stuff Visual Studio specific HTML docs in the Visual Studio browser. ASP.NET MVC 5 The biggest change with ASP.NET MVC 5 is that it's no longer a separate project type. It integrates well with the rest of ASP.NET. In addition to that and the other common features we've already looked at (Bootstrap templates, Identity, authentication), here's what's new for ASP.NET MVC. Attribute routing ASP.NET MVC now supports attribute routing, thanks to a contribution by Tim McCall, the author of http://attributerouting.net. With attribute routing you can specify your routes by annotating your actions and controllers. This supports some pretty complex, customized routing scenarios, and it allows you to keep your route information right with your controller actions if you'd like. 
Here's a controller that includes an action whose method name is Hiding, but I've used AttributeRouting to configure it to /spaghetti/with-nesting/where-is-waldo:

public class SampleController : Controller
{
    [Route("spaghetti/with-nesting/where-is-waldo")]
    public string Hiding()
    {
        return "You found me!";
    }
}

I enable that in my RouteConfig.cs, and I can use that in conjunction with my other MVC routes like this:

public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        routes.MapMvcAttributeRoutes();

        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
        );
    }
}

You can read more about Attribute Routing in ASP.NET MVC 5 here. Filter enhancements There are two new additions to filters: Authentication Filters and Filter Overrides. Authentication filters are a new kind of filter in ASP.NET MVC that run prior to authorization filters in the ASP.NET MVC pipeline and allow you to specify authentication logic per-action, per-controller, or globally for all controllers. Authentication filters process credentials in the request and provide a corresponding principal. Authentication filters can also add authentication challenges in response to unauthorized requests. Override filters let you change which filters apply to a given action method or controller. Override filters specify a set of filter types that should not be run for a given scope (action or controller). This allows you to configure filters that apply globally but then exclude certain global filters from applying to specific actions or controllers. ASP.NET Web API 2 ASP.NET Web API 2 includes a lot of new features. Attribute Routing ASP.NET Web API supports the same attribute routing system that's in ASP.NET MVC 5. You can read more about the Attribute Routing features in Web API in this article. OAuth 2.0 ASP.NET Web API picks up OAuth 2.0 support, using security middleware running on OWIN (discussed below). This is great for features like authenticated Single Page Applications. OData Improvements ASP.NET Web API now has full OData support. That required adding in some of the most powerful operators: $select, $expand, $batch and $value. You can read more about OData operator support in this article by Mike Wasson. Lots more There's a huge list of other features, including CORS (cross-origin resource sharing), IHttpActionResult, IHttpRequestContext, and more. I think the best overview is in the release notes. OWIN and Katana I've written about OWIN and Katana recently. I'm a big fan. OWIN is the Open Web Interface for .NET. It's a spec, like HTML or HTTP, so you can't install OWIN. The benefit of OWIN is that it's a community specification, so anyone who implements it can plug into the ASP.NET stack, either as middleware or as a host. Katana is the Microsoft implementation of OWIN. It leverages OWIN to wire up things like authentication, handlers, modules, IIS hosting, etc., so ASP.NET can host OWIN components and Katana components can run in someone else's OWIN implementation. Howard Dierking just wrote a cool article in MSDN magazine describing Katana in depth: Getting Started with the Katana Project. He had an interesting example showing an OWIN based pipeline which leveraged SignalR, ASP.NET Web API and NancyFx components in the same stack. If this kind of thing makes sense to you, that's great. If it doesn't, don't worry, but keep an eye on it. 
You're going to see some cool things happen as a result of ASP.NET becoming more and more pluggable. Visual Studio Web Tools Okay, this stuff's just crazy. Visual Studio has been adding some nice web dev features over the past few years, but they've really cranked it up for this release. Visual Studio is by far my favorite code editor for all web files: CSS, HTML, JavaScript, and lots of popular libraries. Stop thinking of Visual Studio as a big editor that you only use to write back-end code. Stop editing HTML and CSS in Notepad (or Sublime, Notepad++, etc.). Visual Studio starts up in under 2 seconds on a modern computer with an SSD. Misspelling HTML attributes or your CSS classes or jQuery or Angular syntax is stupid. It doesn't make you a better developer, it makes you a silly person who wastes time. Browser Link Browser Link is a real-time, two-way connection between Visual Studio and all connected browsers. It's only attached when you're running locally, in debug, but it applies to any and all connected browsers, including emulators. You may have seen demos that showed the browsers refreshing based on changes in the editor, and I'll agree that's pretty cool. But it's really just the start. It's a two-way connection, and it's built for extensibility. That means you can write extensions that push information from your running application (in IE, Chrome, a mobile emulator, etc.) back to Visual Studio. Mads and team have shown off some demonstrations where they enabled edit mode in the browser, which updated the source HTML back in Visual Studio. It's also possible to look at how the rendered HTML performs, check for compatibility issues, watch for unused CSS classes, the sky's the limit. New HTML editor The previous HTML editor had a lot of old code that didn't allow for improvements. The team rewrote the HTML editor to take advantage of the new(ish) extensibility features in Visual Studio, which then allowed them to add in all kinds of features - things like CSS Class and ID IntelliSense (so you type style="" and get a list of classes and ID's for your project), smart indent based on how your document is formatted, JavaScript reference auto-sync, etc. Here's a 3 minute tour from Mads Kristensen. Lots more Visual Studio web dev features That's just a sampling - there's a ton of great features for JavaScript editing, CSS editing, publishing, and Page Inspector (which shows real-time rendering of your page inside Visual Studio). Here are some more short videos showing those features. Lots, lots more Okay, that's just a summary, and it's still quite a bit. Head on over to http://asp.net/vnext for more information, and download Visual Studio 2013 now to get started!

    Read the article

  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
    This morning we released a huge set of updates to Windows Azure.  These new capabilities include: Backup Services: General Availability of Windows Azure Backup Services Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Configuration Active Directory: Securely manage hundreds of SaaS applications Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support All of these improvements are now available to use immediately.  Below are more details about them. Backup Service: General Availability Release of Windows Azure Backup Today we are releasing Windows Azure Backup Service as a general availability service.  This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios. Windows Azure Backup is a cloud based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service provides IT administrators and developers with the option to back up and protect critical data in an easily recoverable way from any location with no upfront hardware cost. Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, Windows Azure Backup Service also provides capability to backup data from System Center Data Protection Manager and Windows Server Essentials, to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored entirely secured and can’t be decrypted by anyone but yourself). Getting Started To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal.  Click New->Data Services->Recovery Services->Backup Vault to do this: Once the backup vault is created you’ll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it: Once the servers you want to backup are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this: Tutorial: Schedule Backups Using the Windows Azure Backup Agent This tutorial helps you with setting up a backup schedule for your registered Windows Servers. Additionally, it also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule. Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent This tutorial helps you with recovering data from a backup. Additionally, it also explains how to use Windows PowerShell cmdlets to do the same tasks. Below are some of the key benefits the Windows Azure Backup Service provides: Simple configuration and management. 
Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk, or to the cloud. Block level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file and block level changes and only transferring the changed blocks, hence reducing the storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changed blocks between these versions. Data compression, encryption and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in the cloud storage. The encryption key is not available to the Windows Azure Backup Service, and as a result the data is never decrypted in the service. Also, users can set up throttling and configure how the Windows Azure Backup service utilizes the network bandwidth when backing up or restoring information. Data integrity is verified in the cloud. In addition to the secure backups, the backed up data is also automatically checked for integrity once the backup is done. As a result, any corruptions which may arise due to data transfer can be easily identified and are fixed automatically. Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs. Hyper-V Recovery Manager: Now Available in Public Preview I’m excited to also announce the public preview of a new Windows Azure Service – the Windows Azure Hyper-V Recovery Manager (HRM). Windows Azure Hyper-V Recovery Manager helps protect your business critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement Disaster Recovery and restore important services accurately, consistently, and with minimal downtime. Application data in Hyper-V Recovery Manager scenarios always travels on your on-premises replication channel. Only metadata (such as names of logical clouds, virtual machines, networks etc.) that is needed for orchestration is sent to Azure. All traffic sent to/from Azure is encrypted. You can begin using Windows Azure Hyper-V Recovery Manager today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal.  You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson’s 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager follow our detailed step-by-step guide. Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Today’s Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines.  
These improvements include: Ability to Delete both VM Instances + Attached Disks in One Operation Prior to today’s release, when you deleted VMs within Windows Azure we would delete the VM instance – but not delete the drives attached to the VM.  You had to manually delete these yourself from the storage account.  With today’s update we’ve added a convenience option that now allows you to either retain or delete the attached disks when you delete the VM:   We’ve also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines.  To do this, simply select the Cloud Service within the Windows Azure Management Portal and click the “Delete” button: Warnings on Availability Sets with Only One Virtual Machine In Them One of the nice features that Windows Azure Virtual Machines supports is the concept of “Availability Sets”.  An “availability set” allows you to define a tier/role (e.g. webfrontends, databaseservers, etc) that you can map Virtual Machines into – and when you do this Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations.  This enables you to deploy applications in a high availability way. One issue we’ve seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set).  With today’s release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set to help highlight this: You can learn more about configuring the availability of your virtual machines here. Configuring SQL Server Always On SQL Server Always On is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today’s Windows Azure release makes it even easier to configure SQL Server Always On by enabling “Direct Server Return” endpoints to be configured and managed within the Windows Azure Management Portal.  Previously, setting this up required using PowerShell to complete the endpoint configuration.  Starting today you can enable this simply by checking the “Direct Server Return” checkbox: You can learn more about how to use direct server return for SQL Server AlwaysOn availability groups here. Active Directory: Application Access Enhancements This summer we released our initial preview of our Application Access Enhancements for Windows Azure Active Directory.  This service enables you to securely implement single-sign-on (SSO) support against SaaS applications (including Office 365, SalesForce, Workday, Box, Google Apps, GitHub, etc) as well as LOB based applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013). Since the initial preview we’ve enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS Applications: Earlier this month we published an update on dates and pricing for when the service will be released in general availability form.  
In this blog post we announced our intention to release the service in general availability form by the end of the year.  We also announced that the below features would be available in a free tier with it: SSO to every SaaS app we integrate with – Users can Single Sign On to any app we are integrated with at no charge. This includes all the top SaaS apps and every app in our application gallery whether they use federation or password vaulting. Application access assignment and removal – IT Admins can assign access privileges to web applications to the users in their active directory, assuring that every employee has access to the SaaS apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges, assuring data security and minimizing IP loss. User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems. Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports giving you visibility into which users are using which applications, when they were using them and where they are using them from. In addition, we'll alert you to unusual usage patterns, for instance when a user logs in from multiple locations at the same time. Our Application Access Panel – Users are logging in from every type of device including Windows, iOS, & Android. Not all of these devices handle authentication in the same manner but the user doesn't care. They need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access and launch their apps from any device and anywhere. You can learn more about our plans for application management with Windows Azure Active Directory here.  Try out the preview and start using it today. Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have).  With today’s Windows Azure release we are integrating Windows Azure Active Directory even more within the core Windows Azure management experience, and enabling an even richer enterprise security offering.  Specifically: 1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them.  You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users. 2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory.  Both options are free.  The latter approach is ideal for companies that wish to use their corporate user identities to sign in and manage Windows Azure resources.  It also ensures that if an employee leaves an organization, his or her access control rights to the company’s Windows Azure resources are immediately revoked. 
3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign in and perform management operations.  Prior to today’s release customers had to download and use management certificates (which were not scoped to individual users) to perform management operations.  We still support this management certificate approach (don’t worry – nothing will stop working).  But we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward.  4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allows you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials.  This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure. Below are some details on how all of this works: Subscriptions within a Directory As part of today’s update, we have associated all existing Windows Azure accounts with a Windows Azure Active Directory (and created one for you if you don’t already have one). When you log in to the Windows Azure Management Portal you’ll now see the directory name in the URI of the browser.  For example, in the screen-shot below you can see that I have a “scottgu” directory that my subscriptions are hosted within: Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign into Windows Azure.  These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don’t want to.  In the scenario above I’m actually logged in using my @hotmail.com based Microsoft ID which is now mapped to a “scottgu” active directory that was created for me.  By default everything will continue to work just like you used to before. Manage your Directory You can manage an Active Directory (including the one we now create for you by default) by clicking the “Active Directory” tab in the left-hand side of the portal.  This will list all of the directories in your account.  Clicking one for the first time will display a getting started page that provides documentation and links to perform common tasks with it: You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the configure tab to do this).  You can also set up the directory to automatically sync with an on-premises Active Directory using the “Directory Integration” tab. Note that users within a directory by default do not have admin rights to log in to or manage Windows Azure based resources.  You still need to explicitly grant them co-admin permissions on a subscription for them to log in to or manage resources in Windows Azure.  You can do this by clicking the Settings tab on the left-hand side of the portal and then by clicking the administrators tab within it. 
Sign-In Integration within Visual Studio If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates.  You can now just right-click on the “Windows Azure” icon within the Server Explorer and choose the “Connect to Windows Azure” context menu option to do so: Doing this will prompt you to enter the email address of the username you wish to sign in with (make sure this account is a user in your directory with co-admin rights on a subscription): You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory based Organizational account as the email.  The dialog will update with an appropriate login prompt depending on which type of email address you enter: Once you sign in you’ll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio server explorer and be available to start using: No downloading of management certificates required.  All of the authentication was handled using your Windows Azure Active Directory! Manage Subscriptions across Multiple Directories If you already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions->directories as part of today’s update.  If you don’t like the default subscription-to-directory mapping we have done you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it: If you want to map a subscription under a different directory in your account, simply select the subscription from the list, and then click the “Edit Directory” button to choose which directory to map it to.  Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working.  We’ve made the directory->subscription mapping process self-service so that you always have complete control and can map things however you want. Filtering By Directory and Subscription Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions).  If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant.  This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure Account:   Windows Azure SDK 2.2 Today we are also releasing a major update of our Windows Azure SDK.  The Windows Azure SDK 2.2 release adds some great new features including: Visual Studio 2013 Support; Integrated Windows Azure Sign-In support within Visual Studio; Remote Debugging Cloud Services with Visual Studio; Firewall Management support within Visual Studio for SQL Databases; Visual Studio 2013 RTM VM Images for MSDN Subscribers; Windows Azure Management Libraries for .NET; and Updated Windows Azure PowerShell Cmdlets and ScriptCenter.  I’ll post a follow-up blog shortly with more details about all of the above. 
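Since the list above mentions the Windows Azure Management Libraries for .NET, here is a rough sketch of the kind of code they enable: authenticating with a Windows Azure Active Directory token instead of a management certificate and listing the cloud services in a subscription. Treat the type and operation names (TokenCloudCredentials, ComputeManagementClient, HostedServices.ListAsync) as illustrative of the preview bits rather than a definitive reference; the subscription id and token values are placeholders.

using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure;                    // TokenCloudCredentials (assumed namespace)
using Microsoft.WindowsAzure.Management.Compute; // ComputeManagementClient (assumed namespace)

class CloudServiceLister
{
    // Lists the cloud services in a subscription using an AAD access token.
    // The token would normally be acquired interactively (for example via ADAL)
    // against the directory that the subscription is mapped to.
    public static async Task ListCloudServicesAsync(string subscriptionId, string aadAccessToken)
    {
        var credentials = new TokenCloudCredentials(subscriptionId, aadAccessToken);

        using (var compute = new ComputeManagementClient(credentials))
        {
            // The list response is enumerable in the preview libraries;
            // each entry describes one cloud service in the subscription.
            var services = await compute.HostedServices.ListAsync();
            foreach (var service in services)
            {
                Console.WriteLine(service.ServiceName);
            }
        }
    }
}

The same credentials object should work with the other management clients (storage, web sites, and so on), which is what makes the Active Directory sign-in story described above hang together end-to-end.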
Additional Updates In addition to the above enhancements, today’s release also includes a number of additional improvements: AutoScale: Richer time and date based scheduling support (set different rules on different dates) AutoScale: Ability to Scale to Zero Virtual Machines (very useful for Dev/Test scenarios) AutoScale: Support for time-based scheduling of Mobile Service AutoScale rules Operation Logs: Auditing support for Service Bus management operations Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2.  It has so much goodness in it that I have a whole second blog post coming shortly on it! :-) Summary Today’s Windows Azure release enables a bunch of great new scenarios, and enables a much richer enterprise authentication offering. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Oracle BI Server Modeling, Part 1- Designing a Query Factory

    - by bob.ertl(at)oracle.com
      Welcome to Oracle BI Development's BI Foundation blog, focused on helping you get the most value from your Oracle Business Intelligence Enterprise Edition (BI EE) platform deployments.  In my first series of posts, I plan to show developers the concepts and best practices for modeling in the Common Enterprise Information Model (CEIM), the semantic layer of Oracle BI EE.  In this segment, I will lay the groundwork for the modeling concepts.  First, I will cover the big picture of how the BI Server fits into the system, and how the CEIM controls the query processing. Oracle BI EE Query Cycle The purpose of the Oracle BI Server is to bridge the gap between the presentation services and the data sources.  There are typically a variety of data sources in a variety of technologies: relational, normalized transaction systems; relational star-schema data warehouses and marts; multidimensional analytic cubes and financial applications; flat files, Excel files, XML files, and so on. Business datasets can reside in a single type of source, or, most of the time, are spread across various types of sources. Presentation services users are generally business people who need to be able to query that set of sources without any knowledge of technologies, schemas, or how sources are organized in their company. They think of business analysis in terms of measures with specific calculations, hierarchical dimensions for breaking those measures down, and detailed reports of the business transactions themselves.  Most of them create queries without knowing it, by picking a dashboard page and some filters.  Others create their own analysis by selecting metrics and dimensional attributes, and possibly creating additional calculations. The BI Server bridges that gap from simple business terms to technical physical queries by exposing just the business focused measures and dimensional attributes that business people can use in their analyses and dashboards.   After they make their selections and start the analysis, the BI Server plans the best way to query the data sources, writes the optimized sequence of physical queries to those sources, post-processes the results, and presents them to the client as a single result set suitable for tables, pivots and charts. The CEIM is a model that controls the processing of the BI Server.  It provides the subject areas that presentation services exposes for business users to select simplified metrics and dimensional attributes for their analysis.  It models the mappings to the physical data access, the calculations and logical transformations, and the data access security rules.  The CEIM consists of metadata stored in the repository, authored by developers using the Administration Tool client.     Presentation services and other query clients create their queries in BI EE's SQL-92 language, called Logical SQL or LSQL.  The API simply uses ODBC or JDBC to pass the query to the BI Server.  Presentation services writes the LSQL query in terms of the simplified objects presented to the users.  The BI Server creates a query plan, and rewrites the LSQL into fully-detailed SQL or other languages suitable for querying the physical sources.  For example, the LSQL on the left below was rewritten into the physical SQL for an Oracle 11g database on the right. 
Logical SQL:

SELECT
  "D0 Time"."T02 Per Name Month" saw_0,
  "D4 Product"."P01  Product" saw_1,
  "F2 Units"."2-01  Billed Qty  (Sum All)" saw_2
FROM "Sample Sales"
ORDER BY saw_0, saw_1

Physical SQL:

WITH SAWITH0 AS (
  select T986.Per_Name_Month as c1,
         T879.Prod_Dsc as c2,
         sum(T835.Units) as c3,
         T879.Prod_Key as c4
  from
         Product T879 /* A05 Product */ ,
         Time_Mth T986 /* A08 Time Mth */ ,
         FactsRev T835 /* A11 Revenue (Billed Time Join) */
  where ( T835.Prod_Key = T879.Prod_Key
          and T835.Bill_Mth = T986.Row_Wid)
  group by T879.Prod_Dsc, T879.Prod_Key, T986.Per_Name_Month
)
select SAWITH0.c1 as c1,
       SAWITH0.c2 as c2,
       SAWITH0.c3 as c3
from SAWITH0
order by c1, c2

Probably everybody reading this blog can write SQL or MDX.  However, the trick in designing the CEIM is that you are modeling a query-generation factory.  Rather than hand-crafting individual queries, you model behavior and relationships, thus configuring the BI Server machinery to manufacture millions of different queries in response to random user requests.  This mass production requires a different mindset and approach than when you are designing individual SQL statements in tools such as Oracle SQL Developer, Oracle Hyperion Interactive Reporting (formerly Brio), or Oracle BI Publisher.   The Structure of the Common Enterprise Information Model (CEIM) The CEIM has a unique structure specifically for modeling the relationships and behaviors that fill the gap from logical user requests to physical data source queries and back to the result.  The model divides the functionality into three specialized layers, called Presentation, Business Model and Mapping, and Physical, as shown below. Presentation services clients can generally only see the presentation layer, and the objects in the presentation layer are normally the only ones used in the LSQL request.  When a request comes into the BI Server from presentation services or another client, the relationships and objects in the model allow the BI Server to select the appropriate data sources, create a query plan, and generate the physical queries.  That's the left to right flow in the diagram below.  When the results come back from the data source queries, the right to left relationships in the model show how to transform the results and perform any final calculations and functions that could not be pushed down to the databases.   Business Model Think of the business model as the heart of the CEIM you are designing.  This is where you define the analytic behavior seen by the users, and the superset library of metric and dimension objects available to the user community as a whole.  It also provides the baseline business-friendly names and user-readable dictionary.  For these reasons, it is often called the "logical" model--it is a virtual database schema that persists no data, but can be queried as if it is a database. The business model always has a dimensional shape (more on this in future posts), and its simple shape and terminology hides the complexity of the source data models. Besides hiding complexity and normalizing terminology, this layer adds most of the analytic value, as well.  This is where you define the rich, dimensional behavior of the metrics and complex business calculations, as well as the conformed dimensions and hierarchies.  
It contributes to the ease of use for business users, since the dimensional metric definitions apply in any context of filters and drill-downs, and the conformed dimensions enable dashboard-wide filters and guided analysis links that bring context along from one page to the next.  The conformed dimensions also provide a key to hiding the complexity of many sources, including federation of different databases, behind the simple business model. Note that the expression language in this layer is LSQL, so that any expression can be rewritten into any data source's query language at run time.  This is important for federation, where a given logical object can map to several different physical objects in different databases.  It is also important to portability of the CEIM to different database brands, which is a key requirement for Oracle's BI Applications products. Your requirements process with your user community will mostly affect the business model.  This is where you will define most of the things they specifically ask for, such as metric definitions.  For this reason, many of the best-practice methodologies of our consulting partners start with the high-level definition of this layer. Physical Model The physical model connects the business model that meets your users' requirements to the reality of the data sources you have available. In the query factory analogy, think of the physical layer as the bill of materials for generating physical queries.  Every schema, table, column, join, cube, hierarchy, etc., that will appear in any physical query manufactured at run time must be modeled here at design time. Each physical data source will have its own physical model, or "database" object in the CEIM.  The shape of each physical model matches the shape of its physical source.  In other words, if the source is normalized relational, the physical model will mimic that normalized shape.  If it is a hypercube, the physical model will have a hypercube shape.  If it is a flat file, it will have a denormalized tabular shape. To aid in query optimization, the physical layer also tracks the specifics of the database brand and release.  This allows the BI Server to make the most of each physical source's distinct capabilities, writing queries in its syntax, and using its specific functions. This allows the BI Server to push processing work as deep as possible into the physical source, which minimizes data movement and takes full advantage of the database's own optimizer.  For most data sources, native APIs are used to further optimize performance and functionality. The value of having a distinct separation between the logical (business) and physical models is encapsulation of the physical characteristics.  This encapsulation is another enabler of packaged BI applications and federation.  It is also key to hiding the complex shapes and relationships in the physical sources from the end users.  Consider a routine drill-down in the business model: physically, it can require a drill-through where the first query is MDX to a multidimensional cube, followed by the drill-down query in SQL to a normalized relational database.  The only difference from the user's point of view is that the 2nd query added a more detailed dimension level column - everything else was the same. Mappings Within the Business Model and Mapping Layer, the mappings provide the binding from each logical column and join in the dimensional business model, to each of the objects that can provide its data in the physical layer.  
When there is more than one option for a physical source, rules in the mappings are applied to the query context to determine which of the data sources should be hit, and how to combine their results if more than one is used.  These rules specify aggregate navigation, vertical partitioning (fragmentation), and horizontal partitioning, any of which can be federated across multiple, heterogeneous sources.  These mappings are usually the most sophisticated part of the CEIM. Presentation You might think of the presentation layer as a set of very simple relational-like views into the business model.  Over ODBC/JDBC, they present a relational catalog consisting of databases, tables and columns.  For business users, presentation services interprets these as subject areas, folders and columns, respectively.  (Note that in 10g, subject areas were called presentation catalogs in the CEIM.  In this blog, I will stick to 11g terminology.)  Generally speaking, presentation services and other clients can query only these objects (there are exceptions for certain clients such as BI Publisher and Essbase Studio). The purpose of the presentation layer is to specialize the business model for different categories of users.  Based on a user's role, they will be restricted to specific subject areas, tables and columns for security.  The breakdown of the model into multiple subject areas organizes the content for users, and subjects superfluous to a particular business role can be hidden from that set of users.  Customized names and descriptions can be used to override the business model names for a specific audience.  Variables in the object names can be used for localization. For these reasons, you are better off thinking of the tables in the presentation layer as folders than as strict relational tables.  The real semantics of tables and how they function is in the business model, and any grouping of columns can be included in any table in the presentation layer.  In 11g, an LSQL query can also span multiple presentation subject areas, as long as they map to the same business model. Other Model Objects There are some objects that apply to multiple layers.  These include security-related objects, such as application roles, users, data filters, and query limits (governors).  There are also variables you can use in parameters and expressions, and initialization blocks for loading their initial values on a static or user session basis.  Finally, there are Multi-User Development (MUD) projects for developers to check out units of work, and objects for the marketing feature used by our packaged customer relationship management (CRM) software.   The Query Factory At this point, you should have a grasp on the query factory concept.  When developing the CEIM model, you are configuring the BI Server to automatically manufacture millions of queries in response to random user requests. You do this by defining the analytic behavior in the business model, mapping that to the physical data sources, and exposing it through the presentation layer's role-based subject areas. While configuring mass production requires a different mindset than when you hand-craft individual SQL or MDX statements, it builds on the modeling and query concepts you already understand. The following posts in this series will walk through the CEIM modeling concepts and best practices in detail.  
We will initially review dimensional concepts so you can understand the business model, and then present a pattern-based approach to learning the mappings from a variety of physical schema shapes and deployments to the dimensional model.  Along the way, we will also present the dimensional calculation template, and learn how to configure the many additivity patterns.

    Read the article

  • MSCC: Global Windows Azure Bootcamp

Mauritius participated and contributed to the Global Windows Azure Bootcamp 2014 (GWAB). Again! And this time stronger than ever, and together with 137 other locations in 56 countries world-wide. We had 62 named registrations, 7 guest additions and approximately 10 offline participants prior to the event day. Most interestingly, the organisation of the GWAB through the MSCC helped to increase the number of craftsmen. The Mauritius Software Craftsmanship Community currently has 138 registered members - in less than one year! Based on those numbers alone we can proudly say that all the preparations and hard work towards this event already paid off. Personally, I'm really grateful that we had this kind of response and the feedback from some attendees confirmed that the MSCC is on the right track here on Cyber Island Mauritius. Inspired and motivated by the success of this event, rest assured that there will be more public events like the GWAB. This time it took me a while to reflect on our meetup; here is my first impression, posted right on the spot: "Wow, what an experience to organise and participate in this global event. Overall, I've been very pleased with the preparations and the event itself. Surely, there have been some nicks that we have to address and to improve for future activities like this. Quite frankly, we are not professional event organisers (not yet) but we learned a lot over the past couple of days. A big Thank You to our event sponsors, namely Microsoft Indian Ocean Islands & French Pacific, Ceridian Mauritius and Emtel. Without them this event wouldn't have happened - Thank You! And to the cool team members of Microsoft Student Partners (MSPs). You geeks did a great job! Thanks!" So, how many attendees did we actually have? 61! - Awesome - 61 cloud computing instances to help with the research on diabetes. During Saturday afternoon there was even an online publication on L'Express: Les développeurs mauriciens se joignent au combat contre le diabète (Mauritian developers join the fight against diabetes). Reactions of other attendees Don't just take my word for it... Here are some impressions and feedback from our participants: "Awesome event, really appreciated the presentations :-)" -- Kevin on event comments "very interesting and enriching." -- Diana on event comments "#gwab #gwabmru 2014 great success. Looking forward for gwab 2015" -- Wasiim on Twitter "Was there till the end. Awesome Event. I'll surely join upcoming meetup sessions :)" -- Luchmun on event comments "#gwabmru was not that cool. left early" -- Mohammad on Twitter The overall feedback is positive but we are absolutely aware that there were quite a number of problems we had to face. We are already looking into them and putting together ideas and action plans on how to improve future events. The sessions We started the day with welcoming speeches by Thierry Coret, Sr. Marketing Manager of Microsoft Indian Ocean Islands & French Pacific and Vidia Mooneegan, Managing Director and Sr. Vice President of Ceridian Mauritius. The clear emphasis was on the endless possibilities of cloud computing and how it can enable any kind of sector here in the country. Then it was about time to set up the cloud computing services in order to contribute each attendee's cloud computing resources to the global research on diabetes, following a step-by-step guide presented by Arnaud Meslier, Technical Evangelist at Microsoft. Given a rendering package and a configuration file it was very interesting to follow the individual steps in Windows Azure. 
Also, during the day we were not sure whether the set up had been done correctly, as Mauritius didn't show up on the results board - which should have been the case after approximately 20 to 30 minutes. Anyways, let the minions work... Next, Arnaud gave a brief overview of the variety of services Windows Azure has to offer. Whether you need a development environment for your websites or mobile apps, want to run a virtual machine with your existing applications or simply want to put a SQL database online - no worries, Windows Azure has the right packages available and the online management portal is really easy to handle. After this, we got a little bit more business oriented while Wasiim Hosenbocus, employee at Ceridian, took the attendees through the inner workings of a real-life application, and demoed a couple of the existing features. He did a great job by showing how the different services of Windows Azure can be created and pulled together. After the lunch break it is always tough to keep the audience awake... And it was my turn. I gave a brief overview on operating and managing a SQL database on Windows Azure. Well, there are actually two options available and depending on your individual requirements you should be aware of both. The simpler version is called SQL Database, and while provisioning only takes a couple of seconds, you should take into consideration that SQL Database has a number of constraints, like limitations on the actual database size - up to 5 GB as Web edition and up to 150 GB maximum as Business edition - among others. Next, it was Chervine Bhiwoo's session on Windows Azure Mobile Services. It was absolutely amazing to see that Mobile Services directly offers you various project templates, like Windows 8 Store App, Android app, iOS app, and even Xamarin cross-platform app development. So, within a couple of minutes you can have your first mobile app active and running on Windows Azure. Furthermore, Chervine showed the attendees that adding another user interface, like Web Sites running on ASP.NET MVC 4, only takes a couple of minutes, too. And last but not least, we rounded up the day with Windows Azure Websites and hosting of Virtual Machines presented by some members of the local Microsoft Student Partners programme. Surely, one of the big advantages of using Windows Azure is the availability of pre-defined installation packages of known web applications, like WordPress, Joomla!, or Ghost. Compared to running your own web site with a traditional web hoster it is surely on par, and depending on your personal level of expertise, Windows Azure provides you with more liberty in terms of configuration than maybe a shared hosting environment. Running a pre-defined web application is one thing, but in case you would like to have more control over your hosting environment it is highly advised to opt for a virtual machine. Provisioning of an Ubuntu 12.04 LTS system was very simple to do but it takes a few more minutes than you might expect. So, please be patient and take your time while Windows Azure gets everything in place for you. Afterwards, you can use a Secure Shell (SSH) client like PuTTY in case of a Linux-based machine, or Remote Desktop Services when operating a Windows Server system, to log in to your virtual machine. At the end of the day we had a great Q&A session and we finalised the event with our raffle of goodies. Participation in the raffle was tied to submission of the session survey and most gratefully we had a give-away for everyone. 
What a nice coincidence to finish off the day. Note: All session slides (and demo code) will be made available on the MSCC event page. Please check the Files section over there. (Some) Visual impressions from the event Just to give you an idea about what happened during the GWAB 2014 at Ebene... Speakers and Microsoft Student Partners are getting ready for the Global Windows Azure Bootcamp 2014 GWAB 2014 attendees are fully integrated into the hands-on labs and setting up their individual cloud computing services 60 attendees at the GWAB 2014. Despite some technical difficulties we had a great time running the event GWAB 2014: Using the lunch break for networking and exchange of ideas - Great conversations and topics amongst attendees There are more pictures on the original event page: Questions & Answers Following are a couple of questions which were asked but didn't get an answer during the event: Q: Is it possible to upload static pages via FTP? A: Yes, you can. Have a look at the right side column on the dashboard of your website. There you'll find information about the FTP and SFTP host names. You can use any FTP client, e.g. FileZilla, to log in. FTP also gives you access to your log files. Q: What are the size limitations on SQL Database? A: 5 GB on the Web edition, and 150 GB on the Business edition. A maximum of 150 databases (including 'master') per SQL Database server. More details here: General Guidelines and Limitations (Azure SQL Database) Q: What's the maximum size of a SQL Server running in a Virtual Machine? A: The maximum Windows Azure VM currently has 8 CPU cores, 14 or 56 GB of RAM and 16x 1 TB hard drives. More details here: Virtual Machine and Cloud Service Sizes for Azure Q: How can we register for Windows Azure? A: Mauritius is currently not listed for phone verification. Please get in touch with Arnaud Meslier at Microsoft IOI & FP. In case I missed a question and answer, please use the comment section at the end of the article. Thanks! Final results Every participant was instructed during the hands-on-lab session on how to set up a cloud computing service in their account. Of course, I won't keep the results from you... Global Azure Lab GWAB 2014: Our cloud computing contribution to the research of diabetes And I would say Mauritius did a good job! Upcoming Events What are the upcoming events here in Mauritius? So far, we have the following ones (incomplete list as usual) in chronological order: Launch of Microsoft SQL Server 2014 (15.4.2014) Corsair Hackers Reboot (19.4.2014) WebCup (TBA ~ June 2014) Developers Conference (TBA ~ July 2014) Linuxfest 2014 (TBA ~ November 2014) Hopefully, there will be more announcements during the next couple of weeks and months. If you know about any other event, like a bootcamp, a code challenge or hackathon here in Mauritius, please drop me a note in the comment section below this article. Thanks! Networking and job/project opportunities Despite having technical presentations on Windows Azure, an event like this always offers a great bunch of possibilities and opportunities to get in touch with new people in IT and have an exchange of experience with other like-minded people. 
As I already wrote about Communities - The importance of exchange and discussion - I had a great conversation with representatives of the University des Mascareignes, which is currently embracing cloud infrastructure and cloud computing for its various campuses in the Indian Ocean. As for the MSCC it would be a great experience to stay in touch with them, and to work on upcoming common activities. Furthermore, I had a very good conversation with Thierry and Ludovic of Microsoft IOI & FP on the necessity of user groups and IT communities here on the island. It's great to see that the MSCC is currently on a roll and that local companies are sharing our thoughts on promoting IT careers and the exchange of IT knowledge in such an open way. I'm also looking forward to being able to participate in and contribute to more events in the near future. My resume of the day We learned a lot today and there is always room for improvement! It was an awesome event and quite frankly it was a pleasure to spend the day with so many enthusiastic IT people in the same room. It was a great experience to organise such an event locally and to participate on a global scale to support the GlyQ-IQ Technology in their research on diabetes. I was so pleased to see the involvement of new MSCC members in taking the opportunity to share and learn about the power of cloud computing. The Mauritius Software Craftsmanship Community is on the right track and this year's bootcamp on Windows Azure only marked the beginning of our journey. Thank you to our sponsors and my kudos to the MSPs! Update: Media coverage The event has been reported in local media, too. Following are some resources: Orange - Local - Business: Le cloud, pour des recherches approfondies sur le diabète (The cloud, for in-depth research on diabetes) Maurice Info.mu: Le cloud, pour des recherches approfondies sur le diabète Le Quotidien Pg 2: Global Windows Azure Bootcamp 2014 - Le cloud pour des recherches approfondies sur le diabète The Observer Pg 12: Le cloud, pour des recherches approfondies sur le diabète

    Read the article
