Search Results

Search found 12471 results on 499 pages for 'variable naming'.


  • BB Code Parser (in formatting phase) with jQuery jams, most likely due to mishandled loops

    - by Oskar
    Greetings everyone, I'm making a BB Code Parser but I'm stuck on the JavaScript front. I'm using jQuery and the caret library for tracking selections in a text field. When someone selects a piece of text, a div with formatting options appears. I have a few issues.

    Issue 1: How can I make this work for multiple text fields? I'm drawing a blank, as it currently detects the text field correctly until it enters the $("#BBtoolBox a").mousedown(function() { }) handler. After entering it, it starts listing one field after another in what looks to me like a random pattern.

    Issue 2 (MAIN): I'm guessing this is the main reason for issue 1 as well. When I press a formatting option it works on the first action but not the ones afterwards. It keeps duplicating the variable parsed. (If I only keep to one field it never prints in the second.)

    Issue 3: If you find anything especially ugly in the code, please tell me how to improve. I appreciate all the help I can get. Thanks in advance.

        $(document).ready(function() {
            BBCP();
        });

        function BBCP(el) {
            if(!el) { el = "textarea"; }
            // Stores the cursor position of selection start
            $(el).mousedown(function(e) {
                coordX = e.pageX;
                coordY = e.pageY;
            // Event of selection finish by using keyboard
            }).keyup(function() {
                BBtoolBox(this, coordX, coordY);
            // Event of selection finish by using mouse
            }).mouseup(function() {
                BBtoolBox(this, coordX, coordY);
            // Event of field unfocus
            }).blur(function() {
                $("#BBtoolBox").hide();
            });
        }

        function BBtoolBox(el, coordX, coordY) {
            // Variable containing the selected text by Caret
            selection = $(el).caret().text;
            // Ignore the request if no text is selected
            if(selection.length == 0) {
                $("#BBtoolBox").hide();
                return;
            }
            // Print the toolbox
            if(!document.getElementById("BBtoolBox")) {
                $(el).before("<div id=\"BBtoolBox\" style=\"left: "+ ( coordX + 5 ) +"px; top: "+ ( coordY - 30 ) +"px;\"></div>");
                // List of actions
                $("#BBtoolBox").append("<a href=\"#\" onclick=\"return false\"><img src=\"./icons/text_bold.png\" alt=\"B\" title=\"Bold\" /></a>");
                $("#BBtoolBox").append("<a href=\"#\" onclick=\"return false\"><img src=\"./icons/text_italic.png\" alt=\"I\" title=\"Italic\" /></a>");
            } else {
                $("#BBtoolBox").css({'left': (coordX + 3) +'px', 'top': (coordY - 30) +'px'}).show();
            }
            // Parse the text according to the action requested
            $("#BBtoolBox a").mousedown(function() {
                switch($(this).children(":first").attr("alt")) {
                    case "B": // bold
                        parsed = "[b]"+ selection +"[/b]";
                        break;
                    case "I": // italic
                        parsed = "[i]"+ selection +"[/i]";
                        break;
                }
                // Changes the field value by replacing the selection with the variable parsed
                $(el).val($(el).caret().replace(parsed));
                $("#BBtoolBox").hide();
                return false;
            });
        }
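
    A note on the main symptom: every call to BBtoolBox runs $("#BBtoolBox a").mousedown(...) again, so after n selections each toolbox link carries n stacked handlers, each closing over a different stale selection. A minimal sketch of one way out, assuming the same globals and the caret plugin API used above (unbind/bind are the jQuery-1.4-era names for today's off/on):

        // Sketch: clear previous handlers before binding, so exactly one
        // handler (closing over the current `selection` and `el`) is active.
        $("#BBtoolBox a").unbind("mousedown").bind("mousedown", function() {
            var tag = $(this).children(":first").attr("alt") === "B" ? "b" : "i";
            var parsed = "[" + tag + "]" + selection + "[/" + tag + "]";
            $(el).val($(el).caret().replace(parsed));
            $("#BBtoolBox").hide();
            return false;
        });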

    Read the article

  • Event handling in a drop-down menu

    - by QLiu
    Hello fellows, I am trying to develop a dynamic drop-down menu styled as a customized widget. The custom widget has two main features:

    - It reads the user's location cookie variable to set the proper contact phone number in the CP pages.
    - When users select from the drop-down menu, an onChange event fires and re-selects the contact phone number based on the user's selection, but it does not reset the location cookie.

    My widget contains two files. Controller.php is a simple one, used to read the cookie variables:

        class serial extends Widget {
            function __construct() {
                parent::__construct();
            }

            function generateWidgetInformation() {
                $this->info['notes'] = "Serial Number search Box";
            }

            function getData() {
                // Get-cookies code will go here, and pass to view.php
                $this->data['locale'] = 'gb'; // for test purposes now
            }
        }

    view.php is the presentation layer; it contains the HTML, which gets its data from the controller:

        <div style="border: 1px solid black; display: block;" id="<?=$this->instanceID;?>"></div>
        <script>locale2contact('<?=$this->data['locale']?>', '<?=$this->instanceID;?>');</script>

    And then the JavaScript function, locale2contact:

        var element_id = ''; // define global variables

        // Receives the cookie locale and instance id from view.php
        function locale2contact(locale, instance_id) {
            var details = '';
            this.element_id = instance_id; // assign the instance id to the global variable
            // alert(instance_id); // check whether we got the instance id from the view
            if (locale == 'gb')
                details = 'UK Contact Details<br>' + build_dropdown(locale);
            else if (locale == 'fr')
                details = 'French Contact Details<br>' + build_dropdown(locale);
            else if (locale == 'be')
                details = 'Belgian Contact Details<br>' + build_dropdown(locale);
            else
                details = 'Unknown Contact Detail';
            writeContactInfo(details);
        }

        // Builds the drop-down menu, pre-selecting the option from the cookie
        function build_dropdown(locale) {
            var dropdown = '<select onChange=changeContactInfo(this.options[selectedIndex].text)>';
            dropdown += '<option' + (locale == 'gb' ? ' selected' : '') + '>UK</option>';
            dropdown += '<option' + (locale == 'be' ? ' selected' : '') + '>Belgium</option>';
            dropdown += '</select>';
            return dropdown;
        }

        // Not a smart way here: once people use the drop-down box, it re-selects
        // the drop-down menu and resets the contact info
        function changeContactInfo(selected) {
            var details = '';
            if (selected == 'UK')
                details = 'UK Contact Details<br>' + build_dropdown2(selected);
            else if (selected == 'fr')
                details = 'French Contact Details<br>' + build_dropdown2(selected);
            else if (selected == 'Belgium')
                details = 'Belgian Contact Details<br>' + build_dropdown2(selected);
            else
                details = 'Unknown Contact Detail';
            writeContactInfo(details);
        }

        // Builds the drop-down menu again after a selection
        function build_dropdown2(selected) {
            var dropdown = '<select onChange=changeContactInfo(this.options[selectedIndex].text)>';
            dropdown += '<option' + (selected == 'UK' ? ' selected' : '') + '>UK</option>';
            dropdown += '<option' + (selected == 'Belgium' ? ' selected' : '') + '>Belgium</option>';
            dropdown += '</select>';
            return dropdown;
        }

        // Back to the view
        function writeContactInfo(details) {
            document.getElementById(this.element_id).innerHTML = details; // update the instance field in the view
        }

    The JavaScript is not efficient. As you can see, I have two nearly duplicate functions handling events. Users go to the page, the widget reads the cookie variable to display the contact info (locale2contact) and pre-select the drop-down menu (build_dropdown). If users select from the drop-down menu, the displayed contact info changes (changeContactInfo), and then I need to rebuild the drop-down menu with the user's previous selection (build_dropdown2). I am looking for best practices for adding this functionality to a RightNow widget. Thank you. I really do not like the way I am doing it now. It works, but the code looks bad.
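
    One way to collapse the duplication, sketched below under the assumption that writeContactInfo and the cookie handling stay as in the question: carry a locale code in the option values so the cookie path and the onchange path share a single builder and a single details table (the locales map and the value attributes are illustrative additions, not part of the original widget):

        // Sketch: one lookup table drives both the initial render and later changes.
        var locales = {
            gb: { label: 'UK',      details: 'UK Contact Details<br>' },
            fr: { label: 'France',  details: 'French Contact Details<br>' },
            be: { label: 'Belgium', details: 'Belgian Contact Details<br>' }
        };

        function build_dropdown(selected) {
            var html = '<select onchange="changeContactInfo(this.value)">';
            for (var code in locales) {
                html += '<option value="' + code + '"' +
                        (code === selected ? ' selected' : '') + '>' +
                        locales[code].label + '</option>';
            }
            return html + '</select>';
        }

        function changeContactInfo(code) {
            var entry = locales[code];
            var details = entry ? entry.details : 'Unknown Contact Detail';
            writeContactInfo(details + build_dropdown(code));
        }

        // locale2contact then reduces to a single call: changeContactInfo(locale);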

    Read the article

  • Strange behaviour when simply adding strings in Lazarus - FreePascal

    - by linkcharger
    The program has several "encryption" algorithms. This one should blockwise reverse the input: "He|ll|o " becomes "o |ll|He" (block length of 2). I add two strings, in this case appending the result string to the current "block" string and making that the result. When I add the result first and then the block it works fine and gives me back the original string. But when I try to reverse the order it just gives me the last "block". Several other functions that are used for "rotation" are above.

        // amount of blocks
        function amBl(i1: integer; i2: integer): integer;
        begin
          if (i1 mod i2) <> 0 then
            result := (i1 div i2)
          else
            result := (i1 div i2) - 1;
        end;

        // calculation of block length
        function calcBl(keyStr: string): integer;
        var
          i: integer;
        begin
          result := 0;
          for i := 1 to Length(keyStr) do
          begin
            result := (result + ord(keyStr[i])) mod 5;
            result := result + 2;
          end;
        end;

        // desperate try to add strings
        function append(s1, s2: string): string;
        begin
          insert(s2, s1, Length(s1) + 1);
          result := s1;
        end;

        function rotation(inStr, keyStr: string): string;
        var
          // array of chars -> string
          block, temp: string;
          // position in block variable
          posB: integer;
          // block length and block count variable
          bl, bc: integer;
          // null character as placeholder
          n: ansiChar;
        begin
          // calculating block length 2..6
          bl := calcBl(keyStr);
          setLength(block, bl);
          result := '';
          temp := '';
          {n := #00;}
          for bc := 0 to amBl(Length(inStr), bl) do
          begin
            // filling block with chars starting from back of virtual block (in inStr)
            for posB := 1 to bl do
            begin
              block[posB] := inStr[bc * bl + posB];
              {if inStr[bc * bl + posB] = ' ' then block[posB] := n;}
            end;
            // adding the block in front of the existing result string
            temp := result;
            result := block + temp;
            //result := append(block,temp);
            //result := concat(block,temp);
          end;
        end;

    (Full code: http://pastebin.com/6Uarerhk) After all the loops "result" has the right value, but in the last step (between "result := block + temp" and the "end;" of the function) "block" replaces the content of "result" with itself completely; it doesn't add result at the end anymore. And as you can see I even used a temp variable to try to work around that, which doesn't change anything.
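
    A hedged observation: with reference-counted AnsiStrings, `block + temp` can hand back a reference to `block` itself when `temp` is empty, and if a later in-place write to `block[posB]` does not trigger copy-on-write, the accumulated `result` is silently overwritten too. A defensive sketch of the loop body using Copy() to force a fresh string value (same bl/bc/posB bookkeeping as the original; this is a workaround, not a diagnosis of the compiler mode in use):

        for bc := 0 to amBl(Length(inStr), bl) do
        begin
          for posB := 1 to bl do
            block[posB] := inStr[bc * bl + posB];
          // Copy() always returns a new string, so later writes into block[]
          // cannot alias the string accumulated in result.
          result := Copy(block, 1, bl) + result;
        end;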

    Read the article

  • Java, LDAP: Make it not ignore blank passwords?

    - by Steve
    I'm maintaining some legacy Java LDAP code. I know next to nothing about LDAP. The program below basically just sends the userid and password to the LDAP server and receives notification back if the credentials are good. If so, it prints out the LDAP attributes received from the LDAP server; if not, it prints out an exception. All works well if a bad password is given: an "invalid credentials" exception gets thrown. However, if a blank password is sent to the LDAP server, authentication will still happen and LDAP attributes will still be returned. Is this unhappy situation due to the LDAP server allowing blank passwords, or does the code below need to be adjusted so that a blank password gets fed to the LDAP server in such a way that it will get rejected? I do have data validation in place; I took it off in a testing environment to solve another issue and noticed this problem. I would prefer not to have this problem lurking underneath the data validation. Thanks much in advance for any information.

        import javax.naming.*;
        import javax.naming.directory.*;
        import java.util.*;
        import java.sql.*;

        public class LDAPTEST {
            public static void main(String args[]) {
                String lcf = "com.sun.jndi.ldap.LdapCtxFactory";
                String ldapurl = "ldaps://ldap-cit.smew.acme.com:636/o=acme.com";
                String loginid = "George.Jetson";
                String password = "";
                DirContext ctx = null;
                Hashtable env = new Hashtable();
                Attributes attr = null;
                Attributes resultsAttrs = null;
                SearchResult result = null;
                NamingEnumeration results = null;
                int iResults = 0;
                int iAttributes = 0;

                env.put(Context.INITIAL_CONTEXT_FACTORY, lcf);
                env.put(Context.PROVIDER_URL, ldapurl);
                env.put(Context.SECURITY_PROTOCOL, "ssl");
                env.put(Context.SECURITY_AUTHENTICATION, "simple");
                env.put(Context.SECURITY_PRINCIPAL, "uid=" + loginid + ",ou=People,o=acme.com");
                env.put(Context.SECURITY_CREDENTIALS, password);

                try {
                    ctx = new InitialDirContext(env);
                    attr = new BasicAttributes(true);
                    attr.put(new BasicAttribute("uid", loginid));
                    results = ctx.search("ou=People", attr);
                    while (results.hasMore()) {
                        result = (SearchResult) results.next();
                        resultsAttrs = result.getAttributes();
                        for (NamingEnumeration enumAttributes = resultsAttrs.getAll(); enumAttributes.hasMore();) {
                            Attribute a = (Attribute) enumAttributes.next();
                            System.out.println("attribute: " + a.getID() + " : " + a.get().toString());
                            iAttributes++;
                        } // end for loop
                        iResults++;
                    } // end while loop
                    System.out.println("Records == " + iResults + " Attributes: " + iAttributes);
                } // end try
                catch (Exception e) {
                    e.printStackTrace();
                }
            } // end function main()
        } // end class LDAPTEST
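
    A hedged note on the likely cause: LDAP defines an "unauthenticated bind" (RFC 4513), in which a simple bind with a non-empty DN and an empty password succeeds as an anonymous session on many servers. So this is standard server behaviour rather than a bug in the JNDI code, and the usual client-side defence is to refuse to bind at all when the password is blank, e.g.:

        // Guard against the LDAP "unauthenticated bind": a DN plus an empty
        // password binds anonymously on many servers, so reject it up front.
        if (password == null || password.trim().length() == 0) {
            throw new javax.naming.AuthenticationException(
                    "Blank password rejected: would result in an anonymous bind");
        }
        env.put(Context.SECURITY_CREDENTIALS, password);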

    Read the article

  • Defend PHP; convince me it isn't horrible

    - by Jason L
    I made a tongue-in-cheek comment in another question thread calling PHP a terrible language and it got down-voted like crazy. Apparently there are lots of people here who love PHP. So I'm genuinely curious. What am I missing? What makes PHP a good language? Here are my reasons for disliking it:

    - PHP has inconsistent naming of built-in and library functions. Predictable naming patterns are important in any design.
    - PHP has inconsistent parameter ordering of built-in functions, e.g. array_map vs. array_filter, which is annoying in the simple cases and raises all sorts of unexpected behaviour or worse (a concrete comparison follows this list).
    - The PHP developers constantly deprecate built-in functions and lower-level functionality. A good example is when they deprecated pass-by-reference for functions. This created a nightmare for anyone doing, say, function callbacks.
    - A lack of consideration in redesign. The above deprecation eliminated the ability to, in many cases, provide default keyword values for functions. They fixed this in PHP 5, but they deprecated the pass-by-reference in PHP 4!
    - Poor execution of namespaces (formerly no namespaces at all). Now that namespaces exist, what do we use as the dereference character? Backslash! The character used universally for escaping, even in PHP!
    - Overly-broad implicit type conversion leads to bugs. I have no problem with implicit conversions of, say, float to integer or back again. But PHP (last I checked) will happily attempt to magically convert an array to an integer.
    - Poor recursion performance. Recursion is a fundamentally important tool for writing in any language; it can make complex algorithms far simpler. Poor support is inexcusable.
    - Functions are case insensitive. I have no idea what they were thinking on this one. A programming language is a way to specify behavior to both a computer and a reader of the code without ambiguity. Case insensitivity introduces much ambiguity.
    - PHP encourages (practically requires) a coupling of processing with presentation. Yes, you can write PHP that doesn't do so, but it's actually easier to write code in the incorrect (from a sound design perspective) manner.
    - PHP performance is abysmal without caching. Does anyone sell a commercial caching product for PHP? Oh, look, the designers of PHP do.
    - Worst of all, PHP convinces people that designing web applications is easy. And it does indeed make much of the effort involved much easier. But the fact is, designing a web application that is both secure and efficient is a very difficult task. By convincing so many to take up programming, PHP has taught an entire subgroup of programmers bad habits and bad design. It's given them access to capabilities that they lack the understanding to use safely. This has led to PHP's reputation as being insecure. (However, I will readily admit that PHP is no more or less secure than any other web programming language.)

    What is it that I'm missing about PHP? I'm seeing an organically-grown, poorly-managed mess of a language that's spawning poor programmers. So convince me otherwise!
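
    To make the parameter-ordering complaint concrete, compare two calls that every PHP programmer meets early on (signatures as documented in the PHP manual):

        // array_map takes the callback first...
        $upper = array_map('strtoupper', array('a', 'b', 'c'));

        // ...while array_filter takes the array first.
        $nonEmpty = array_filter(array('a', '', 'b'), 'strlen');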

    Read the article

  • XSLT split text and preserve HTML tags

    - by Lycaon
    I have an XML file that has a description node:

        <config>
          <desc>A <b>first</b> sentence here. The second sentence with some link <a href="myurl">The link</a>. The <u>third</u> one.</desc>
        </config>

    I am trying to split the sentences using the dot as separator while at the same time keeping the eventual HTML tags in the HTML output. What I have so far is a template that splits the description, but the HTML tags are lost in the output due to the normalize-space and substring-before functions. My current template is given below:

        <xsl:template name="output-tokens">
          <xsl:param name="sourceText" />
          <!-- Force a . at the end -->
          <xsl:variable name="newlist" select="concat(normalize-space($sourceText), ' ')" />
          <!-- Check if we really have a point at the end -->
          <xsl:choose>
            <xsl:when test="contains($newlist, '.')">
              <!-- Find the first . in the string -->
              <xsl:variable name="first" select="substring-before($newlist, '.')" />
              <!-- Get the remaining text -->
              <xsl:variable name="remaining" select="substring-after($newlist, '.')" />
              <!-- Check if our string is not in fact a . or an empty string -->
              <xsl:if test="normalize-space($first)!='.' and normalize-space($first)!=''">
                <p><xsl:value-of select="normalize-space($first)" />.</p>
              </xsl:if>
              <!-- Recursively apply the template for the remaining text -->
              <xsl:if test="$remaining">
                <xsl:call-template name="output-tokens">
                  <xsl:with-param name="sourceText" select="$remaining" />
                </xsl:call-template>
              </xsl:if>
            </xsl:when>
            <!-- If no . was found -->
            <xsl:otherwise>
              <p>
                <!-- If the string does not contain a . then display the text but avoid displaying empty strings -->
                <xsl:if test="normalize-space($sourceText)!=''">
                  <xsl:value-of select="normalize-space($sourceText)" />.
                </xsl:if>
              </p>
            </xsl:otherwise>
          </xsl:choose>
        </xsl:template>

    and I am using it in the following manner:

        <xsl:template match="config">
          <xsl:call-template name="output-tokens">
            <xsl:with-param name="sourceText" select="desc" />
          </xsl:call-template>
        </xsl:template>

    The expected output is:

        <p>A <b>first</b> sentence here.</p>
        <p>The second sentence with some link <a href="myurl">The link</a>.</p>
        <p>The <u>third</u> one.</p>
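
    The root of the problem is that substring-before() and normalize-space() operate on the string value of desc, which throws away the child elements. Any fix therefore has to work at the node level, where xsl:copy-of preserves markup; splitting at sentence boundaries additionally requires recursing over the desc/node() children, which is omitted here, but the key building block is sketched below (an illustrative fragment, not a drop-in replacement for the template above):

        <!-- value-of flattens to text: the <b>, <a> and <u> children are lost -->
        <xsl:value-of select="desc" />

        <!-- copy-of copies the child nodes themselves, markup included -->
        <p><xsl:copy-of select="desc/node()" /></p>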

    Read the article

  • Accessing Layout Items from inside a Widget's AppWidgetProvider

    - by cam4mav
    I am starting to go insane trying to figure this out. It seems like it should be very easy; I'm starting to wonder if it's possible. What I am trying to do is create a home screen widget that only contains an ImageButton. When it is pressed, the idea is to change some setting (like the wi-fi toggle) and then change the button's image. I have the ImageButton declared like this in my main.xml:

        <ImageButton android:id="@+id/buttonOne"
            android:src="@drawable/button_normal_ringer"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_gravity="center" />

    My AppWidgetProvider class is named ButtonWidget. Note that the RemoteViews instance is a locally stored variable; this allowed me to get access to the RemoteViews layout elements... or so I thought.

        @Override
        public void onUpdate(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds) {
            remoteViews = new RemoteViews(context.getPackageName(), R.layout.main);
            Intent active = new Intent(context, ButtonWidget.class);
            active.setAction(VIBRATE_UPDATE);
            active.putExtra("msg", "TESTING");
            PendingIntent actionPendingIntent = PendingIntent.getBroadcast(context, 0, active, 0);
            remoteViews.setOnClickPendingIntent(R.id.buttonOne, actionPendingIntent);
            appWidgetManager.updateAppWidget(appWidgetIds, remoteViews);
        }

        @Override
        public void onReceive(Context context, Intent intent) {
            // v1.5 fix that doesn't call onDelete Action
            final String action = intent.getAction();
            Log.d("onReceive", action);
            if (AppWidgetManager.ACTION_APPWIDGET_DELETED.equals(action)) {
                final int appWidgetId = intent.getExtras().getInt(
                        AppWidgetManager.EXTRA_APPWIDGET_ID,
                        AppWidgetManager.INVALID_APPWIDGET_ID);
                if (appWidgetId != AppWidgetManager.INVALID_APPWIDGET_ID) {
                    this.onDeleted(context, new int[] { appWidgetId });
                }
            } else {
                // check if our Action was called
                if (intent.getAction().equals(VIBRATE_UPDATE)) {
                    String msg = "null";
                    try {
                        msg = intent.getStringExtra("msg");
                    } catch (NullPointerException e) {
                        Log.e("Error", "msg = null");
                    }
                    Log.d("onReceive", msg);
                    if (remoteViews != null) {
                        Log.d("onReceive", "" + remoteViews.getLayoutId());
                        remoteViews.setImageViewResource(R.id.buttonOne, R.drawable.button_pressed_ringer);
                        Log.d("onReceive", "tried to switch");
                    } else {
                        Log.d("F!", "--naughty language used here!!!--");
                    }
                }
                super.onReceive(context, intent);
            }
        }

    So, I've been testing this and the onReceive method works great; I'm able to send notifications and all sorts of stuff (removed from the code for ease of reading). The one thing I can't do is change any properties of the view elements. To try and fix this, I made the RemoteViews variable local and static. Using logs I was able to see that when multiple instances of the app are on screen, they all refer to the one instance of RemoteViews: perfect for what I'm trying to do. The trouble is in trying to change the image of the ImageButton. I can do this from within the onUpdate method using:

        remoteViews.setImageViewResource(R.id.buttonOne, R.drawable.button_pressed_ringer);

    That doesn't do me any good, though, once the widget is created. For some reason, even though it's inside the same class, being inside the onReceive method makes that line not work. That line used to throw a NullPointerException, as a matter of fact, until I changed the variable to static. Now it passes the null test, refers to the same layoutId as it did at the start, reads the line, but does nothing. It's like the code isn't even there; it just keeps chugging along. SO......

    Is there any way to modify layout elements from within a widget after the widget has been created!? I want to do this based on the environment, not with a configuration activity launch. I've been looking at various questions and this seems to be an issue that really hasn't been solved, such as link text and link text. Oh, and for anyone who finds this and wants a good starting tutorial for widgets, this one is easy to follow (though a bit old, it gets you comfortable with widgets): .pdf link text. Hopefully someone can help here. I kinda have the feeling that this is illegal and there is a different way to go about it. I would LOVE to be told another approach!!!! Thanks
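
    A RemoteViews caveat that explains the symptom: a RemoteViews object is not a live handle to the widget on screen, it is a list of view operations that only takes effect when pushed through AppWidgetManager.updateAppWidget(). Mutating a cached instance inside onReceive() therefore changes nothing visually. A sketch of the usual approach, rebuilding the views and re-pushing them (the layout and drawable names are the question's own; the ComponentName lookup is a standard way to find all instances of this widget):

        // Sketch: inside onReceive(), rebuild the RemoteViews and push the update.
        RemoteViews views = new RemoteViews(context.getPackageName(), R.layout.main);
        views.setImageViewResource(R.id.buttonOne, R.drawable.button_pressed_ringer);

        AppWidgetManager manager = AppWidgetManager.getInstance(context);
        ComponentName widget = new ComponentName(context, ButtonWidget.class);
        manager.updateAppWidget(manager.getAppWidgetIds(widget), views);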

    Read the article

  • Real Excel Templates I

    - by Tim Dexter
    As promised, I'm starting to document the new Excel templates that I teased you all with a few weeks back. Leslie is buried in 11g documentation and will not get to officially documenting the templates for a while. I'll do my best to be professional and not ramble on about this and that, although the weather here has finally turned and it's 'scorchio' here in Colorado today. Maybe our stand of aspen will finally come into leaf ... but I digress.

    Preamble

    These templates are not actually that new; I helped in a small way to develop them a few years back with Excel 'meistress' Shirley for a company that was trying to use the Report Manager (RR) Excel FSG outputs under EBS 12. The functionality they needed was just not there in the RR FSG templates; those templates are actually XSL that is created from the RR Excel template builder and fed to BIP for processing. Think of Excel from our RTF templates and you'll be there, i.e. not really Excel but HTML masquerading as Excel. Although still under controlled release in EBS, they have now made their way to the standalone release and are willing to share their Excel goodness. You get everything you have with the Excel Analyzer templates plus so much more. Therein lies a question: what will happen to the Analyzer templates? My understanding is that both will come together into a single Excel template format some time in the post-11g release world. The new XLSX format for Excel 2007/10 is also in the mix, so watch this space.

    What more do these templates offer?

    Well, you can structure data in the Excel output. Similar to RTF templates, you can create sheets of data that have master-detail relationships. Although the Analyzer templates can do this, you have to get into macros, whereas BIP will do this all for you. You can also use native XSL functions in your data to manipulate it prior to rendering. BIP functions are not currently supported. The most impressive, for me at least, is the sheet 'bursting': you can split your hierarchical data across multiple sheets and dynamically name those sheets. Finally, you of course still get all the native Excel functionality.

    Pre-reqs

    - You must be on 10.1.3.4.1 plus the latest rollup patch, 9546699. You can patch up a BIP instance running with OBIEE, no problem.
    - You need Excel 2000 or above to build the templates.
    - Some patience - there is no Excel template builder for these new templates, so it's all going to have to be done by hand. It's not that tough but can get a little 'fiddly'.
    - You cannot test the template from Excel; it has to be deployed and then run.

    Limitations

    The new templates are definitely superior to the Analyzer templates, but there are a few limitations:

    - Re-grouping is not supported. You can only follow a data hierarchy, not bend it to your will, unless you want to get into macros.
    - No support for BIP functions. The templates support native XSL functions only.
    - No template builder.

    Getting Started

    The templates make use of named cells and groups of cells to allow BIP to find the insertion point for data points. They also use a hidden sheet to store calculation mappings from named cells to XML data elements. To start with, in the great BIP tradition, we need some sample XML data. Because I wanted to show the master-detail output, we need some hierarchical data. If you have not yet gotten into the data templates, now is a good time; I wrote a post a while back starting from the simple to the more complex. They generate ideal data sets for these templates.
    I'm working with the following data set:

        <EMPLOYEES>
          <LIST_G_DEPT>
            <G_DEPT>
              <DEPARTMENT_ID>10</DEPARTMENT_ID>
              <DEPARTMENT_NAME>Administration</DEPARTMENT_NAME>
              <LIST_G_EMP>
                <G_EMP>
                  <EMPLOYEE_ID>200</EMPLOYEE_ID>
                  <EMP_NAME>Jennifer Whalen</EMP_NAME>
                  <EMAIL>JWHALEN</EMAIL>
                  <PHONE_NUMBER>515.123.4444</PHONE_NUMBER>
                  <HIRE_DATE>1987-09-17T00:00:00.000-06:00</HIRE_DATE>
                  <SALARY>4400</SALARY>
                </G_EMP>
              </LIST_G_EMP>
              <TOTAL_EMPS>1</TOTAL_EMPS>
              <TOTAL_SALARY>4400</TOTAL_SALARY>
              <AVG_SALARY>4400</AVG_SALARY>
              <MAX_SALARY>4400</MAX_SALARY>
              <MIN_SALARY>4400</MIN_SALARY>
            </G_DEPT>
            ...
          </LIST_G_DEPT>
        </EMPLOYEES>

    Simple enough to follow, and bread-and-butter stuff for an RTF template.

    Building the Template

    For an Excel template we need to start by thinking about how we want to render the data. Come up with a sample output in Excel. It's all dummy data, nothing marked up yet, with one row of data for each level. I have the department name and then a repeating row for the employees. You can apply Excel formatting to the layout. The total is going to be derived from a data element. We'll get to Excel functions later.

    Marking Up Cells

    Next we need to start marking up the cells with custom names to map them to data elements. The cell names need to follow a specific format:

    - For data groupings: XDO_GROUP_?group_name?
    - For data elements: XDO_?element_name?

    Notice the question-mark delimiter; the group_name and element_name are case sensitive. The next step is to find how to name cells; the easiest method is to highlight the cell and then type in the name. You can also use the Name Manager dialog; I use 2007 and it's available on the ribbon under the Formulas section. Go through the process of naming all the cells for the element values you have, using my data set from above. You should end up with something like this in your Name Manager dialog. You can correct any mistakes you might have made through this dialog.

    Creating Groups

    In the image above you can see there are a couple of named group cells. To create these it's a simple case of highlighting the cells that make up the group and then naming them. For the EMP group, highlight the employee row and then type in the name XDO_GROUP_?G_EMP?. Notice the 10,000 total is outside of the G_EMP group; it's actually named XDO_?TOTAL_SALARY?, a query-calculated value. For the department group, we need to include the department name cell and the sub EMP grouping and name it XDO_GROUP_?G_DEPT?. Notice the 10,000 total is included in the G_DEPT group; this will ensure it repeats at the department level. Lastly, we do need to include a special sheet in the workbook. We will not have anything meaningful in there for now, but it needs to be present. Create a new sheet and name it XDO_METADATA. The name is important, as the BIP rendering engine will be looking for it. For our current example we do not need anything other than the required stuff in our XDO_METADATA sheet, but it must be present. Here's what I have: the only cell that is important is the 'Data Constraints:' cell; the rest is optional. To save curious users getting distracted, it's easy enough to hide the metadata sheet.

    Deploying & Running Templates

    We should now have a usable Excel template. Loading it into a report is easy enough using the browser UI, just like an RTF template; set the template type to Excel. You will now be able to run the report and hopefully get something like this. You will not get the red highlighting; that's just some conditional formatting I added to the template using Excel functionality.
    Your dates are probably going to look raw too. I got around this for now using an Excel function on the cell:

        =--REPLACE(SUBSTITUTE(E8,"T"," "),LEN(E8)-6,6,"")

    Google to the rescue on that one. Try some other stuff out. To avoid constantly loading the template through the UI: if you have BIP running locally or you can access the reports repository, then once you have loaded the template the first time, just save the template directly into the report folder. I have put together a sample report using a sample data set, available here. Just drop the XML data file, EmpbyDeptExcelData.xml, into the 'demo files' folder and you should be good to go. That's the basics; next we'll start using some XSL functions in the template and move on to the 'bursting' across sheets.

    Read the article

  • Parallelism in .NET – Part 4, Imperative Data Parallelism: Aggregation

    - by Reed
    In the article on simple data parallelism, I described how to perform an operation on an entire collection of elements in parallel. Often, this is not adequate, as the parallel operation is going to be performing some form of aggregation. Simple examples of this might include taking the sum of the results of processing a function on each element in the collection, or finding the minimum of the collection given some criteria. This can be done using the techniques described in simple data parallelism; however, special care needs to be taken to synchronize the shared data appropriately. The Task Parallel Library has tools to assist in this synchronization.

    The main issue with aggregation when parallelizing a routine is that you need to handle synchronization of data, since multiple threads will need to write to a shared portion of data. Suppose, for example, that we wanted to parallelize a simple loop that looked for the minimum value within a dataset:

        double min = double.MaxValue;
        foreach (var item in collection)
        {
            double value = item.PerformComputation();
            min = System.Math.Min(min, value);
        }

    This seems like a good candidate for parallelization, but there is a problem here. If we just wrap this into a call to Parallel.ForEach, we'll introduce a critical race condition and get the wrong answer. Let's look at what happens here:

        // Buggy code! Do not use!
        double min = double.MaxValue;
        Parallel.ForEach(collection, item =>
        {
            double value = item.PerformComputation();
            min = System.Math.Min(min, value);
        });

    This code has a fatal flaw: min will be checked, then set, by multiple threads simultaneously. Two threads may perform the check at the same time and set the wrong value for min. Say we get a value of 1 in thread 1, and a value of 2 in thread 2, and these two elements are the first two to run. If both hit the min check line at the same time, both will determine that min should change, to 1 and 2 respectively. If element 1 happens to set the variable first, then element 2 sets the min variable, and we'll detect a min value of 2 instead of 1. This can lead to wrong answers.

    Unfortunately, fixing this, with the Parallel.ForEach call we're using, would require adding locking. We would need to rewrite this like:

        // Safe, but slow
        double min = double.MaxValue;
        // Make a "lock" object
        object syncObject = new object();
        Parallel.ForEach(collection, item =>
        {
            double value = item.PerformComputation();
            lock (syncObject)
                min = System.Math.Min(min, value);
        });

    This will potentially add a huge amount of overhead to our calculation. Since we can potentially block while waiting on the lock for every single iteration, we will most likely slow this down to where it is actually quite a bit slower than our serial implementation. The problem is the lock statement – any time you use lock(object), you're almost assuring reduced performance in a parallel situation.
    This leads to two observations I'll make:

    - When parallelizing a routine, try to avoid locks.
    - That being said: always add any and all required synchronization to avoid race conditions.

    These two observations tend to be opposing forces – we often need to synchronize our algorithms, but we also want to avoid the synchronization when possible. Looking at our routine, there is no way to directly avoid this lock, since each element is potentially being run on a separate thread, and this lock is necessary in order for our routine to function correctly every time.

    However, this isn't the only way to design this routine to implement this algorithm. Realize that, although our collection may have thousands or even millions of elements, we have a limited number of Processing Elements (PE). Processing Element is the standard term for a hardware element which can process and execute instructions. This typically is a core in your processor, but many modern systems have multiple hardware execution threads per core. The Task Parallel Library will not execute the work for each item in the collection as a separate work item. Instead, when Parallel.ForEach executes, it will partition the collection into larger "chunks" which get processed on different threads via the ThreadPool. This helps reduce the threading overhead and helps the overall speed. In general, the Parallel class will only use one thread per PE in the system.

    Given the fact that there are typically fewer threads than work items, we can rethink our algorithm design. We can parallelize our algorithm more effectively by approaching it differently. Because the basic aggregation we are doing here (Min) is commutative, we do not need to perform it in a given order. We knew this to be true already – otherwise, we wouldn't have been able to parallelize this routine in the first place. With this in mind, we can treat each thread's work independently, allowing each thread to serially process many elements with no locking, then, after all the threads are complete, "merge" together the results.

    This can be accomplished via a different set of overloads in the Parallel class: Parallel.ForEach<TSource,TLocal>. The idea behind these overloads is to allow each thread to begin by initializing some local state (TLocal). The thread will then process an entire set of items in the source collection, providing that state to the delegate which processes an individual item. Finally, at the end, a separate delegate is run which allows you to handle merging that local state into your final results.

    To rewrite our routine using Parallel.ForEach<TSource,TLocal>, we need to provide three delegates instead of one. The most basic version of this function is declared as:

        public static ParallelLoopResult ForEach<TSource, TLocal>(
            IEnumerable<TSource> source,
            Func<TLocal> localInit,
            Func<TSource, ParallelLoopState, TLocal, TLocal> body,
            Action<TLocal> localFinally
        )

    The first delegate (the localInit argument) is defined as Func<TLocal>. This delegate initializes our local state. It should return some object we can use to track the results of a single thread's operations.

    The second delegate (the body argument) is where our main processing occurs, although now, instead of being an Action<T>, we actually provide a Func<TSource, ParallelLoopState, TLocal, TLocal> delegate.
    This delegate will receive three arguments: our original element from the collection (TSource), a ParallelLoopState which we can use for early termination, and the instance of our local state we created (TLocal). It should do whatever processing you wish to occur per element, then return the value of the local state after processing is completed.

    The third delegate (the localFinally argument) is defined as Action<TLocal>. This delegate is passed our local state after it's been processed by all of the elements this thread will handle. This is where you can merge your final results together. This may require synchronization, but now, instead of synchronizing once per element (potentially millions of times), you'll only have to synchronize once per thread, which is an ideal situation.

    Now that I've explained how this works, let's look at the code:

        // Safe, and fast!
        double min = double.MaxValue;
        // Make a "lock" object
        object syncObject = new object();
        Parallel.ForEach(
            collection,
            // First, we provide a local state initialization delegate.
            () => double.MaxValue,
            // Next, we supply the body, which takes the original item, loop state,
            // and local state, and returns a new local state
            (item, loopState, localState) =>
            {
                double value = item.PerformComputation();
                return System.Math.Min(localState, value);
            },
            // Finally, we provide an Action<TLocal>, to "merge" results together
            localState =>
            {
                // This requires locking, but it's only once per used thread
                lock (syncObject)
                    min = System.Math.Min(min, localState);
            }
        );

    Although this is a bit more complicated than the previous version, it is now both thread-safe and has minimal locking. This same approach can be used by Parallel.For, although now it's Parallel.For<TLocal>. When working with Parallel.For<TLocal>, you use the same triplet of delegates, with the same purpose and results.

    Also, many times, you can completely avoid locking by using a method of the Interlocked class to perform the final aggregation in an atomic operation. The MSDN example demonstrating this same technique using Parallel.For uses the Interlocked class instead of a lock, since it is doing a sum operation on a long variable, which is possible via Interlocked.Add.

    By taking advantage of local state, we can use the Parallel class methods to parallelize algorithms such as aggregation, which, at first, may seem like poor candidates for parallelization. Doing so requires careful consideration, and often requires a slight redesign of the algorithm, but the performance gains can be significant if handled in a way that avoids excessive synchronization.
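
    To make that last point concrete, here is a small sketch (not from the original article) of a parallel sum over a hypothetical IEnumerable<long>, where Interlocked.Add makes the per-thread merge atomic and no lock object is needed at all:

        // Sketch: parallel sum with per-thread partial totals, merged atomically.
        long total = 0;
        Parallel.ForEach(
            collection,                                        // assumed IEnumerable<long>
            () => 0L,                                          // per-thread partial total
            (item, loopState, localTotal) => localTotal + item,
            localTotal => Interlocked.Add(ref total, localTotal)); // once per thread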

    Read the article

  • Pluralsight Meet the Author Podcast on Structuring JavaScript Code

    - by dwahlin
    I had the opportunity to talk with Fritz Onion from Pluralsight about one of my recent courses, titled Structuring JavaScript Code, for one of their Meet the Author podcasts. We talked about why JavaScript patterns are important for building more re-useable and maintainable apps, pros and cons of different patterns, and how to go about picking a pattern as a project is started. The course provides a solid walk-through of converting what I call "Function Spaghetti Code" into more modular code that's easier to maintain, more re-useable, and less susceptible to naming conflicts. Patterns covered in the course include the Prototype Pattern, Revealing Module Pattern, and Revealing Prototype Pattern, along with several other tips and techniques that can be used.

    Meet the Author: Dan Wahlin on Structuring JavaScript Code

    The transcript from the podcast is shown below:

    [Fritz] Hello, this is Fritz Onion with another Pluralsight author interview. Today we're talking with Dan Wahlin about his new course, Structuring JavaScript Code. Hi, Dan, it's good to have you with us today.

    [Dan] Thanks for having me, Fritz.

    [Fritz] So, Dan, your new course, which came out in December of 2011, called Structuring JavaScript Code, goes into several patterns of usage in JavaScript as well as ways of organizing your code, and what struck me about it was all the different techniques you described for encapsulating your code. I was wondering if you could give us just a little insight into what your motivation was for creating this course and sort of why you decided to write it and record it.

    [Dan] Sure. So, I got started with JavaScript back in the mid 90s. In fact, back in the days when browsers that most people haven't heard of were out and we had JavaScript but it wasn't great. I was on a project in the late 90s that was heavy, heavy JavaScript and we pretty much did what I call in the course function spaghetti code, where you just have function after function, there's no rhyme or reason to how those functions are structured, they just kind of flow, and it's a little bit hard to do maintenance on it; you really don't get a lot of reuse from an object perspective. And so coming from an object-oriented background in Java and C#, I wanted to put something together that highlighted kind of the new way, if you will, of writing JavaScript, because most people start out just writing functions and there's nothing wrong with that, it works, but it's definitely not a real reusable solution. So the course is really all about how to move from just kind of function after function after function to the world of more encapsulated code and more reusable and hopefully better maintenance in the process.

    [Fritz] So I am sure a lot of people have had similar experiences with their JavaScript code and will be looking forward to seeing what types of patterns you've put forth. Now, a couple I noticed in your course: one is you start off with the prototype pattern. Do you want to describe sort of what problem that solves and how you go about using it within JavaScript?

    [Dan] Sure. So, the patterns that are covered, such as the prototype pattern and the revealing module pattern just as two examples, you know, show these kind of three things that I harp on throughout the course of encapsulation, better maintenance, reuse, those types of things. The prototype pattern specifically, though, has a couple kind of pros over some of the other patterns, and that is the ability to extend your code without touching source code. And what I mean by that is, let's say you're writing a library that you know either other teammates or other people just out there on the Internet in general are going to be using. With the prototype pattern, you can actually write your code in such a way that we're leveraging the JavaScript prototype property, and by doing that now you can extend my code that I wrote without touching my source code script, or you can even override my code and perform some new functionality. Again, without touching my code. And so you get kind of the benefit of the, almost like inheritance or overriding in object-oriented languages, with this prototype pattern, and it makes it kind of attractive that way, definitely from a maintenance standpoint, because, you know, you don't want to modify a script I wrote, because I might roll out version 2 and now you'd have to track where you changed things, and it gets a little tricky. So with this you just override those pieces or extend them and get that functionality, and that's kind of some of the benefits that that pattern offers out of the box.

    [Fritz] And then the revealing module pattern, how does that differ from the prototype pattern and what problem does that solve differently?

    [Dan] Yeah, so the prototype pattern, and there's another one that's kind of really closely aligned with the revealing module pattern called the revealing prototype pattern, and it also uses the prototype keyword, but it's very similar to the one you just asked about, the revealing module pattern.

    [Fritz] Okay.

    [Dan] This is a really popular one out there. In fact, we did a project for Microsoft that was very, very heavy JavaScript. It was an HTML5 jQuery type app, and we used this pattern for most of the structure, if you will, for the JavaScript code. And what it does in a nutshell is allows you to get that encapsulation, so you have really a single function wrapper that wraps all your other child functions, but it gives you the ability to do public versus private members, and this is kind of a sort of debate out there on the web. Some people feel that all JavaScript code should just be directly accessible, and others kind of like to be able to hide their, truly their private stuff, and a lot of people do that. You just put an underscore in front of your field or your variable name or your function name, and that kind of is the de facto way to say, hey, this is private. With the revealing module pattern you can do the equivalent of what object-oriented languages do and actually have private members that you literally can't get to as an external consumer of the JavaScript code, and then you can expose only those members that you want to be public. Now, you don't get the benefit, though, of the prototype feature, which is: I can't easily extend the revealing-module-pattern type code. If you don't like something I'm doing, chances are you're probably going to have to tweak my code to fix that, because we're not leveraging prototyping. But in situations where you're writing apps that are very specific to a given target app, you know, it's not a library, it's not going to be used in other apps all over the place, it's a pattern I actually like a lot; it's very simple to get going, and then if you do like that public/private feature, it's available to you.

    [Fritz] Yeah, that's interesting. So it's almost, you can either go private by convention, just by using a standard naming convention, or you can actually enforce it.

    [Dan] Yeah, that's exactly right.

    [Fritz] So one of the things that I know I run across in JavaScript, and I'm curious to get your take on, is we do have all these different techniques of encapsulation, and each one is really quite different. When you're using closures versus simply, you know, referencing member variables and adding them to your objects, the syntax changes with each pattern, and the usage changes. So what would you recommend for people starting out in a brand new JavaScript project? Should they all sort of decide beforehand on what patterns they're going to stick to, or do you change it based on what part of the library you're working on? I know that's one of the points of confusion in this space.

    [Dan] Yeah, it's a great question. In fact, I just had a company ask me about that. So which one do I pick? And, of course, there's not one answer that fits all.

    [Fritz] Right.

    [Dan] So it really depends. What you just said is absolutely, in my opinion, correct, which is: I think, especially if you're on a team, or even if you're just an individual, a team of one, you should go through and pick out which pattern for this particular project you think is best. Now if it were me, here's kind of the way I think of it. If I were writing a, let's say, base library that several web apps are going to use, or even one, but I know that there are going to be some pieces that I'm not really sure on right now as I'm writing it, and I know people might want to hook in that and have some better extension points, then I would look at either the prototype pattern or the revealing prototype. Now, really just a real quick summation between the two: the revealing prototype also gives you that public/private stuff like the revealing module pattern does, whereas the prototype pattern does not, but both of the prototype patterns do give you the benefit of that extension or that hook capability. So, if I were writing a library where I need people to override things, or I'm not even sure what I need them to override, I want them to have that option, I'd probably pick one of the prototype patterns. If I'm writing some code that is very unique to the app, and it's kind of a one-off for this app, which is what I think a lot of people are kind of in that mode of, writing custom apps for customers, then my personal preference is the revealing module pattern. You could always go with the module pattern as well, which is very close, but I think the revealing module pattern's a little bit cleaner, and we go through that in the course and explain kind of the syntax there and the differences.

    [Fritz] Great, that makes a lot of sense.

    [Fritz] I appreciate you taking the time, Dan, and I hope everyone takes a chance to look at your course and sort of make these decisions for themselves in their next JavaScript project. Dan's course is Structuring JavaScript Code and it's available now in the Pluralsight library. So, thank you very much, Dan.

    [Dan] Thanks for having me again.
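
    For readers who want to see the shape Dan is describing, a minimal sketch of the revealing module pattern (a generic example, not taken from the course): private state lives in a closure, and only the names returned at the bottom are public.

        // Sketch: revealing module pattern.
        var counter = (function () {
            var count = 0;                       // truly private; unreachable from outside

            function increment() { count += 1; return count; }
            function reset() { count = 0; }

            // The "revealing" part: the public surface is declared in one place.
            return {
                increment: increment,
                reset: reset
            };
        }());

        counter.increment();   // 1
        // counter.count is undefined; the internal variable is not exposed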

    Read the article

  • We Convert your PSD into Xhtml

    - by Aditi
    Over the last few months we have been receiving a lot of inquiries for PSD-to-XHTML projects, while we were mainly focusing on custom WordPress, Magento, Drupal & Joomla projects. Now, looking at the demand, we are offering a PSD-to-XHTML/CSS service at an affordable price. We will also convert PSDs into any CMS, like WordPress, Drupal, Magento or Joomla. Our custom services will continue as they are. It is very convenient to get your design converted by our XHTML & CSS experts, and we assure a 24-hour delivery time. At JustSkins, we have a structured conversion model that works well for any kind of potentially enriched web business solution. Our customized slicing guidelines, besides W3C-approved XHTML and CSS code naming conventions, make us stand distinct from the competitors.

    Why should you let us do it for you?

    - W3C-compliant HTML/XHTML and CSS code
    - Well-structured and well-written code
    - Clean, hand-coded markup; no use of WYSIWYG
    - Fast turnaround time: design converted into XHTML/CSS in just one business day
    - Multi-browser accessible websites; cross-platform support
    - Excellent customer service
    - Affordable

    We at JustSkins are a team of efficient programmers with vast experience in templating for content management systems (CMS): Joomla, Drupal, WordPress and other open source technologies. Contact us today for your requirement!

    Read the article

  • New Fusion Community, Community Name Changes and Upcoming Webcasts

    - by cwarticki
    Check out the new MOS Customer Relationship Management (CRM) community. This community has been featured in marketing events and is one of the more active communities so far. Support has also renamed the Fusion HCM community (now Human Capital Management (HCM)) and the Technical – FA community (now Fusion Applications Technology) in order to standardize our naming convention. Finally, we have two upcoming webcasts:

    - 18-OCT-2012: Fusion Apps Security - User & Role Management using Oracle Identity Manager, featured in our Fusion Applications Technology community
    - 01-NOV-2012: Fusion Apps Security - Troubleshoot Data Role Issues, featured in our Fusion Applications Technology community

    Check out our new community. Attend our upcoming webcasts. Participate. Engage. Contribute.

    ~Chris

    Read the article

  • Don’t learn SSDT, learn about your databases instead

    - by jamiet
    Last Thursday I presented my session "Introduction to SSDT" at the SQL Supper event held at the offices of 7 Digital (loved the samosas, guys). I did my usual spiel: tour of the IDE, connected development, declarative database development, yadda yadda yadda... and at the end asked if there were any questions. One gentleman in attendance (sorry, can't remember your name) raised his hand and stated that by attempting to evangelise all of the features I'd missed the single biggest benefit of SSDT: that it can tell you stuff about your database that you didn't already know. I realised that he was dead right. SSDT allows you to import your whole database schema into a new project and it will instantly give you a list of errors and/or warnings pertaining to the objects in your database. Invalid references (e.g. a long-forgotten stored procedure that refers to a non-existent column), unnecessary 3-part naming, incorrect case usage, syntax errors... it'll tell you about all of 'em! Turn on static code analysis (this article shows you how) and you'll learn even more, such as any stored procedures that begin with "sp_", WHERE clauses that will kill performance, use of @@IDENTITY instead of SCOPE_IDENTITY(), use of deprecated syntax, implicit casts, etc. The list goes on and on. I urge you to download and install SSDT (it takes a few minutes, it's free, and you don't need SQL Server or Visual Studio pre-installed), start a new project, right-click on your new project, import from your database, and see what happens. You may be surprised what you discover. Let me know in the comments below what results you get (total number of objects, number of errors/warnings); I'd be interested to know! @Jamiet
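
    If you have never seen that particular warning class, here is a contrived T-SQL illustration of the @@IDENTITY point (table and column names invented for the example): @@IDENTITY returns the last identity value generated anywhere in the session, including by a trigger inserting into an audit table, while SCOPE_IDENTITY() is limited to the current scope, which is almost always what you meant.

        -- Flagged by static code analysis: a trigger on dbo.Orders that inserts
        -- into another identity table would silently change this result.
        INSERT INTO dbo.Orders (CustomerId) VALUES (42);
        SELECT @@IDENTITY AS NewOrderId;

        -- Preferred: scoped to the current batch/procedure.
        INSERT INTO dbo.Orders (CustomerId) VALUES (42);
        SELECT SCOPE_IDENTITY() AS NewOrderId;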

    Read the article

  • Time Warp

    - by Jesse
    It’s no secret that daylight savings time can wreak havoc on systems that rely heavily on dates. The system I work on is centered around recording dates and times, so naturally my co-workers and I have seen our fair share of date-related bugs. From time to time, however, we come across something that we haven't seen before. A few weeks ago the following error message started showing up in our logs:

    "The supplied DateTime represents an invalid time. For example, when the clock is adjusted forward, any time in the period that is skipped is invalid."

    This seemed very cryptic, especially since it was coming from areas of our application that are typically only concerned with capturing date-only values (no explicit time component) from the user, like reports that take a "start date" and "end date" parameter. For these types of parameters we just leave off the time component when capturing the date values, so midnight is used as a "placeholder" time. How is midnight an "invalid time"?

    Globalization Is Hard

    Over the last couple of years our software has been rolled out to users in several countries outside of the United States, including Brazil. Brazil begins and ends daylight savings time at midnight on pre-determined days of the year. On October 16, 2011 at midnight, many areas in Brazil began observing daylight savings time, at which time their clocks were set forward one hour. This means that at the instant it became midnight on October 16, it actually became 1:00 AM, so any time between 12:00 AM and 12:59:59 AM never actually happened. Because we store all date values in the database in UTC, we always adjust any "local" dates provided by a user to UTC before using them as filters in a query. The error we saw was thrown by .NET when trying to convert the Brazilian local time of 2011-10-16 12:00 AM to UTC, since that local time never actually existed. We hadn't experienced this same issue with any of our US customers because the daylight savings time changes in the US occur at 2:00 AM, which doesn't conflict with our "placeholder" time of midnight.

    Detecting Invalid Times

    In .NET you might use code similar to the following for converting a local time to UTC:

        var localDate = new DateTime(2011, 10, 16); // 2011-10-16 @ midnight
        const string timeZoneId = "E. South America Standard Time"; // Windows system timezone id for the "Brasilia" timezone.
        var localTimeZone = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId);
        var convertedDate = TimeZoneInfo.ConvertTimeToUtc(localDate, localTimeZone);

    The code above throws the "invalid time" exception referenced above. We could try to detect whether or not the local time is invalid with something like this:

        var localDate = new DateTime(2011, 10, 16); // 2011-10-16 @ midnight
        const string timeZoneId = "E. South America Standard Time"; // Windows system timezone id for the "Brasilia" timezone.
        var localTimeZone = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId);
        if (localTimeZone.IsInvalidTime(localDate))
            localDate = localDate.AddHours(1);
        var convertedDate = TimeZoneInfo.ConvertTimeToUtc(localDate, localTimeZone);

    This code works in this particular scenario, but it hardly seems robust. It also does nothing to address the issue that can arise when dealing with the ambiguous times that fall around the end of daylight savings. When we roll the clocks back an hour, they record the same hour on the same day twice in a row. To continue on with our Brazil example, on February 19, 2012 at 12:00 AM, it will immediately become February 18, 2012 at 11:00 PM all over again.
    In this scenario, how should we interpret February 18, 2012 11:30 PM?

    Enter Noda Time

    I heard about Noda Time, the .NET port of the Java library Joda Time, a little while back and filed it away in the back of my mind under the “sounds-like-it-might-be-useful-someday” category. Let’s see how we might deal with the issue of invalid and ambiguous local times using Noda Time (note that as of this writing the samples below will only work using the latest code available from the Noda Time repo on Google Code; the NuGet package version 0.1.0 published 2011-08-19 will incorrectly report unambiguous times as being ambiguous):

        var localDateTime = new LocalDateTime(2011, 10, 16, 0, 0);
        const string timeZoneId = "Brazil/East";
        var timezone = DateTimeZone.ForId(timeZoneId);
        var localDateTimeMapping = timezone.MapLocalDateTime(localDateTime);
        ZonedDateTime unambiguousLocalDateTime;
        switch (localDateTimeMapping.Type)
        {
            case ZoneLocalMapping.ResultType.Unambiguous:
                unambiguousLocalDateTime = localDateTimeMapping.UnambiguousMapping;
                break;
            case ZoneLocalMapping.ResultType.Ambiguous:
                unambiguousLocalDateTime = localDateTimeMapping.EarlierMapping;
                break;
            case ZoneLocalMapping.ResultType.Skipped:
                unambiguousLocalDateTime = new ZonedDateTime(
                    localDateTimeMapping.ZoneIntervalAfterTransition.Start,
                    timezone);
                break;
            default:
                throw new InvalidOperationException(string.Format("Unexpected mapping result type: {0}", localDateTimeMapping.Type));
        }
        var convertedDateTime = unambiguousLocalDateTime.ToInstant().ToDateTimeUtc();

    Let’s break this sample down:

    I’m using the Noda Time ‘LocalDateTime’ object to represent the local date and time. I’ve provided the year, month, day, hour, and minute (zeros for the hour and minute here represent midnight). You can think of a ‘LocalDateTime’ as an un-validated date and time; there is no information available about the time zone that this date and time belong to, so Noda Time can’t make any guarantees about its ambiguity.

    The ‘timeZoneId’ in this sample is different than the ones above. In order to use the .NET TimeZoneInfo class we need to provide Windows time zone ids. Noda Time expects an Olson (tz / zoneinfo) time zone identifier and does not currently offer any means of mapping the Windows time zones to their Olson counterparts, though project owner Jon Skeet has said that some sort of mapping will be publicly accessible at some point in the future.

    I’m making use of the Noda Time ‘DateTimeZone.MapLocalDateTime’ method to disambiguate the original local date time value. This method returns an instance of the Noda Time object ‘ZoneLocalMapping’ containing information about how the provided local date time maps to the provided time zone.

    The disambiguated local date and time value will be stored in the ‘unambiguousLocalDateTime’ variable as an instance of the Noda Time ‘ZonedDateTime’ object. An instance of this object represents a completely unambiguous point in time and is comprised of a local date and time, a time zone, and an offset from UTC. Instances of ZonedDateTime can only be created from within the Noda Time assembly (the constructor is ‘internal’) to ensure to callers that each instance represents an unambiguous point in time.

    The value of the ‘unambiguousLocalDateTime’ might vary depending upon the ‘ResultType’ returned by the ‘MapLocalDateTime’ method.
    There are three possible outcomes:

    If the provided local date time is unambiguous in the provided time zone, I can immediately set the ‘unambiguousLocalDateTime’ variable from the ‘UnambiguousMapping’ property of the mapping returned by the ‘MapLocalDateTime’ method.

    If the provided local date time is ambiguous in the provided time zone (i.e. it falls in an hour that was repeated when moving clocks backward from Daylight Savings to Standard Time), I can use the ‘EarlierMapping’ property to get the earlier of the two possible local dates to define the unambiguous local date and time that I need. I could have also opted to use the ‘LaterMapping’ property in this case, or even returned an error and asked the user to specify the proper choice. The important thing to note here is that as the programmer I’ve been forced to deal with what appears to be an ambiguous date and time.

    If the provided local date time represents a skipped time (i.e. it falls in an hour that was skipped when moving clocks forward from Standard Time to Daylight Savings Time), I have access to the time intervals that fell immediately before and immediately after the point in time that caused my date to be skipped. In this case I have opted to disambiguate my local date and time by moving it forward to the beginning of the interval immediately following the skipped period. Again, I could opt to use the end of the interval immediately preceding the skipped period, or raise an error, depending on the needs of the application.

    The point of this code is to convert a local date and time to a UTC date and time for use in a SQL Server database, so the final ‘convertedDateTime’ variable (typed as a plain old .NET DateTime) has its value set from a Noda Time ‘Instant’. An ‘Instant’ represents a number of ticks since 1970-01-01 at midnight (the Unix epoch) and can easily be converted to a .NET DateTime in the UTC time zone using the ‘ToDateTimeUtc()’ method.

    This sample is admittedly contrived and could certainly use some refactoring, but I think it captures the general approach needed to take a local date and time and convert it to UTC with Noda Time. At first glance it might seem that Noda Time makes this “simple” code more complicated and verbose because it forces you to explicitly deal with the local date disambiguation, but I feel that the length and complexity of the Noda Time sample is proportionate to the complexity of the problem. Using TimeZoneInfo leaves you susceptible to overlooking ambiguous and skipped times that could result in run-time errors or (even worse) run-time data corruption in the form of a local date and time being adjusted to UTC incorrectly. I should point out that this research is my first look at Noda Time and I know that I’ve only scratched the surface of its full capabilities. I also think it’s safe to say that it’s still beta software for the time being, so I’m not rushing out to use it in production systems just yet, but I will definitely be tinkering with it more and keeping an eye on it as it progresses.
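    For comparison, here is roughly what the same decisions look like with plain TimeZoneInfo. This is just a sketch of mine, not library code: the ‘ToUtcExplicit’ helper is a made-up name, it assumes one-hour transitions and a DateTime of Kind Unspecified, and the ambiguous branch exists because ‘ConvertTimeToUtc’ would otherwise silently assume standard time (the later occurrence):

        using System;

        static class LocalToUtc
        {
            // Sketch only: convert a local DateTime to UTC, handling skipped and
            // ambiguous local times explicitly. Assumes one-hour DST transitions
            // and that 'local' has DateTimeKind.Unspecified.
            public static DateTime ToUtcExplicit(DateTime local, TimeZoneInfo zone)
            {
                if (zone.IsInvalidTime(local))
                    local = local.AddHours(1); // skipped hour: move forward past the gap

                if (zone.IsAmbiguousTime(local))
                {
                    // Repeated hour: take the earlier occurrence by applying the
                    // larger (daylight) UTC offset; UTC = local time - offset.
                    TimeSpan[] offsets = zone.GetAmbiguousTimeOffsets(local);
                    TimeSpan daylightOffset = offsets[0];
                    foreach (TimeSpan offset in offsets)
                        if (offset > daylightOffset) daylightOffset = offset;
                    return DateTime.SpecifyKind(local - daylightOffset, DateTimeKind.Utc);
                }

                return TimeZoneInfo.ConvertTimeToUtc(local, zone);
            }
        }

    Even this version buries its policy decisions (always take the earlier occurrence, always assume a one-hour gap) inside a helper, which is exactly the kind of choice Noda Time forces out into the open.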

    Read the article

  • Take Control Of Web Control ClientID Values in ASP.NET 4.0

    Each server-side Web control in an ASP.NET Web Forms application has an ID property that identifies the Web control and is the name by which the Web control is accessed in the code-behind class. When rendered into HTML, the Web control turns its server-side ID value into a client-side id attribute. Ideally, there would be a one-to-one correspondence between the value of the server-side ID property and the generated client-side id, but in reality things aren't so simple. By default, the rendered client-side id is formed by taking the Web control's ID property and prefixing it with the ID properties of its naming containers. In short, a Web control with an ID of txtName can get rendered into an HTML element with a client-side id like ctl00_MainContent_txtName. This default translation from the server-side ID property value to the rendered client-side id attribute can introduce challenges when trying to access an HTML element via JavaScript, which is typically done by id, as the page developer building the web page and writing the JavaScript does not know what the id value of the rendered Web control will be at design time. (The client-side id value can be determined at runtime via the Web control's ClientID property.) ASP.NET 4.0 affords page developers much greater flexibility in how Web controls render their ID property into a client-side id. This article starts with an explanation as to why and how ASP.NET translates the server-side ID value into the client-side id value and then shows how to take control of this process using ASP.NET 4.0. Read on to learn more! Read More >
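    To give a flavor of the new knobs (a minimal sketch, not an excerpt from the article — the page and control names here are made up): ASP.NET 4.0 adds a ClientIDMode property to every control, and setting it to Static makes the rendered client-side id match the server-side ID exactly.

        using System;
        using System.Web.UI;

        public partial class ExamplePage : Page
        {
            // txtName is assumed to be declared in the matching .aspx markup:
            //   <asp:TextBox ID="txtName" runat="server" />
            protected void Page_Init(object sender, EventArgs e)
            {
                // With Static, the control renders id="txtName" regardless of its
                // naming containers, so client script can simply call
                // document.getElementById("txtName").
                txtName.ClientIDMode = ClientIDMode.Static;
            }
        }

    The same property can also be set declaratively in the markup or inherited from the page level.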

    Read the article

  • CodePlex Daily Summary for Wednesday, June 09, 2010

    CodePlex Daily Summary for Wednesday, June 09, 2010New Projects.NET Transactional File Manager: Transactional File Manager is a .NET API that supports including file system operations such as file copy, move, delete in a transaction. It's an i...3D World Studio Content Pipeline for Windows Phone 7: This is a port of PhotonicGames' project: http://xna3dws.codeplex.com/releases/view/42994 for the Windows Phone 7 tools (XNA 4.0 CTP).Advanced Script Editor for 3D Rad: Advanced Script Editor makes it easier for 3D Rad coders to write scripts. Developed in C#, it features a functions list, a favourites list, object...Ajax ASP.Net Forum: A fast & lightweight open source free forum developed in ASP.Net 3.5, AJAX, CSS, SQL & Javascript Cache (filter-sort-move through table records at ...Axon: Axon is the home automation system that I will be running in my home. It will be a collection of different technologies and projects, often experim...BigBallz: Projeto de site de Bolões para campeonatos diversos. A princípio pensado para copa do mundo de futebol de 2010BigfootMVC: MVC Framework for DotNetNukeBigfootSQL: A StringBuilder for SQL. BigfootSQL was built with simplicity in mind. It assumes that you are comfortable writing SQL but dislike effort required ...Bxf (Basic XAML Framework): Basic Xaml Framework (Bxf) is a simple, streamlined set of UI components designed to demonstrate the minimum framework functionality required to ma...elZerf - elektronische Zeiterfassung: elektronisches Zeiterfassungsystem im Rahmen der Seminararbeit im Modul Web-Anwendungsprogrammierung.IntoFactories.Net - Samples: Project to host samples created by members of the IntoFactories.NET Team blog.Lanchonete: Sistema para controle de lanchonetes. Medieval Dynasties: Medieval Dynasties is a game written in C# 3.5 and XNA 3.1 at the moment. It is inspired by Crusader Kings, Total War and Civilization.PMMsg: A project to replace the standard messaging client on the Windows Mobile platform. Mainly geared towards Windows Mobile 6.5.3 VGA devices. Also an...PunkPong: PunkPong is an open source "Pong" alike game totally written in DHTML (JavaScript, CSS and HTML) that uses keyboard or mouse. This cross-platform a...Renegade Legion Fighter Calculator: In working on assigning fighters to squadrons, flights, and groups for a campaign, I was struck by the sheer amount of calculations I had to make. ...Sharpotify - Spotify .Net Library: Sharpotify is a Spotify library in C#. It is based in Jotify and SharPot projects. It is not a libspotify wrapper, It is a full .Net Spotify protoc...Silverlight load on demand with MEF: With MEF, a Silverlight control can be split in several packages(xap files). Each package can contain one or more pages and it will download on dem...SOLID by example: Source code examples to undestood solid design principles. Most of them were taken from http://www.lostechies.com/SQL Server 2008 Reporting Services RS.EXE Supporting Forms Authentication: A version of RS.EXE that you can use with Forms Authentication in Native Mode. Use the following arguments to specify credentials (just like Basic ...Stripper: Stripper Remove Diacritics and other unwanted caracter to fabric a more standardized file naming.study: studyUncoverPIC: UncoverPIC is a Silverlight Game strongly inspired to the famous Arcade Game "GalsPanic" (see http://en.wikipedia.org/wiki/Gals_Panic ). 
It was dev...Unity3D Untitled MMO: Unity3D Untitled MMO FrameworkUnnamedShop: UnnamedShopXBStudio.asp.net.automation: A Unit Testing Automation library for asp.netXBStudio.Web: XBStudio Web ApplicationNew Releases3D World Studio Content Pipeline for Windows Phone 7: Initial Release (0.1): This is the first release of the project, with plenty of hackery and kludges to go around, but it mostly works! Let me know if you hit any bugs.Acies: Acies - Alpha Build 0.0.10: Alpha release. Requires Microsoft XNA Framework Redistributable 3.1 (http://www.microsoft.com/downloads/details.aspx?FamilyID=53867a2a-e249-4560-...Advanced Script Editor for 3D Rad: Advanced Script Editor - Version 2.6: Despite various previous releases on the 3D Rad forum, this is the first release on CodePlex.Ajax ASP.Net Forum: First Release: First Release prior to CodePlex Publish (send to admins)So, it doesn't all finish VERSION: 0.1.2 FEATURES Main Home Where all the Forums (called ...Artist Follower for Microsoft Access: Artist Follower 0.5.1: Artists Follower changes: Just one form to manage artists and links!!!Artist Follower for Microsoft Access: Artists Follower 0.5.0: This is the first release of Artist Follower.ASP.NET MVC SiteMap provider - MvcSiteMapProvider: MvcSiteMapProvider 2.0.0 CTP1: This is a community technology preview of MvcSiteMapProvider version 2.0. It is not backwards compatible with older MvcSiteMapProvider versions. ...B&W Port Scanner: Black`n`White Port Scanner 4.0: Version 4 includes: - Improved vulnerability detection tools - Report Creation - Improved Stability - Much better port information database - Nume...BaseCalendar: BaseControls 1.1: BaseControls 1.1 contains the BaseCalendar ASP.NET control. Changes: Rendering TH by default inside THEAD. Added option (ShowMinNumWeeks) to r...BigfootSQL: BigfootSQL Source Code: BigfootSQL C# Version 01Commerce Server 2009 Orders using Pipelines in a Console Application: ConsoleApplication To PLace Orders: ConsoleApplication To PLace Orders with Commerce Server 2009 foundationCommunity Forums NNTP bridge: Community Forums NNTP Bridge V33: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has ad...Community Forums NNTP bridge: Community Forums NNTP Bridge V34: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has ad...ContainerOne - C# application server: V0.1.2.0: New minor release containing: Integration test component for runtime testing Refactored and cleaned solution files First unit testsExtend SmallBasic: Teaching Extensions v.020: Moved Tortoise.approve to ProgramWindow.TakeScreenShot()fleet It: v0.06 Alpha: v0.06 Alpha - Features Caching implemented for fleets Various Bug fixes Implemented Settings. Resolved logical issue with Getting fleets U...Frotz.NET: Frotz.NET B2: In addition to B1 changes: - Added ZTools to enable debugging view of zcode files - Added rudimentary scroll back buffer. 
B1 Changes: - Got Adapt...FsObserver: FsObserver 2.0: This is basically the same as FsObserver 1.0 but the "-help" documentation has been cleaned up somewhat and the code has been refactored so that it...GPdotNET - Genetic Programming Tool: GPdotNETv1.0: GPdotNET v.1.0 - more details on http.bhrnjica.wordpress.com/gpdotnetHERB.IQ: Alpha 0.1 Source code release 8: Alpha 0.1 Source code release 8imdb movie downloader: myImdb 0.9.3: myImdb 0.9.3imdb movie downloader: myImdb 0.9.4: myImdb 0.9.4jccc .NET smart framework: jccc .NET smart framework version 1.2010.06.07: jccc .NET smart framework version 1.2010.06.07 added oracle databases supportLongBar: LongBar 2.1 Build 313: - Fixed library and updates to work with updated live services - Options: You can disable shadow nowMDownloader: MDownloader-0.15.17.59623: Fixed FileFactory provider. Improvied postpone policies. Added network request limiter.MediaCoder.NET: MediaCoder.NET v1.0 beta 1.1: Installer for MediaCoder.NET v1.0 beta1.1. It can now convert files with spaces in the path or filename. I have also created filter for the SaveFil...MediaCoder.NET: MediaCoder.NET v1.0 beta 1.1 Source Code: Source Code for MediaCoder.NET v1.0 beta 1.1.mesoBoard: mesoBoard - 0.9.1 beta: Fixed file download permissions Released under the New BSD License.MPCLI: Alpha Release (0.1.0.0): This release has core functionality and is considered a potential candidate for a feature complete release of this library. However, suggestions fo...N2 CMS: 2.0: N2 is a lightweight CMS framework for ASP.NET. It helps professional developers build great web sites that anyone can update. Major Changes (1.5 -...NHTrace: NHTrace-47571: NHTrace-47571NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Class Libraries, version 1.0.1.125: The NodeXL class libraries can be used to display network graphs in .NET applications. To include a NodeXL network graph in a WPF desktop or Windo...NSoup: NSoup 0.2: NSoup 0.2 corresponds to jsoup version 1.1.1. List of changes can be viewed here.Opalis Community Releases: Integration Pack for Data Manipulation: The Integration Pack for Data Manipulation enables you to perform a wider variety of data manipulation tasks as well as aggregate data into common ...Performance Analysis of Logs (PAL) Tool: PAL v2.0 Beta 1: Fixed Counter Sorting: Fixed a minor bug where duplicate counter expression paths were not being removed. Analysis Added: Added LogicalDisk Read/...RoTwee: RoTwee (12.0.0.0): Trial version. 17925 Make it possible to change window sizeSharpotify - Spotify .Net Library: Sharpotify.Library 1.0: Sharpotify Library: Stable release. You can connect with spotify, search, browse (tracks, albums, artists), get a music stream, create and edit you...Silverlight load on demand with MEF: mal.Web.Silverlight.MEF 1.0.0.0: mal.Web.Silverlight.MEF 1.0.0.0sMAPtool: sMAPtool v0.7e (without Maps): + Added: color value expansion bar for hmap (right click to select color scheme) + Added: more complex hmap editing, uses now 4 point bounding rect...SQL Server 2008 Reporting Services RS.EXE Supporting Forms Authentication: Initial release: Enjoy!Stripper: Stripper 0.1.1 (CLi): Stripper Remove Diacritics and other unwanted caracters to fabric a more standardized file naming. 
Especially French caracter and maybe other lang...Unity3D Untitled MMO: v1: versionUrzaGatherer: UrzaGatherer 2.01a: New version with some minors bugs corrected.VCC: Latest build, v2.1.30608.0: Automatic drop of latest buildVCC: Latest build, v2.1.30608.1: Automatic drop of latest buildWatermarker.NET: 0.1.3811: A newer version with some improvements. I release this as a .zip archive, because settings are added here, so there will be .exe and .config files.Yet Another GPS: Alfa Release: Alfa working releaseMost Popular ProjectsDozer Enterprise Library for .NETEmployee Management SystemWiiMote PhysicsVisualStudio 2010 JavaScript OutliningSpider CompilerConcurrent CacheOil Slick Live FeedsCSUFVGDC Summer JamWinGetSiteMap Utility for DNN Blog ModuleMost Active ProjectsCommunity Forums NNTP bridgepatterns & practices – Enterprise LibraryRhyduino - Arduino and Managed CodejQuery Library for SharePoint Web ServicesRawrNB_Store - Free DotNetNuke Ecommerce Catalog ModuleAndrew's XNA HelpersBlogEngine.NETStyleCopCustomer Portal Accelerator for Microsoft Dynamics CRM

    Read the article

  • How to port email from evolution to thunderbird?

    - by jim
    I updated ubuntu to 11.10 using the update notification. I am also switching from Xubuntu to ubuntu with the gnome interface. I have been using evolution for years and would like to port the emails to thunderbird. I have looked at the similar questions, with no luck, and at the thunderbird help on manually importing. Most of these assume that the thunderbird file structure is similar to the evolution file structure. When I set up thunderbird it seems to have imported the contacts from evolution (and actually removed them from evolution). However, no mail got transferred. I found the evolution mail in ~/.local/share/evolution/mail/local. This has folders.db and 3 directories - cur, tmp, and new. Then there are the hidden files and directories. Each directory has three related files with extensions .cmeta, .ibex.index, and .ibex.index.data. All the directories also had files that seem to contain the individual messages. I have not found any rhyme or reason to the file numbering/naming scheme. Is there a nice way to import these files?

    Read the article

  • Get the Latest on MySQL Enterprise Edition

    - by monica.kumar
    Oracle just announced MySQL 5.5 Enterprise Edition. MySQL Enterprise Edition is a comprehensive subscription that includes:

    - MySQL Database
    - MySQL Enterprise Backup
    - MySQL Enterprise Monitor
    - MySQL Workbench
    - Oracle Premier Support; 24x7, Worldwide

    New in this release is the addition of MySQL Enterprise Backup and MySQL Workbench, along with enhancements in MySQL Enterprise Monitor. Recent integration with My Oracle Support allows MySQL customers to access the same support infrastructure used for Oracle Database customers. Joint MySQL and Oracle customers can experience faster problem resolution by using a common technical support interface. Supporting multiple operating systems, including Linux and Windows, MySQL Enterprise Edition can enable customers to achieve up to 90 percent TCO savings over Microsoft SQL Server. See what Booking.com is saying: “With more than 50 million unique monthly visitors, performance and uptime are our first priorities,” said Bert Lindner, Senior Systems Architect, Booking.com. “The MySQL Enterprise Monitor is an essential tool to monitor, tune and manage our many MySQL instances. It allows us to zoom in quickly on the right areas, so we can spend our time and resources where it matters.” Read the press release for details on technology enhancements.

    Read the article

  • Data Quality Through Data Governance

    Data Quality Governance

    Data quality is very important to every organization; bad data costs an organization time, money, and resources that could be saved if the proper governance were put in place.

    Data Governance Program Criteria:
    Support from Executive Management and all Business Units
    Data Stewardship Program
    Cross Functional Team of Data Stewards
    Data Governance Committee
    Quality Structured Data

    It should go without saying, but any successful project in today’s business world must get buy-in from executive management and all stakeholders involved with the project. If management does not fully support a project because they do not see that it is in their and the company’s best interest, then they will remove or eliminate the funding, resources, and time allocated to the project. In essence, they can render a project dead until it is officially killed by the business. In addition, buy-in from stakeholders is also very important, because stakeholders who do not support a project can cause delays and increased spending of time, money, and resources.

    Data Stewardship programs are administered by a data steward manager whose primary focus is to support, train, and manage a cross-functional team of data stewards. A cross-functional team of data stewards pulled from various departments acts to ensure that all systems work toward achieving the organization’s goals. Typically, data stewards are subject matter experts who act as mediators between their respective departments and IT.

    Data Quality Procedures

    Data Governance Committees are composed of data stewards, upper management, IT leadership, and various subject matter experts, depending on the company. The primary goal of this committee is to define strategic goals, coordinate activities, set data standards, and offer data guidelines for the business.

    Data Quality Policies

    In 1997, Claudia Imhoff defined a Data Steward’s responsibilities as: approving business naming standards, developing consistent data definitions, determining data aliases, developing standard calculations and derivations, documenting the business rules of the corporation, monitoring the quality of the data in the data warehouse, defining security requirements, and so forth. She further explains that data stewards are responsible for creating and enforcing policies on the following (but not limited to these) issues:
    Resolving Data Integration Issues
    Determining Data Security
    Documenting Data Definitions, Calculations, Summarizations, etc.
    Maintaining/Updating Business Rules
    Analyzing and Improving Data Quality

    Read the article

  • What is the best strategy for transforming unicode strings into filenames?

    - by David Cowden
    I have a bunch (thousands) of resources in an RDF/XML file. I am writing a certain subset of the resources to files -- one file for each, and I'm using the resource's title property as the file name. However, the titles are everyday article, website, and blog post titles, so they contain characters unsafe for a URI (the necessary step for constructing a valid file path). I know of the Jersey UriBuilder but I can't quite get it to work for my needs, as I detailed in a different question on SO. Some possibilities I have considered are: Since each resource should also have an associated URL, I could try to use the name of the file on the server. The downside of this is that sometimes people don't name their content logically, and I think the title of an article better reflects the content that will be in each text file. Construct a whitelist of valid characters and parse the string myself, defining substitutions for unsafe characters. The downside of this is that the result could be just as unreadable as the former solution, because presumably the content creators went through a similar process when placing the files on their server. Choose a more generic naming scheme, place the title in the text file along with the other attributes, and tell my boss to live with it. So my question here is, what methods work well for dealing with a scenario where you need to construct file names out of strings with potentially unsafe characters? Is there a solution that better fills out my constraints?
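    To make the whitelist option concrete, here is a rough sketch of what I mean (written in C# purely for illustration, even though my project is Java-based — the method name, substitution rules, and length cap are placeholders, not requirements):

        using System;
        using System.Text;

        static class FileNameSanitizer
        {
            // Sketch only: reduce an arbitrary title to a safe, readable file name.
            public static string ToSafeFileName(string title, int maxLength)
            {
                var sb = new StringBuilder(title.Length);
                // FormKD splits accented letters into base letter + combining mark,
                // so "é" survives as "e" while the mark is dropped below.
                foreach (char c in title.Normalize(NormalizationForm.FormKD))
                {
                    if (c < 128 && char.IsLetterOrDigit(c))
                        sb.Append(char.ToLowerInvariant(c)); // whitelist: ASCII letters/digits
                    else if (c == ' ' || c == '-' || c == '_')
                        sb.Append('-');                      // map separators to '-'
                    // everything else (punctuation, combining marks) is dropped
                }
                // collapse runs of '-' and cap the length
                string safe = string.Join("-",
                    sb.ToString().Split(new[] { '-' }, StringSplitOptions.RemoveEmptyEntries));
                return safe.Length <= maxLength ? safe : safe.Substring(0, maxLength);
            }
        }

    So ToSafeFileName("Café Tips & Tricks!", 100) should come out as "cafe-tips-tricks". The downside stands, of course: characters with no ASCII decomposition simply vanish.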

    Read the article

  • C# with keyword equivalent

    - by oazabir
    There’s no with keyword in C#, like Visual Basic has. So you end up writing code like this:

        this.StatusProgressBar.IsIndeterminate = false;
        this.StatusProgressBar.Visibility = Visibility.Visible;
        this.StatusProgressBar.Minimum = 0;
        this.StatusProgressBar.Maximum = 100;
        this.StatusProgressBar.Value = percentage;

    Here’s a workaround:

        With.A<ProgressBar>(this.StatusProgressBar, (p) =>
        {
            p.IsIndeterminate = false;
            p.Visibility = Visibility.Visible;
            p.Minimum = 0;
            p.Maximum = 100;
            p.Value = percentage;
        });

    It saves you from repeatedly typing the same class instance or control name over and over again. It also makes the code more readable, since it clearly says that you are working with a progress bar control within the block. If you are setting properties of several controls one after another, it’s easier to read such code this way, since you will have a dedicated block for each control. It’s a very simple one-line function that does it:

        public static class With
        {
            public static void A<T>(T item, Action<T> work)
            {
                work(item);
            }
        }

    You could argue that you can just do this:

        var p = this.StatusProgressBar;
        p.IsIndeterminate = false;
        p.Visibility = Visibility.Visible;
        p.Minimum = 0;
        p.Maximum = 100;
        p.Value = percentage;

    But it’s not as elegant. You are introducing a variable “p” into the local scope of the whole function. This goes against naming conventions. Moreover, you can’t limit the scope of “p” to a certain place within the function.

    Read the article

  • Recommended method for XML level loading in XNA

    - by David Saltares Márquez
    I want to use Blender as my level designer tool for an XNA game. Using an existing plugin, I can export my levels to the DotScene format, which is basically an XML file like this one:

        <scene formatVersion="1.0.0">
            <nodes>
                <node name="scene-staircase.001">
                    <position x="10.500000" y="1.400000" z="-9.600000"/>
                    <quaternion x="0.000000" y="0.000000" z="-0.000000" w="1.000000"/>
                    <scale x="1.000000" y="1.000000" z="1.000000"/>
                    <entity name="scene-staircase.001" meshFile="staircase.mesh"/>
                </node>
                <node name="Lamp.003">
                    <position x="11.024290" y="5.903862" z="9.658987"/>
                    <quaternion x="-0.284166" y="0.726942" z="0.342034" w="0.523275"/>
                    <scale x="1.000000" y="1.000000" z="1.000000"/>
                    <light name="Spot.003" type="point">
                        <colourDiffuse r="0.400000" g="0.154618" b="0.145180"/>
                        <colourSpecular r="0.400000" g="0.154618" b="0.145180"/>
                        <lightAttenuation range="5000.0" constant="1.000000" linear="0.033333" quadratic="0.000000"/>
                    </light>
                </node>
                ...
            </nodes>
        </scene>

    Using naming conventions, I could easily parse the file and load the corresponding in-game content. I am new to XNA and I have seen that there are several methods to load XML files into a game, like serializing and deserializing. Which one would you recommend?
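    For reference, this is the sort of thing I had in mind — a minimal runtime-parsing sketch using LINQ to XML (the SceneNode shape and the loader are placeholders of mine, and System.Xml.Linq is only available to XNA on Windows):

        using System.Linq;
        using System.Xml.Linq;
        using Microsoft.Xna.Framework;

        public class SceneNode
        {
            public string Name;
            public Vector3 Position;
            public Quaternion Rotation;
            public string MeshFile; // null for non-entity nodes such as lights
        }

        public static class DotSceneLoader
        {
            private static Vector3 ReadVector3(XElement e)
            {
                return new Vector3((float)e.Attribute("x"), (float)e.Attribute("y"), (float)e.Attribute("z"));
            }

            public static SceneNode[] Load(string path)
            {
                XDocument doc = XDocument.Load(path);
                return doc.Root.Element("nodes").Elements("node")
                    .Select(n =>
                    {
                        XElement q = n.Element("quaternion");
                        XElement entity = n.Element("entity");
                        return new SceneNode
                        {
                            Name = (string)n.Attribute("name"),
                            Position = ReadVector3(n.Element("position")),
                            Rotation = new Quaternion((float)q.Attribute("x"), (float)q.Attribute("y"),
                                                      (float)q.Attribute("z"), (float)q.Attribute("w")),
                            MeshFile = entity != null ? (string)entity.Attribute("meshFile") : null
                        };
                    })
                    .ToArray();
            }
        }

    Whether something like this beats the content pipeline route (converting the XML at build time with a custom importer) probably depends on how often the levels change.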

    Read the article

  • PHP remote development workflow: git, symfony and hudson

    - by user2022
    I'm looking to develop a website and all the work will be done remotely (no local dev server). The reason for this is that my shared hosting company a2hosting has a specific configuration (symfony, mysql, git) that I don't want to spend time duplicating when I can just ssh in and develop remotely or through netbeans' remote editing features. My question is: how can I use git to separate my site into three areas: live, staging and dev? Here's my initial thought:

    public_html (live site and git repo)
    testing: a mirror of the site used for visual tests (full git repo)
    dev/ticket#: git branches of public_html used for features and bug fixes (full git repo)

    Version control with git. Initial setup:

        cd public_html
        git init
        git add *
        git commit -m 'initial commit of the site'
        cd ..
        git clone public_html testing
        mkdir dev

    Development:

        cd dev
        git clone ../testing ticket#

    All work is done in ./dev/ticket#, then visit www.domain.com/dev/ticket# to visually test. Make granular commits as necessary until dev is done, then:

        git push origin master:ticket#

    If the above fails, merge the latest testing state into the current dev work (git merge origin/master), then try the push again and mark ticket# as ready for integration.

    Integration and deployment process:

        cd ../../testing
        git merge ticket# --no-ff -m "integration test for ticket#"   (check for conflicts)

    Run Hudson tests and visit www.domain.com/testing for a visual test. If all tests pass: if this ticket marks the end of a big dev sprint, make a snapshot with git tag and git push --tags origin; else git push origin. Then:

        cd ../public_html
        git checkout -f   (live site should now have the latest dev from ticket#)

    Otherwise, revert the merge (git checkout master~1; git commit -m "reverting ticket#") and update ticket# that testing failed, with the failure details.

    Snapshots: each major deployment sprint should have a standard name and should be tracked. Method: git tag. Naming convention: TBD.

    Reverting the site to a previous state: if something goes wrong, revert to the previous snapshot and debug the issue in dev with a new ticket#. Once the bug is fixed, follow the deployment process again.

    My questions: Does this workflow make sense? If not, any recommendations? Is my approach for reverting correct, or is there a better way to say 'revert to before commit x'?

    Read the article

  • Bad style programming, am I pretending too much?

    - by Luca
    I've come to realize that I work in an office with a quite bad code base. The base library, implemented over years and years, is quite limited, and most of that code is, honestly, horrible. Projects developed in the office are very large. Fine. I could call myself a "perfectionist" (but often I'm not), and I thought about refactoring an application (really a portion of one) which needs a new (complex) feature. But, today, I really realized that it's not possible to refactor that application's modules in a reasonable time (say, 24/26 hours, given that the available time for the task is 160 hours). I'm talking about (I am a bit ashamed to say) name collisions, large and frequent cut & paste code, horrible and misleading naming, makefiles without dependencies (!), application logic spread randomly across many different sources, dead code, variable aliasing, no assertions, no documentation, very long source files, bad/incomplete include file definitions, (this is emblematic!) very frequent extern declarations of variables and functions, ... and I'm sure I could continue ... buffer overflows because of sprintf, indentation (!), spacing, non-existent const modifier usage. I would say that every source line was written quite randomly when needed, without keeping any design in mind (at least, the obvious one). (Am I in hell?) The problem arises because the application was developed by a colleague of mine. I felt very frustrated. So, I decided to expose the "situation" to my colleague; in the end, that was a bad idea. He justified himself by saying that "the application was developed in haste, so it is natural that it is written vaguely; you are wasting time thinking about and implementing an elegant implementation" .... Am I asking too much of my colleague in wanting readable code that is managed and documented? Do I expect too much in not wanting to read thousands of lines of code to understand how a particular piece of logic works?

    Read the article

< Previous Page | 199 200 201 202 203 204 205 206 207 208 209 210  | Next Page >