Search Results

Search found 19992 results on 800 pages for 'font size'.


  • How to set the default command prompt properties in Windows 7?

    - by Tom
    I have a command prompt shortcut which I customized to use a different font, etc. than the default. It works well, but when I start a batch file with the Task Scheduler, it uses the default command prompt settings to display the batch progress. How can I change the default command prompt settings that the system uses so that they match my customized shortcut?

    Read the article

  • Is it possible to change the look and feel of remote X applications running under Xming?

    - by Rasive
    I am running Eclipse remotely right now, in Xming on my Windows PC, through an SSH tunnel from my laptop running Ubuntu 11.10. It doesn't look that bad, but it seems that my applications default to the standard theme when no other theme can be found for GTK+ applications. Is there anything I can do about this? It would also be nice if I could change the font settings to make the text more easily readable.

    Read the article

  • CSS Dropdown Menu issues

    - by Simon Hume
    Can anyone help with a small problem. I've got a nice simple CSS dropdown menu http://www.cinderellahair.co.uk/new/CSSDropdown.html The problem I have is when you rollover a menu item that has children which are wider than the content, it pushes the whole menu right. Aside of shortening the child menu links down, is there any tweak I can make to my CSS to stop this happening? CSS Code: /* General */ #cssdropdown, #cssdropdown ul { list-style: none; } #cssdropdown, #cssdropdown * { padding: 0; margin: 0; } #cssdropdown {padding:43px 0px 0px 0px;} /* Head links */ #cssdropdown li.headlink { margin:0px 40px 0px -1px; float: left; background-color: #e9e9e9;} #cssdropdown li.headlink a { display: block; padding: 0px 0px 0px 5px; text-decoration:none; } #cssdropdown li.headlink a:hover { text-decoration:underline; } /* Child lists and links */ #cssdropdown li.headlink ul { display: none; text-align: left; padding:10px 0px 0px 0px; font-size:12px; float:left;} #cssdropdown li.headlink:hover ul { display: block; } #cssdropdown li.headlink ul li a { padding: 5px; height: 17px; } #cssdropdown li.headlink ul li a:hover { background-color: #333; } /* Pretty styling */ body { font-family:Georgia, "Times New Roman", Times, serif; font-size: 16px; } #cssdropdown a { color: grey; } #cssdropdown ul li a:hover { text-decoration: none; } #cssdropdown li.headlink { background-color: white; } #cssdropdown li.headlink ul { padding-bottom: 10px;} HTML: <ul id="cssdropdown"> <li class="headlink"><a href="http://www.cinderellahair.co.uk/new/index.php">HOME</a></li> <li class="headlink"><a href="http://www.cinderellahair.co.uk/new/gallery/gallery.php">GALLERY</a> <ul> <li><a href="http://amazon.com/">CELEBRITY</a></li> <li><a href="http://ebay.com/">BEFORE &amp; AFTER</a></li> <li><a href="http://craigslist.com/">HAIR TYPES</a></li> </ul> </li> <li class="headlink"><a href="http://www.cinderellahair.co.uk/new/about-cinderella-hair-extensions/about-us.php">ABOUT US</a> <ul> <li><a href="http://amazon.com/">WHY CHOOSE CINDERELLA</a></li> <li><a href="http://ebay.com/">TESTIMONIALS</a></li> <li><a href="http://craigslist.com/">MINI VIDEO CLIPS</a></li> <li><a href="http://craigslist.com/">OUR HAIR PRODUCTS</a></li> </ul> </li> <li class="headlink"><a href="http://www.cinderellahair.co.uk/new/news-and-offers/news.php">NEWS &amp; OFFERS</a> <ul> <li><a href="http://amazon.com/">VERA WANG FREE GIVEAWAY</a></li> <li><a href="http://ebay.com/">CINDERELLA ON TV</a></li> <li><a href="http://craigslist.com/">CINDERELLA IN THE PRESS</a></li> <li><a href="http://craigslist.com/">CINDRELLA NEWSLETTERS</a></li> </ul> </li> <li class="headlink"><a href="http://www.cinderellahair.co.uk/new/cinderella-salon/salon-finder.php">SALON FINDER</a></li> </ul> JS Code: $(document).ready(function(){ $('#cssdropdown li.headlink').hover( function() { $('ul', this).css('display', 'block'); }, function() { $('ul', this).css('display', 'none'); }); }); Full code is on the link above, just view source.
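
    A possible direction (a sketch only, not tested against the page above): the child ul sits in the normal flow of the floated li.headlink, so a wide submenu stretches the parent item and pushes the rest of the menu across. Taking the child list out of the flow with absolute positioning usually stops that:

        /* sketch: position the submenu relative to its parent item */
        #cssdropdown li.headlink { position: relative; }
        #cssdropdown li.headlink ul {
            position: absolute;      /* out of the flow, so it no longer widens the parent */
            top: 100%;               /* directly below the parent link */
            left: 0;
            float: none;             /* the old float is not needed once it is absolute */
            white-space: nowrap;     /* optional: keep long child links on one line */
        }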

    Read the article

  • Wordpress Widget - Adding URL to title

    - by Nick Canarelli
    I can't seem to figure out how to wrap the title of the widget in an tag. For example, I am trying to get it so that when you type the url in a text field, it is then placed in the tag so that it is a hyperlink on the website... class Example_Widget extends WP_Widget { /** * Widget setup. */ function Example_Widget() { /* Widget settings. */ $widget_ops = array( 'classname' => 'example', 'description' => __('A widget that displays company announcements.', 'example') ); /* Widget control settings. */ $control_ops = array( 'width' => 300, 'height' => 350, 'id_base' => 'example-widget' ); /* Create the widget. */ $this->WP_Widget( 'example-widget', __('Announcement Widget', 'example'), $widget_ops, $control_ops ); } /** * How to display the widget on the screen. */ function widget( $args, $instance ) { extract( $args ); /* Our variables from the widget settings. */ $title = apply_filters('widget_title', $instance['title'] ); $excerpt = $instance['excerpt']; $url = $instance['url']; /* Before widget (defined by themes). */ echo $before_widget; /* Display the widget title if one was input (before and after defined by themes). */ if ( $title ) echo $before_title . $title . $after_title; /* Display name from widget settings if one was input. */ if ( $excerpt ) printf( '<p style="font-family: arial; font-size: 12px; line-height: 16px;">' . __('%1$s.', 'example') . '</p>', $excerpt ); /* After widget (defined by themes). */ echo $after_widget; } /** * Update the widget settings. */ function update( $new_instance, $old_instance ) { $instance = $old_instance; /* Strip tags for title and name to remove HTML (important for text inputs). */ $instance['title'] = strip_tags( $new_instance['title'] ); $instance['excerpt'] = strip_tags( $new_instance['excerpt'] ); return $instance; } /** * Displays the widget settings controls on the widget panel. * Make use of the get_field_id() and get_field_name() function * when creating your form elements. This handles the confusing stuff. */ function form( $instance ) { /* Set up some default widget settings. */ $defaults = array( 'title' => __('Title Goes Here', 'example'), 'excerpt' => __('Excerpt goes here.'), ); $instance = wp_parse_args( (array) $instance, $defaults ); ?> <!-- Widget Title: Text Input --> <p> <label for="<?php echo $this->get_field_id( 'title' ); ?>"><?php _e('Title:', 'hybrid'); ?></label> <input id="<?php echo $this->get_field_id( 'title' ); ?>" name="<?php echo $this->get_field_name( 'title' ); ?>" value="<?php echo $instance['title']; ?>" style="width:100%;" /> </p> <!-- Your Name: Text Input --> <p> <label for="<?php echo $this->get_field_id( 'excerpt' ); ?>"><?php _e('Excerpt:', 'example'); ?></label> <input id="<?php echo $this->get_field_id( 'excerpt' ); ?>" name="<?php echo $this->get_field_name( 'excerpt' ); ?>" value="<?php echo $instance['excerpt']; ?>" style="width:100%;" /> </p> <?php } } ?> And here is the functions file code register_sidebar(array( 'name' => __( 'Announcements' ), 'description' => __( 'Display company announcements here.' ), 'before_widget' => '', 'after_widget' => '<hr style="margin-top: 4px; color: #f00; background-color: #585040; height: 1px; border: none; margin-bottom: 2px;"/>', 'before_title' => '<h2 style="font-size: 12px;">', 'after_title' => '</h2>' ));
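
    A sketch of one way to do it (untested; esc_url(), esc_url_raw() and esc_attr() are standard WordPress helpers): save the URL in update(), add a url field to form(), and wrap the title output in an anchor only when a URL was entered.

        // In widget(): wrap the title in a link when a URL is available.
        $url = isset( $instance['url'] ) ? $instance['url'] : '';
        if ( $title ) {
            $heading = $url ? '<a href="' . esc_url( $url ) . '">' . $title . '</a>' : $title;
            echo $before_title . $heading . $after_title;
        }

        // In update(): persist the new field.
        $instance['url'] = esc_url_raw( $new_instance['url'] );

        // In form(): a text input for the URL, mirroring the existing title/excerpt inputs.
        // <input id="<?php echo $this->get_field_id( 'url' ); ?>"
        //        name="<?php echo $this->get_field_name( 'url' ); ?>"
        //        value="<?php echo esc_attr( $instance['url'] ); ?>" style="width:100%;" />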

    Read the article

  • jqgrid sample using array data, what am I missing

    - by Dennis
    Hello. I'm new in jqgrid, I'm just trying thes example to work. I have a html file only, nothing more. When I ran this file, array data is not showing. What am I missing here? Thanks in advance. <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>jqGrid Demos</title> <link rel="stylesheet" type="text/css" media="screen" href="lib/jquery-ui-1.7.1.custom.css" /> <link rel="stylesheet" type="text/css" media="screen" href="lib/ui.jqgrid.css" /> <link rel="stylesheet" type="text/css" media="screen" href="lib/ui.multiselect.css" /> <style type="text/css"> html, body { margin: 0; /* Remove body margin/padding */ padding: 0; overflow: hidden; /* Remove scroll bars on browser window */ font-size: 75%; } /*Splitter style */ #LeftPane { /* optional, initial splitbar position */ overflow: auto; } /* * Right-side element of the splitter. */ #RightPane { padding: 2px; overflow: auto; } .ui-tabs-nav li {position: relative;} .ui-tabs-selected a span {padding-right: 10px;} .ui-tabs-close {display: none;position: absolute;top: 3px;right: 0px;z-index: 800;width: 16px;height: 14px;font-size: 10px; font-style: normal;cursor: pointer;} .ui-tabs-selected .ui-tabs-close {display: block;} .ui-layout-west .ui-jqgrid tr.jqgrow td { border-bottom: 0px none;} .ui-datepicker {z-index:1200;} </style> <script src="lib/jquery-1.4.2.js" type="text/javascript"></script> <script src="lib/jquery-ui-1.7.2.custom.min.js" type="text/javascript"></script> <script src="lib/jquery.layout.js" type="text/javascript"></script> <script src="lib/grid.locale-en.js" type="text/javascript"></script> <script src="lib/jquery.jqGrid.min.js" type="text/javascript"></script> <script src="lib/jquery.tablednd.js" type="text/javascript"></script> <script src="lib/jquery.contextmenu.js" type="text/javascript"></script> <script src="lib/ui.multiselect.js" type="text/javascript"></script> <script type="text/javascript"> // We use a document ready jquery function. 
jQuery(document).ready(function(){ jQuery("#list").jqGrid({ datatype: "local", height: 250, colNames:['Inv No','Date', 'Client', 'Amount','Tax','Total', 'Notes'], colModel:[ {name:'id',index:'id', width:60, sorttype:"int"}, {name:'invdate',index:'invdate', width:90, sorttype:"date"}, {name:'name',index:'name', width:100}, {name:'amount',index:'amount', width:80, align:"right",sorttype:"float"}, {name:'tax',index:'tax', width:80, align:"right",sorttype:"float"}, {name:'total',index:'total', width:80,align:"right",sorttype:"float"}, {name:'note',index:'note', width:150, sortable:false} ], pager: '#pager', rowNum:10, rowList:[10,20,30], sortname: 'id', sortorder: 'desc', viewrecords: true, multiselect: true, imgpath: "lib/basic/images", caption: "Manipulating Array Data" }); }); var mydata = [ {id:"1",invdate:"2007-10-01",name:"test",note:"note",amount:"200.00",tax:"10.00",total:"210.00"}, {id:"2",invdate:"2007-10-02",name:"test2",note:"note2",amount:"300.00",tax:"20.00",total:"320.00"}, {id:"3",invdate:"2007-09-01",name:"test3",note:"note3",amount:"400.00",tax:"30.00",total:"430.00"}, {id:"4",invdate:"2007-10-04",name:"test",note:"note",amount:"200.00",tax:"10.00",total:"210.00"}, {id:"5",invdate:"2007-10-05",name:"test2",note:"note2",amount:"300.00",tax:"20.00",total:"320.00"}, {id:"6",invdate:"2007-09-06",name:"test3",note:"note3",amount:"400.00",tax:"30.00",total:"430.00"}, {id:"7",invdate:"2007-10-04",name:"test",note:"note",amount:"200.00",tax:"10.00",total:"210.00"}, {id:"8",invdate:"2007-10-03",name:"test2",note:"note2",amount:"300.00",tax:"20.00",total:"320.00"}, {id:"9",invdate:"2007-09-01",name:"test3",note:"note3",amount:"400.00",tax:"30.00",total:"430.00"} ]; for(var i=0;i<=mydata.length;i++) jQuery("#list").jqGrid('addRowData',i + 1, mydata1[i]); </script> </head> <body> <table id="list" class="scroll"></table> <div id="pager" class="scroll" style="text-align:center;"></div> </body>
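
    For what it's worth, a sketch of the usual shape of this demo (only the loading loop changes, the grid options stay as above): the data array is referenced under two different names (mydata vs. mydata1), the loop runs one index past the end of the array, and it executes at parse time in the head, before the #list table exists. Moving the loop inside the ready handler and fixing the name and the bound normally makes the rows appear:

        jQuery(document).ready(function () {
            jQuery("#list").jqGrid({ /* ...same options as above... */ });

            for (var i = 0; i < mydata.length; i++) {                   // "<", not "<=", so the last index stays valid
                jQuery("#list").jqGrid('addRowData', i + 1, mydata[i]); // "mydata", not "mydata1"
            }
        });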

    Read the article

  • Uploadify Minimum Image Width And Height

    - by Richard Knop
    So I am using the Uplodify plugin to allow users to upload multiple images at once. The problem is I need to set a minimum width and height for images. Let's say 150x150px is the smallest image users can upload. How can I set this limitation in the Uploadify plugin? When user tries to upload smaller picture, I would like to display some error message as well. Here is the PHP file that is called bu the plugin to upload images: <?php define('BASE_PATH', substr(dirname(dirname(__FILE__)), 0, -22)); // set the include path set_include_path(BASE_PATH . '/../library' . PATH_SEPARATOR . BASE_PATH . '/library' . PATH_SEPARATOR . get_include_path()); // autoload classes from the library function __autoload($class) { include str_replace('_', '/', $class) . '.php'; } $configuration = new Zend_Config_Ini(BASE_PATH . '/application' . '/configs/application.ini', 'development'); $dbAdapter = Zend_Db::factory($configuration->database); Zend_Db_Table_Abstract::setDefaultAdapter($dbAdapter); function _getTable($table) { include BASE_PATH . '/application/modules/default/models/' . $table . '.php'; return new $table(); } $albums = _getTable('Albums'); $media = _getTable('Media'); if (false === empty($_FILES)) { $tempFile = $_FILES['Filedata']['tmp_name']; $extension = end(explode('.', $_FILES['Filedata']['name'])); // insert temporary row into the database $data = array(); $data['type'] = 'photo'; $data['type2'] = 'public'; $data['status'] = 'temporary'; $data['user_id'] = $_REQUEST['user_id']; $paths = $media->add($data, $extension, $dbAdapter); // save the photo move_uploaded_file($tempFile, BASE_PATH . '/public/' . $paths[0]); // create a thumbnail include BASE_PATH . '/library/My/PHPThumbnailer/ThumbLib.inc.php'; $thumb = PhpThumbFactory::create(BASE_PATH . '/public/' . $paths[0]); $thumb->adaptiveResize(85, 85); $thumb->save(BASE_PATH . '/public/' . $paths[1]); // add watermark to the bottom right corner $pathToFullImage = BASE_PATH . '/public/' . $paths[0]; $size = getimagesize($pathToFullImage); switch ($extension) { case 'gif': $im = imagecreatefromgif($pathToFullImage); break; case 'jpg': $im = imagecreatefromjpeg($pathToFullImage); break; case 'png': $im = imagecreatefrompng($pathToFullImage); break; } if (false !== $im) { $white = imagecolorallocate($im, 255, 255, 255); $font = BASE_PATH . '/public/fonts/arial.ttf'; imagefttext($im, 13, // font size 0, // angle $size[0] - 132, // x axis (top left is [0, 0]) $size[1] - 13, // y axis $white, $font, 'HunnyHive.com'); switch ($extension) { case 'gif': imagegif($im, $pathToFullImage); break; case 'jpg': imagejpeg($im, $pathToFullImage, 100); break; case 'png': imagepng($im, $pathToFullImage, 0); break; } imagedestroy($im); } echo "1"; } And here's the javascript: $(document).ready(function() { $('#photo').uploadify({ 'uploader' : '/flash-uploader/scripts/uploadify.swf', 'script' : '/flash-uploader/scripts/upload-public-photo.php', 'cancelImg' : '/flash-uploader/cancel.png', 'scriptData' : {'user_id' : 'USER_ID'}, 'queueID' : 'fileQueue', 'auto' : true, 'multi' : true, 'sizeLimit' : 2097152, 'fileExt' : '*.jpg;*.jpeg;*.gif;*.png', 'wmode' : 'transparent', 'onComplete' : function() { $.get('/my-account/temporary-public-photos', function(data) { $('#temporaryPhotos').html(data); }); } }); $('#upload_public_photo').hover(function() { var titles = '{'; $('.title').each(function() { var title = $(this).val(); if ('Title...' 
!= title) { var id = $(this).attr('name'); id = id.substr(5); title = jQuery.trim(title); if (titles.length > 1) { titles += ','; } titles += '"' + id + '"' + ':"' + title + '"'; } }); titles += '}'; $('#titles').val(titles); }); }); Now bear in mind that I know how to check images dimensions in the PHP file. But I'm not sure how to modify the javascript so it won't upload images with very small dimensions.
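
    One hedged sketch for the server side (getimagesize() is a standard PHP function; how the message reaches the user depends on the Uploadify version and its onComplete/onError callbacks): reject undersized files before doing any processing and echo something other than "1" so the client can detect the failure.

        if (false === empty($_FILES)) {
            $tempFile = $_FILES['Filedata']['tmp_name'];
            $size = getimagesize($tempFile);           // array(width, height, ...) or false
            if ($size === false || $size[0] < 150 || $size[1] < 150) {
                echo 'ERROR: the image must be at least 150x150 pixels';
                exit;                                  // skip the move/thumbnail/watermark steps
            }
            // ... continue with the existing upload code ...
        }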

    Read the article

  • jQuery ajax doesn't seem to be reading HTML data in Chromium

    - by Mahesh
    I have an HTML (App) file that reads another HTML (data) file via jQuery.ajax(). It then finds specific tags in the data HTML file and uses text within the tags to display sort-of tool tips. Here's the App HTML file: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" lang="en-US" xml:lang="en-US"> <head> <title>Test</title> <style type="text/css"> <!--/* <![CDATA[ */ body { font-family : sans-serif; font-size : medium; margin-bottom : 5em; } a, a:hover, a:visited { text-decoration : none; color : #2222aa; } a:hover { background-color : #eeeeee; } #stat_preview { position : absolute; background : #ccc; border : thin solid #aaa; padding : 3px; font-family : monospace; height : 2.5em; } /* ]]> */--> </style> <script type="text/javascript" src="http://code.jquery.com/jquery-1.4.2.min.js"></script> <script type="text/javascript"> //<![CDATA[ $(document).ready(function() { $("#stat_preview").hide(); $(".cfg_lnk").mouseover(function () { lnk = $(this); $.ajax({ url: lnk.attr("href"), success: function (data) { console.log (data); $("#stat_preview").html("A heading<br>") .append($(".tool_tip_text", $(data)).slice(0,3).text()) .css('left', (lnk.offset().left + lnk.width() + 30)) .css('top', (lnk.offset().top + (lnk.height()/2))) .show(); } }); }).mouseout (function () { $("#stat_preview").hide(); }); }); //]]> </script> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /> </head> <body> <h1>Test</h1> <ul> <li><a class="cfg_lnk" href="data.html">Sample data</a></li> </ul> <div id="stat_preview"></div> </body> </html> And here is the data HTML <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" lang="en-US" xml:lang="en-US"> <head> <title>Test</title> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /> </head> <body> <h1>Test</h1> <table> <tr> <td class="tool_tip_text"> Some random value 1</td> <td class="tool_tip_text"> Some random value 2</td> <td class="tool_tip_text"> Some random value 3</td> <td class="tool_tip_text"> Some random value 4</td> <td class="tool_tip_text"> Some random value 5</td> </tr> <tr> <td class="tool_top_text"> Some random value 11</td> <td class="tool_top_text"> Some random value 21</td> <td class="tool_top_text"> Some random value 31</td> <td class="tool_top_text"> Some random value 41</td> <td class="tool_top_text"> Some random value 51</td> </tr> </table> </body> </html> This is working as intended in Firefox, but not in Chrome (Chromium 5.0.356.0). The console.log (data) displays empty string in Chromium's JavaScript console. Firebug in Firefox, however, displays the entire data HTML. Am I missing something? Any pointers?

    Read the article

  • Asp.Net Login control (Visual Web Dev)

    - by craig
    This is the code when you take the Login control from the toolbox. <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title></title> </head> <body> <form id="form1" runat="server"> <div> <asp:Login ID="Login1" runat="server" onauthenticate="Login1_Authenticate" BackColor="#F7F7DE" BorderColor="#CCCC99" BorderStyle="Solid" BorderWidth="1px" Font-Names="Verdana" Font-Size="10pt"> <LayoutTemplate> <table border="0" cellpadding="1" cellspacing="0" style="border-collapse:collapse;"> <tr> <td> <table border="0" cellpadding="0"> <tr> <td align="center" colspan="2"> Log In</td> </tr> <tr> <td align="right"> <asp:Label ID="UserNameLabel" runat="server" AssociatedControlID="UserName">User Name:</asp:Label> </td> <td> <asp:TextBox ID="UserName" runat="server" ></asp:TextBox> <asp:RequiredFieldValidator ID="UserNameRequired" runat="server" ControlToValidate="UserName" ErrorMessage="User Name is required." ToolTip="User Name is required." ValidationGroup="Login1">*</asp:RequiredFieldValidator> </td> </tr> <tr> <td align="right"> <asp:Label ID="PasswordLabel" runat="server" AssociatedControlID="Password">Password:</asp:Label> </td> <td> <asp:TextBox ID="Password" runat="server" TextMode="Password"></asp:TextBox> <asp:RequiredFieldValidator ID="PasswordRequired" runat="server" ControlToValidate="Password" ErrorMessage="Password is required." ToolTip="Password is required." ValidationGroup="Login1">*</asp:RequiredFieldValidator> </td> </tr> <tr> <td colspan="2"> <asp:CheckBox ID="RememberMe" runat="server" Text="Remember me next time." /> </td> </tr> <tr> <td align="center" colspan="2" style="color:Red;"> <asp:Literal ID="FailureText" runat="server" EnableViewState="False"></asp:Literal> </td> </tr> <tr> <td align="right" colspan="2"> <asp:Button ID="LoginButton" runat="server" CommandName="Login" Text="Log In" ValidationGroup="Login1" onclick="LoginButton_Click" /> </td> </tr> </table> </td> </tr> </table> </LayoutTemplate> <TitleTextStyle BackColor="#6B696B" Font-Bold="True" ForeColor="#FFFFFF" /> </asp:Login> </div> </form> </body> </html> Part of my aspx.cs protected void LoginButton_Click(object sender, EventArgs e) { String sUserName = UserName.Text; String sPassword = Password.Text; Error 1 The name 'UserName' does not exist in the current context Error 2 The name 'Password' does not exist in the current context Error 3 'ASP.default_aspx' does not contain a definition for 'Login1_Authenticate' and no extension method 'Login1_Authenticate' accepting a first argument of type 'ASP.default_aspx' could be found (are you missing a using directive or an assembly reference?) What am I doing wrong?
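
    A sketch of the two usual fixes (hypothetical code-behind; ValidateUser is a placeholder for a real membership or database check): the UserName and Password textboxes live inside the Login control's template, so they are not page-level fields, and the markup wires up an onauthenticate handler that has to exist in the code-behind.

        protected void Login1_Authenticate(object sender, AuthenticateEventArgs e)
        {
            // The Login control exposes the values typed into its templated textboxes.
            string userName = Login1.UserName;
            string password = Login1.Password;

            // Alternatively, fetch a templated control explicitly:
            // TextBox userNameBox = (TextBox)Login1.FindControl("UserName");

            e.Authenticated = ValidateUser(userName, password);  // placeholder check
        }

    The same applies to LoginButton_Click: reference Login1.UserName / Login1.Password (or FindControl) instead of UserName and Password directly.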

    Read the article

  • How to refresh DataGrid and DropDown on main page after hiding modal popup

    - by James
    Hi, I am adding records to a database from a modal popup. After hiding the modal popup, the page has not been refreshed even though I have Rebound the controls. I have reviewed a few postings on the web about this but the solution still evades me. I have attached my code after removing some of the extra detail... It seems I need to cause a postback but I don't know what needs to be changed. Some posts have talked about the extender being misplaced. Anyway, thank you James <asp:Content ID="Content1" ContentPlaceHolderID="Head" Runat="Server"> <div class="divBorder"> <asp:DataGrid id="dgrSessionFolders" runat="server" BorderWidth="2px" BorderStyle="Solid" BorderColor="#C0C0FF" Font-Names="Arial" Font-Bold="True" Font-Size="8pt" GridLines="Horizontal" AutoGenerateColumns="False" PageSize="9999" AllowPaging="False" OnItemCommand="dgrSessionFolders_Command" OnItemDataBound="CheckSessionFolderStatus" HorizontalAlign="Left" ForeColor="Blue" ShowFooter="True" CellPadding="2" OnSortCommand="dgrSessionFolders_Sort" AllowSorting="True"> </asp:DataGrid> </div> &nbsp;&nbsp;&nbsp; <asp:Label ID="Errormsg" runat="server" ForeColor="#CC0000"></asp:Label> <asp:UpdatePanel ID="UpdatePanel1" runat="server" RenderMode="Inline" ChildrenAsTriggers="false" UpdateMode="Conditional"> <Triggers> <asp:AsyncPostBackTrigger ControlID="btnEditTopic" /> <asp:AsyncPostBackTrigger ControlID="btnAdd" /> <asp:AsyncPostBackTrigger ControlID="btnUpdate" /> <asp:AsyncPostBackTrigger ControlID="btnDelete" /> <asp:AsyncPostBackTrigger ControlID="btnClear" /> <asp:AsyncPostBackTrigger ControlID="btnAddTopic" /> <asp:AsyncPostBackTrigger ControlID="btnUpdateTopic" /> <asp:AsyncPostBackTrigger ControlID="btnDeleteTopic" /> </Triggers> <ContentTemplate> <asp:panel id="pnl" runat="server" HorizontalAlign="Center" Height="48px" Width="100%" > &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <asp:ImageButton ID="btnEditTopic" runat="server" AlternateText="Edit Topic" ImageUrl="~/App_Themes/Common/images/BtnEditTopic.jpg" Height="28px"> </asp:ImageButton> <cc1:ModalPopupExtender ID="btnEditTopic_ModalPopupExtender" runat="server" BackgroundCssClass="modalBackground" DropShadow="true" Enabled="true" PopupControlID="pnlEditTopic" TargetControlID="btnEditTopicHidden" CancelControlID="btnEditTopicClose"> </cc1:ModalPopupExtender> <asp:ImageButton ID="btnAdd" runat="server" AlternateText="Add Folder" ImageUrl="~/App_Themes/Common/images/BtnAddFolder.jpg" Height="28px"> </asp:ImageButton> <asp:ImageButton ID="btnUpdate" runat="server" AlternateText="Update Folder" ImageUrl="~/App_Themes/Common/images/BtnUpdateFolder.jpg" Height="28px"> </asp:ImageButton> <asp:ImageButton ID="btnDelete" runat="server" AlternateText="Delete Folder" ImageUrl="~/App_Themes/Common/images/BtnDeleteFolder.jpg" Height="28px"> </asp:ImageButton> <asp:ImageButton ID="BtnClear" runat="server" AlternateText="Clear Screen Input Fields" ImageUrl="~/App_Themes/Common/images/BtnAddMode.jpg" Height="28px"> </asp:ImageButton> <asp:Button ID="btnEditTopicHidden" runat="server" Enabled="false" Text="" Style="visibility: hidden" /> </asp:panel> <asp:Panel ID="pnlEditTopic" runat="server" CssClass="modalPopupEditTopic" Style="display: none;" > <table cellspacing="0" class="borderTable0" width="100%" style=""> <tr> <td colspan="10" class="Subhdr" align="center" style="width:100%"> <asp:label id="lblTopicScreenHdr" Cssclass="ScreenHdr" runat="server">Topic Maintenance</asp:label> </td> </tr> <tr> <td colspan="6"> <asp:Label ID="TopicPopErrorMsg" runat="server" 
ForeColor="#CC0000">&nbsp;</asp:Label> </td> </tr> <tr style="height:4px"> <td colspan="6" align="center"> <asp:ImageButton ID="btnAddTopic" runat="server" AlternateText="Add Topic" ImageUrl="~/App_Themes/Common/images/BtnApply.jpg" Height="28px"> </asp:ImageButton> <asp:ImageButton ID="btnUpdateTopic" runat="server" AlternateText="Update Topic" ImageUrl="~/App_Themes/Common/images/BtnApply.jpg" Height="28px"> </asp:ImageButton> <asp:ImageButton ID="btnDeleteTopic" runat="server" AlternateText="Delete Topic" ImageUrl="~/App_Themes/Common/images/BtnDelete.jpg" Height="28px"> </asp:ImageButton> <asp:ImageButton ID="btnEditTopicClose" runat="server" AlternateText="Close Edit Topic Popup" ImageUrl="~/App_Themes/Common/images/BtnCancel.jpg" Height="28px"> </asp:ImageButton> </td> </tr> </table> </asp:Panel> </ContentTemplate> </asp:UpdatePanel> Private Sub btnAddTopic_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnAddTopic.Click 'Add the Topic table entry AddTopic() 'Display an informational message Errormsg.Text = "The Topic has been successfully added, thank you! " Errormsg.ForeColor = Drawing.Color.Blue 'Rebind the Topic Drop Down and set to added Topic ddlSessionTopic.DataBind() ddlSessionTopic.SelectedValue = drTopic("TOPC_ID") 'Rebind the Session Folders grid RebindGrid() 'Hide the Topic Popup btnEditTopic_ModalPopupExtender.Hide() End Sub Private Sub RebindGrid() cnnSQL = New SqlConnection(strConnection) cmdSQL = New SqlCommand("GetSessionFoldersForGrid", cnnSQL) cmdSQL.CommandType = CommandType.StoredProcedure cmdSQL.Parameters.Clear() cnnSQL.Open() dadSQL = New SqlDataAdapter(cmdSQL) dadSQL.SelectCommand = cmdSQL dadSQL.Fill(dtSessionFolderGrid) cnnSQL.Close() dvSessionFolderGrid = dtSessionFolderGrid.DefaultView dvSessionFolderGrid.Sort = String.Format("{0} {1}{2}", so.Sortfield, so.SortDirection, so.SortSuffix) dgrSessionFolders.DataSource = dvSessionFolderGrid dgrSessionFolders.DataBind() End Sub

    Read the article

  • Javascript stockticker : not showing data on php page

    - by developer
    iam not getting any javascript errors , code is getting rendered properly only, but still server not displaying data on the page. please check the code below . <style type="text/css"> #marqueeborder { color: #cccccc; background-color: #EEF3E2; font-family:"Lucida Console", Monaco, monospace; position:relative; height:20px; overflow:hidden; font-size: 0.7em; } #marqueecontent { position:absolute; left:0px; line-height:20px; white-space:nowrap; } .stockbox { margin:0 10px; } .stockbox a { color: #cccccc; text-decoration : underline; } </style> </head> <body> <div id="marqueeborder" onmouseover="pxptick=0" onmouseout="pxptick=scrollspeed"> <div id="marqueecontent"> <?php // Original script by Walter Heitman Jr, first published on http://techblog.shanock.com // List your stocks here, separated by commas, no spaces, in the order you want them displayed: $stocks = "idt,iye,mill,pwer,spy,f,msft,x,sbux,sne,ge,dow,t"; // Function to copy a stock quote CSV from Yahoo to the local cache. CSV contains symbol, price, and change function upsfile($stock) { copy("http://finance.yahoo.com/d/quotes.csv?s=$stock&f=sl1c1&e=.csv","stockcache/".$stock.".csv"); } foreach ( explode(",", $stocks) as $stock ) { // Where the stock quote info file should be... $local_file = "stockcache/".$stock.".csv"; // ...if it exists. If not, download it. if (!file_exists($local_file)) { upsfile($stock); } // Else,If it's out-of-date by 15 mins (900 seconds) or more, update it. elseif (filemtime($local_file) <= (time() - 900)) { upsfile($stock); } // Open the file, load our values into an array... $local_file = fopen ("stockcache/".$stock.".csv","r"); $stock_info = fgetcsv ($local_file, 1000, ","); // ...format, and output them. I made the symbols into links to Yahoo's stock pages. echo "<span class=\"stockbox\"><a href=\"http://finance.yahoo.com/q?s=".$stock_info[0]."\">".$stock_info[0]."</a> ".sprintf("%.2f",$stock_info[1])." <span style=\""; // Green prices for up, red for down if ($stock_info[2]>=0) { echo "color: #009900;\">&uarr;"; } elseif ($stock_info[2]<0) { echo "color: #ff0000;\">&darr;"; } echo sprintf("%.2f",abs($stock_info[2]))."</span></span>\n"; // Done! fclose($local_file); } ?> <span class="stockbox" style="font-size:0.6em">Quotes from <a href="http://finance.yahoo.com/">Yahoo Finance</a></span> </div> </div> </body> <script type="text/javascript"> // Original script by Walter Heitman Jr, first published on http://techblog.shanock.com // Set an initial scroll speed. This equates to the number of pixels shifted per tick var scrollspeed=2; var pxptick=scrollspeed; var marqueediv=''; var contentwidth=""; var marqueewidth = ""; function startmarquee(){ alert("hi"); // Make a shortcut referencing our div with the content we want to scroll marqueediv=document.getElementById("marqueecontent"); //alert("marqueediv"+marqueediv); alert("hi"+marqueediv.innerHTML); // Get the total width of our available scroll area marqueewidth=document.getElementById("marqueeborder").offsetWidth; alert("marqueewidth"+marqueewidth); // Get the width of the content we want to scroll contentwidth=marqueediv.offsetWidth; alert("contentwidth"+contentwidth); // Start the ticker at 50 milliseconds per tick, adjust this to suit your preferences // Be warned, setting this lower has heavy impact on client-side CPU usage. Be gentle. var lefttime=setInterval("scrollmarquee()",50); alert("lefttime"+lefttime); } function scrollmarquee(){ // Check position of the div, then shift it left by the set amount of pixels. 
if (parseInt(marqueediv.style.left)>(contentwidth*(-1))) marqueediv.style.left=parseInt(marqueediv.style.left)-pxptick+"px"; //alert("hikkk"+marqueediv.innerHTML);} // If it's at the end, move it back to the right. else{ alert("marqueewidth"+marqueewidth); marqueediv.style.left=parseInt(marqueewidth)+"px"; } } window.onload=startmarquee; </script> </html> Below is the server displayed page. I have updated with screenshot with your suggestion, i made change in html too, to check what is showing by child dev

    Read the article

  • Fix a box 250px from top of content with wrapping content

    - by Matt
    I'm having trouble left aligning a related links div inside a block of text, exactly 250 pixels from the top of a content area, while retaining word wrapping. I attempted to do this with absolute positioning, but the text in the content area doesn't wrap around the content. I would just fix the related links div in the content, however, this will display on an article page, so I would like for it to be done without placing it in a specific location in the content. Is this possible? If so, can someone help me out with the CSS for this? Example image of desired look & feel... UPDATE: For simplicity, I've added example code. You can view this here: http://www.focusontheclouds.com/files/example.html. Example HTML: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Example Page</title> <style> body { width: 400px; font-family: Arial, sans-serif; } h1 { font-family: Georgia, serif; font-weight: normal; } .relatedLinks { position: relative; width: 150px; text-align: center; background: #f00; height: 300px; float: left; margin: 0 10px 10px 0; } </style> </head> <body> <div class="relatedLinks"><h1>Related Links</h1></div> <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc tempus est luctus ante auctor et ullamcorper metus ullamcorper. Vestibulum molestie, lectus sed luctus egestas, dolor ipsum aliquet orci, ac bibendum quam elit blandit nulla.</p> <p>In sit amet sagittis urna. In fermentum enim et lectus consequat a congue elit porta. Pellentesque nisl quam, elementum vitae elementum et, facilisis quis velit. Nam odio neque, viverra in consectetur at, mollis eu mi. Etiam tempor odio vitae ligula ultrices mollis. </p> <p>Donec eget ligula id augue pulvinar lobortis. Mauris tincidunt suscipit felis, eget eleifend lectus molestie in. Donec et massa arcu. Aenean eleifend nulla at odio adipiscing quis interdum arcu dictum. Fusce tellus dolor, tempor ut blandit a, dapibus ac ante. Nulla eget ligula at turpis consequat accumsan egestas nec purus. Nullam sit amet turpis ac lacus tincidunt hendrerit. Nulla iaculis mauris sed enim ornare molestie. </p> <p>Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Maecenas non purus diam. Suspendisse iaculis tincidunt tempor. Suspendisse ut pretium lectus. Maecenas id est dui.</p> <p>Nunc pretium ipsum id libero rhoncus varius. Duis imperdiet elit ut turpis porta pharetra. Nulla vel dui vitae ipsum sollicitudin varius. Duis sagittis elit felis, quis interdum odio. </p> <p>Morbi imperdiet volutpat sodales. Aenean non euismod est. Cras ultricies felis non tortor congue ultrices. Proin quis enim arcu. Cras mattis sagittis erat, elementum bibendum ipsum imperdiet eu. Morbi fringilla ullamcorper elementum. Vestibulum semper dui non elit luctus quis accumsan ante scelerisque.</p> </body> </html>
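
    One technique worth trying (a sketch only, not verified against the example page; the pusher class name is made up): an empty zero-width float reserves the first 250 pixels of height, and the related-links box is floated with clear: left so it starts below the spacer while the paragraphs keep wrapping around it.

        .pusher { float: left; width: 0; height: 250px; }     /* invisible spacer float */
        .relatedLinks { float: left; clear: left; /* ...existing styles... */ }

        <div class="pusher"></div>
        <div class="relatedLinks"><h1>Related Links</h1></div>
        <p>Lorem ipsum dolor sit amet, ...</p>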

    Read the article

  • navbar hover issue in ie7

    - by Joel
    I'm having a problem with a child list not hovering correctly in IE7. Other browsers and IE7 seem to work fine. Here is the site: http://rattletree.com/index_1.php If you hover over the nav bars you'll see the sub-list come into view. You can see that the arrow image is not below the navbar in IE7 only. html: <div id="navbar2"> <ul id="navbar"> <li id="index"><a href="index.php">About Rattletree</a></li> <li id="upcomingshows"><a href="upcomingshows.php">Calendar</a></li> <li id="booking"><a href="booking.php">Contact</a> <ul class="innerlist"> <li class="innerlist"><img class="arrowAdjust" src="images/curved_arrow.png"</img><a href="#">Booking Information</a></li> <li class="innerlist"><a href="#">Press</a></li> </ul> </li> <li id="instruments"><a href="instruments.php">The Band</a> <ul class="innerlist"> <li class="innerlist"><img class="arrowAdjust" src="images/curved_arrow.png"</img><a href="#">The Instruments</a></li> <li class="innerlist"><a href="#">The Players</a></li> </ul> </li> <li id="classes"><a href="classes.php">Sights &amp; Sounds</a> <ul class="innerlist"> <li class="innerlist"><img class="arrowAdjust" src="images/curved_arrow.png"</img><a href="#">Listen</a></li> <li class="innerlist"><a href="#">Photos</a></li> <li class="innerlist"><a href="#">Video</a></li> </ul> </li> <li id"classes"><a href="classes.php">Workshops &amp; Classes</a></li> </ul> </div> and css: /* OUTER LIST STYLING */ div#navbar2 { position:relative; width: 100%; border-top: solid #000 1px; border-bottom: solid #546F8B 1px; background-color: #546F8B; } div#navbar2 ul#navbar { padding: 0px; margin: 10px 0; font-family: Arial, Helvetica, sans-serif; font-size: 16px; letter-spacing:1px; color: #FFF; white-space: nowrap; display:block; } div#navbar2 ul#navbar li { position:relative; margin: 0px; padding:0px; list-style-type: none; display:inline; } div#navbar2 li a { text-decoration: none; color: #fff; margin:0; padding: 11px 12px; } div#navbar2 li a:link { color: #FFF: } div#navbar2 li a:visited { color: #ffffff; } div#navbar2 li a:hover { color: #000; background-color: #FDFFC9; } /* INNER LIST STYLING */ div#navbar2 ul#navbar li ul.innerlist{ display: none; color:#000; } div#navbar2 ul#navbar li ul.innerlist li{ color:#000; } div#navbar2 ul#navbar li:hover ul.innerlist { position: absolute; display: inline; left: 0; width: 100%; margin: 30px 0 0px 0px; padding: 0; color:#000; } div#navbar2 ul#navbar li.innerlist a { text-decoration: none; font-weight:bold; color: #000; padding: 10px 15px 20px 15px; margin:0; } div#navbar2 li.innerlist a:link { color: #000: } div#navbar2 li.innerlist a:visited { color: #000; } div#navbar2 ul#navbar li.innerlist a:hover { color: #e62d31; background-color:transparent; } img.arrowAdjust{ padding:0px 0 0 20px; margin:0; }

    Read the article

  • navbar hover issue in ie8

    - by Joel
    I'm having a problem with a child list not hovering correctly in IE8. Other browsers and IE7 seem to work fine. Here is the site: http://rattletree.com/index_1.php If you hover over the nav bars you'll see the sub-list come into view. You can see that the arrow image is not below the navbar in IE8 only. html: <div id="navbar2"> <ul id="navbar"> <li id="index"><a href="index.php">About Rattletree</a></li> <li id="upcomingshows"><a href="upcomingshows.php">Calendar</a></li> <li id="booking"><a href="booking.php">Contact</a> <ul class="innerlist"> <li class="innerlist"><img class="arrowAdjust" src="images/curved_arrow.png"</img><a href="#">Booking Information</a></li> <li class="innerlist"><a href="#">Press</a></li> </ul> </li> <li id="instruments"><a href="instruments.php">The Band</a> <ul class="innerlist"> <li class="innerlist"><img class="arrowAdjust" src="images/curved_arrow.png"</img><a href="#">The Instruments</a></li> <li class="innerlist"><a href="#">The Players</a></li> </ul> </li> <li id="classes"><a href="classes.php">Sights &amp; Sounds</a> <ul class="innerlist"> <li class="innerlist"><img class="arrowAdjust" src="images/curved_arrow.png"</img><a href="#">Listen</a></li> <li class="innerlist"><a href="#">Photos</a></li> <li class="innerlist"><a href="#">Video</a></li> </ul> </li> <li id"classes"><a href="classes.php">Workshops &amp; Classes</a></li> </ul> </div> and css: /* OUTER LIST STYLING */ div#navbar2 { position:relative; width: 100%; border-top: solid #000 1px; border-bottom: solid #546F8B 1px; background-color: #546F8B; } div#navbar2 ul#navbar { padding: 0px; margin: 10px 0; font-family: Arial, Helvetica, sans-serif; font-size: 16px; letter-spacing:1px; color: #FFF; white-space: nowrap; display:block; } div#navbar2 ul#navbar li { position:relative; margin: 0px; padding:0px; list-style-type: none; display:inline; } div#navbar2 li a { text-decoration: none; color: #fff; margin:0; padding: 11px 12px; } div#navbar2 li a:link { color: #FFF: } div#navbar2 li a:visited { color: #ffffff; } div#navbar2 li a:hover { color: #000; background-color: #FDFFC9; } /* INNER LIST STYLING */ div#navbar2 ul#navbar li ul.innerlist{ display: none; color:#000; } div#navbar2 ul#navbar li ul.innerlist li{ color:#000; } div#navbar2 ul#navbar li:hover ul.innerlist { position: absolute; display: inline; left: 0; width: 100%; margin: 30px 0 0px 0px; padding: 0; color:#000; } div#navbar2 ul#navbar li.innerlist a { text-decoration: none; font-weight:bold; color: #000; padding: 10px 15px 20px 15px; margin:0; } div#navbar2 li.innerlist a:link { color: #000: } div#navbar2 li.innerlist a:visited { color: #000; } div#navbar2 ul#navbar li.innerlist a:hover { color: #e62d31; background-color:transparent; } img.arrowAdjust{ padding:0px 0 0 20px; margin:0; }

    Read the article

  • Problems with a from CSS

    - by Michael
    I am trying to create a fairly basic form with in my maincontent. I am sure I am coding things incorrectly and it is driving me crazy. Note my code. I get extremely wide vertical spacing in IE 7 and the bacground color between the field sets does not work correctly. All is good in FF. My CSS is: fieldset { margin: 1.5em 0 0 0; padding: 0; border-style: none; border-top: 1px solid #BFBAB0; background-color: #FFFFFF; } legend { margin-left: 1em; color: #000000; font-weight: bold; } fieldset ol { padding: 1em 1em 0 1em; list-style: none; } fieldset li { padding-bottom: 1em; } fieldset.submit { border-style: none; } { var w = document.myform.mylist.selectedIndex; var selected_text = document.myform.mylist.options[w].text; alert(selected_text); } label em { display: block; color: #900; font-size: 85%; font-style: normal; text-transform: uppercase; } This is my html code. <div id="mainContent1"> <form name="myform"> <label for="mylist"><strong>Select an Account Type:</strong></label> <select name="mylist"><option value="traditional">Traditional Account</option> <option value="paperless">Paperless Account</option> </select> </form> <br /><a> </a> <form action="example.php"> <fieldset> <legend>Contact Details</legend> <ol> <li> <label for="name">Name:</label> <input id="name" name="name" class="text" type="text" /> <label for="name"> <em>required</em> </label> </li> <li> <label for="email">Email address:</label> <input id="email" name="email" class="text" type="text" /> <label for="name"> <em>required</em> </li> <li> <label for="phone">Telephone:</label> <input id="phone" name="phone" class="text" type="text" /> <label for="name"> <em>required</em> <ol> <li> <input id="option1" name="option1" class="checkbox" type="checkbox" value="1" /> <label for="option1">Savings</label> </li> <li> <input id="option2" name="option2" class="checkbox" type="checkbox" value="1" /> <label for="option2">Checkings</label> </li> </ol> </fieldset> <fieldset> <legend>Delivery Address</legend> <ol> <li> <label for="address1">Address 1:</label> <input id="address1" name="address1" class="text" type="text" /> </li> <li> <label for="city">City:</label> <input id="city" name="city" class="text" type="text" /> </li> <li> <label for="postcode">Zip Code:</label> <input id="postcode" name="postcode" class="text textSmall" type="text" /> </li> <li> <label for="country">Country:</label> <input id="country" name="country" class="text" type="text" /> </li> </ol> </fieldset> <fieldset class="submit"> <input class="submit" type="submit" value="Submit" /> </fieldset> <fieldset class="clear"> <input class="clear" type="clear" value="Submit" /> </fieldset> </form>

    Read the article

  • Odd problem with IE8 and z-index CSS property

    - by DK39
    I not been able to put one DIV over his parent DIV in Internet Explorer. With Firefox is working as suposed to. The odd part is that if I open the html file directly in IE, everything works fine. But if I upload to the server and open from there, the div is hidden underneath his parent. I've tried several z-index combinations and none works. Here's the code: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head> <title>Test</title> <meta http-equiv="content-type" content="text-html; charset=utf-8" /> <style type="text/css"> .col { float:left; width:310px; margin-right:13px; } .art { position:relative; border-bottom: 1px solid #d0d0d0; font: normal normal bold 11px Arial,Verdana,Helvetica; color:#A0A0A0; width:310px; height:50px; top:0px; left: 0px; margin-right:10px; background-color:#F0F0F0; } .art a { padding:3px; display:block; width:304px; height:100%; color:#707070; } .art a:visited { color:#A0A0A0; } .art a:hover { background-color:#E0E0E0; } .box { z-index:1000; background-color:#A0A0A0; color:#404040; font: normal normal bold 11px Arial,Verdana,Helvetica; display:none; position:absolute; top:30px; left:10px; text-align:left; border:3px solid #707070; margin:5px 0px 5px 5px; font-size:10px; color:White; width:100%; } </style> <script type="text/javascript"> function sh(obj) { var el = document.getElementById(obj); if ( el.style.display != 'block' ) { el.style.display = 'block'; } else { el.style.display = 'none'; } } </script> </head> <body> <div class="col"> <div class="art"> <a href="" target="_blank" onmouseover="javascript:sh('i0')" onmouseout="javascript:sh('i0')">Title 1</a> <div id="i0" class="box"> <div class="text"> Les "chemises rouges" manifestent depuis la mi-mars pour faire tomber le gouvernement et occupent depuis trois semaines un quartier touristique et commerçant autour duquel ils ont érigé des barricades. </div> </div> </div> <div class="art"> <a href="" target="_blank" onmouseover="javascript:sh('i1')" onmouseout="javascript:sh('i1')">Title2</a> <div id="i1" class="box"> <div class="text"> Une association ardéchoise accueillant des séminaires de "bien-être" et de "développement personnel" a refusé d'accueillir un stage de danse en invoquant l'homosexualité des participants, ont indiqué aujourd'hui les organisateurs. </div> </div> </div> </div> </body> </html> What's is going on here?

    Read the article

  • CSS issue with margin: auto

    - by user1702273
    Hi am having an issue with the margin auto of my website where i have a wrapper div with the width set to 1000px and the margins top and bottom to 0 and left and right to auto. I have a navigation menu in the side bar, where i used java script to replace the same div with different tables. when i click a link in the menu the wrapper shifts right some px and the comes to original, I don't want that action i want the wrapper to be static and not to vary at any time. how can i achieve that. when i set the margin to just 0, so problem with positioning. But i want the wrapper to be centered. Here is my css code: body { background-color:#E2E3E4; color:#333; margin:0; padding:0; font-size: 12px; } #wrapper { width:1000px; margin:0 auto; margin-bottom:10px; } #header1 { width:1000px; height:44px; margin:0 auto; background-color:#ED6B06; } #header2 { width:1000px; height:40px; margin:0 auto; border-bottom:1px solid #EDE9DE; } #header3 { width:1000px; height:40px; margin:0 auto; border-bottom:1px solid #EDE9DE; } #header2 p { margin:0 auto; font-size:20pt; color: #364395; font-smooth: auto; margin-left:15px; margin-top:5px; } #welcome { width:600px; float:left; padding:10px; margin:0 auto; } #status{ margin:0 auto; width:50px; float:right; padding:10px; margin-top:3px; margin-right:15px; } #content { width:780px; float:right; } #sidebar { width:150px; margin-top:15px; margin-left:10px; float:left; border-right:1px solid #EDE9DE; margin-bottom:25px; } #footer { clear:both; margin:0 auto; width:1000px; height:44px; border-top:1px solid #EDE9DE; } HTML: <html> <head> <link rel="stylesheet" type="text/css" href="style/style.css" media="screen" /> <title>Pearson Schools Management Portal</title> </head> <body id="home"> <div id="wrapper"> <?php include('includes/header1.php'); ?> <?php include('includes/header2.php'); ?> <?php include('includes/header3.php'); ?> <div id="content"> <h2>Welcome to Portal!</h2> </div> <!-- end #content --> <?php include('includes/sidebar.php'); ?> <?php include('includes/footer.php'); ?> </div> <!-- End #wrapper --> <link rel="stylesheet" type="text/css" media="screen" href="http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/themes/base/jquery-ui.css"> <script type="text/javascript" src="http://code.jquery.com/jquery-latest.js"></script> <script type="text/javascript" src="http://jzaefferer.github.com/jquery-validation/jquery.validate.js"></script> <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jqueryui/1.8.16/jquery-ui.min.js"></script> <?php include('scripts/index_data.js'); ?> </body>

    Read the article

  • CSS: Centering a floated block level element in IE6 (It almost works)

    - by Louis W
    I have a block level element which I am centering on the page. I have gotten it to work for all other browsers except IE6 where it ALMOST works. http://tinyurl.com/28sh9eq If I view the page in IE6 the red box is slightly off center of the pink one in IE. If I then resize the browser window it snaps into place where I want it. Uhhhhh.... yea.... what gives? How come resizing the window makes it work? I have also tried setting an explicit width on the wrapper with no avail. <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"> <html> <head> <title></title> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <meta http-equiv="X-UA-Compatible" content="IE=7" /> <style type="text/css"> BODY { text-align: center; font-family: Arial; } .row_wrap { height: 100px; margin-bottom: 30px; background-color: pink; } .row { float: right; position: relative; left: -50%; text-align: left; clear: both; } .button1 { color: #FFF; height: 36px; text-decoration: none; position: relative; padding: 0 30px; background: url('button.gif') no-repeat 0 0; display: block; float: left; left: 50%; } .button1 .end { width: 20px; height: 37px; position: absolute; right: -2px; top: 0; background: url('button.gif') no-repeat right 0; } .button1 .text { font-size: 16px; font-weight: bold; white-space: nowrap; height: 36px; padding-top: 7px; display: block; float: left; } .button1 .text .arrow { vertical-align: 1px; } </style> </head> <body> <h2>RTL: Button 1</h2> <div class="row_wrap"> <div class="row" dir="rtl"> <a href="#" class="button1"> <span class="end"></span> <span class="text"><span class="arrow">»</span> Hello 1.</span> </a> </div> </div> <h2>RTL: Button 1-2</h2> <div class="row_wrap" style="width: 400px;"> <div class="row" dir="rtl"> <a href="#" class="button1"> <span class="end"></span> <span class="text"><span class="arrow">»</span> Hello 1.</span> </a> </div> </div> <br/><br/> <h2>Normal: Button 1</h2> <div class="row_wrap"> <div class="row"> <a href="#" class="button1"> <span class="end"></span> <span class="text"><span class="arrow">»</span> Hello.</span> </a> </div> </div> </body> Thanks for your help.

    Read the article

  • Unable to center text in IE but works in firefox

    - by greenpool
    Can somebody point out where I'm going wrong with the following code. Text inside td elements need to be centered except for Summary and Experience. This only appears to work in Firefox/chrome. In IE8 all td text are displayed as left-justified. No matter what I try it doesn't center it. Any particular reason why this would happen? Thanks. css #viewAll { font-family:"Trebuchet MS", Arial, Helvetica, sans-serif; width:100%; border-collapse:collapse; margin-left:10px; table-layout: fixed; } #viewAll td, #viewAll th { font-size:1.1em; border:1px solid #98bf21; word-wrap:break-word; text-align:center; overflow:hidden; } #viewAll tbody td{ padding:2px; } #viewAll th { font-size:1.1em; padding-top:5px; padding-bottom:4px; background-color:#A7C942; color:#ffffff; } table <?php echo '<table id="viewAll" class="tablesorter">'; echo '<thead>'; echo '<tr align="center">'; echo '<th style="width:70px;">Product</th>'; echo '<th style="width:105px;">Prob</th>'; echo '<th style="width:105px;">I</th>'; echo '<th style="width:60px;">Status</th>'; echo '<th style="width:120px;">Experience</th>'; echo '<th style="width:200px;">Technical Summary</th>'; echo '<th style="width:80px;">Record Created</th>'; echo '<th style="width:80px;">Record Updated</th>'; echo '<th style="width:50px;">Open</th>'; echo '</tr>'; echo '</thead>'; echo '<tbody>'; while ($data=mysqli_fetch_array($result)){ #limiting the summary text displayed in the table $limited_summary = (strlen($data['summary']) > 300) ? substr(($data['summary']),0,300) . '...' : $data['summary']; $limited_exp = (strlen($data['exp']) > 300) ? substr(($data['exp']),0,300) . '...' : $data['exp']; echo '<tr align="center"> <td style="width:70px; text-align:center;">'.$data['product'].'</td>'; //if value is '-' do not display as link if ($data['prob'] != '-'){ echo '<td style="width:105px;">'.$data['prob'].'</a></td>'; } else{ echo '<td style="width:105px; ">'.$data['prob'].'</td>'; } if ($data['i'] != '-'){ echo '<td style="width:105px; ">'.$data['i'].'</a></td>'; } else{ echo '<td style="width:105px; ">'.$data['i'].'</td>'; } echo'<td style="width:40px; " >'.$data['status'].'</td> <td style="width:120px; text-align:left;">'.$limited_cust_exp.'</td> <td style="width:200px; text-align:left;">'.$limited_summary.'</td> <td style="width:80px; ">'.$data['created'].'</td> <td style="width:80px; ">'.$data['updated'].'</td>'; if (isset($_SESSION['username'])){ echo '<td style="width:50px; "> <form action="displayRecord.php" method="get">'.' <input type="hidden" name="id" value="'. $data['id'].'" style="text-decoration: none" /><input type="submit" value="Open" /></form></td>'; }else{ echo '<td style="width:50px; "> <form action="displayRecord.php" method="get">'.' <input type="hidden" name="id" value="'. $data['id'].'" style="text-decoration: none" /><input type="submit" value="View" /></form></td>'; } echo '</tr>'; }#end of while echo '</tbody>'; echo '</table>'; ?>

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 3

    - by Tarun Arora
    Welcome back once again. In Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing the application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies. In Part 2 I discussed the details of web performance and load tests, as well as why it's important to follow a goal-based pattern while performance testing your application. In Part 3 I'll be discussing test result analysis, test result drill-through, test report generation, test run comparison, the ASP.NET Profiler and some closing thoughts.

    Test Results – I see some creepy worms! In Part 2 we put together a web performance test and a load test; let's now run the load test to see how the web site responds to the load simulation. While the load test is running you will be able to see close to real-time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways:

    Monitor a running load test - A condensed set of the performance counter data is maintained in memory. To prevent the results' memory requirements from growing unbounded, up to 200 samples are maintained for each performance counter. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.

    After the load test run is completed - The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. The summary view provides key results in a format that is compact and easy to read. You can also print the load test summary, which is generated after the test has completed or been stopped.

    Analyse the load test results of a previously run load test – We'll see this in the section where I discuss the comparison between two test runs.

    The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, or drill down to the user activity chart, where you can hover over a point to see more details of the test run.

    Generate Report => Test Run Comparisons
    The level of reports you can generate using the Load Test Analyser is astonishing. You have the option to create Excel reports and conduct side-by-side analysis of two test results, or to track trend analysis. The tool also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET Profiler report to conduct further analysis as well. View Data and Diagnostic Attachments opens the Choose Diagnostic Data Adapter Attachment dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test.

    Compare results
    This creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screen shots from the MSDN documentation; I would highly recommend exploring the wealth of knowledge available on MSDN.

    Leaving Thoughts
    While load testing the application with an excessive load for a long duration of time, I managed to bring IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that IIS had run out of threads, as all the threads were busy processing existing requests. One easy way of fixing this is to increase the default number of allocated threads, but that might escalate the problem; the better suggestion is to try and drill down to the actual root cause.

    Whenever the garbage collection runs it stops processing any pages, so all requests that come in during that period are queued up; realistically, though, the garbage collection completes in a fraction of a second. To understand this better, let's look at the .NET heap. It is divided into the large object heap and the small object heap: anything greater than 85 KB in size will be allocated on the large object heap. The large object heap is non-compacting, and remember that large objects are expensive to move around, so if you are allocating something on the large object heap, make sure that you really need it! The small object heap, on the other hand, is divided into generations: objects that are supposed to be short-lived live in Gen-0, and long-living objects eventually move to Gen-2 as garbage collection goes through.

    All objects smaller than 85 KB are first assigned to Gen-0. When Gen-0 fills up and a new object comes in and finds Gen-0 full, the garbage collection process is started: the process checks for all the dead objects and marks them as valid candidates for deletion to free up memory, and promotes all the remaining objects in Gen-0 to Gen-1. So, in the future, whenever you clean up Gen-1 you have to clean up Gen-0 as well. When you fill up Gen-0 again, all of the dead Gen-1 objects are reclaimed, the rest are moved to Gen-2, and the Gen-0 objects are moved to Gen-1 to free up Gen-0; but by this time your garbage collection process has started to take much more time than it usually does. Now, as I mentioned earlier, while garbage collection is being run all page requests that come in during that period are queued up. Does this explain why page requests might be getting queued up? Apart from this, it could also be the case that you are waiting for a long-running database process to complete.

    Let's explore the heap a bit more. What is really a case of crisis is when objects live long enough to make it to Gen-2 and then die; this is definitely a high-cost operation. But sometimes you need objects in memory, for example when you cache data: you hold on to the objects because you need to use them right across the user session, which is acceptable. If you wanted to see what extreme caching can do to your server, write a simple application that chucks a lot of data into the cache and run a load test over it for about 10-15 minutes, forcing a lot of data into memory and causing the heap to run out of memory. If you get to such a state where you start running out of memory, IIS, as a mode of recovery, restarts the worker process. It is a great way to free up all the memory in the heap, but it also clears the cache. The problem with this is that if the customer had 10 items in their shopping basket and that data was stored in the application cache, the basket will now be empty, forcing them either to get frustrated and go to a competitor's website or, if the customer is really patient, to give it another try!
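
    Before moving on to remedies, a minimal console sketch (illustrative only, not from the article) makes the generation story above concrete: it shows an object being promoted through the generations and a large allocation going straight to the large object heap, which reports as generation 2.

        using System;

        class GenerationsDemo
        {
            static void Main()
            {
                var small = new byte[1024];      // small object: starts life in Gen 0
                var large = new byte[100000];    // > 85 KB: allocated on the large object heap

                Console.WriteLine(GC.GetGeneration(small));  // typically 0
                Console.WriteLine(GC.GetGeneration(large));  // 2 - LOH objects report Gen 2

                GC.Collect();                                // survivors of a collection are promoted
                Console.WriteLine(GC.GetGeneration(small));  // typically 1

                GC.Collect();
                Console.WriteLine(GC.GetGeneration(small));  // typically 2 after a second collection
            }
        }
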
    How can you address this memory pressure? Well, there are two ways.

    1. Workaround – An x86 (32-bit) process can only address a maximum of 4 GB of RAM, which means the machine effectively has around 3.4 GB of RAM available; the OS needs about 1.5 GB of RAM to run efficiently, and IIS and the .NET framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because team builds by default build your application in 'Compile as Any CPU' mode, the application is built such that it will run in 32-bit mode on an x86 processor and in 64-bit mode on an x64 processor. The problem with this is that not all applications are really x64 compatible, especially if you are using COM objects or external libraries. So, as a quick win, if you compile your application in x86 mode by changing the 'Compile as Any CPU' selection to 'Compile as x86' in the team build, you will be able to run your application on an x64 machine in 32-bit mode (WOW64 – Windows on Windows), and what that means is that the machine can use 8 GB+ of RAM; if you take away everything else, your application will roughly get a heap of around 4 GB to play with, which is immense. If you need a heap larger than that, you have either built software for NASA or there is something fundamentally wrong with your application.

    2. Solution – Now that you have put a workaround in place, IIS will not restart the worker process as regularly, which means you can take a breather and start working to get to the root cause of the memory leak. But this raises the question: "How do I identify possible memory leaks in my application?" I won't say that there is one single tool that can tell you where the memory leak is, but trust me, performance profiling is a great starting point; it definitely gets you going in the right direction. Let's have a look at how.

    Performance Wizard – Start the Performance Wizard and select Instrumentation; this lets you measure function call counts and timings. Before running the performance session, right-click the performance session settings and choose Properties from the context menu to bring up the Performance Session properties page and, as shown in the screen shot below, check the check boxes in the '.NET memory profiling collection' group, namely 'Collect .NET object allocation information' and 'Also collect the .NET object lifetime information'. Now, if you fire off the profiling session on your pages, you will notice that the results allow you to view 'Object Lifetime', which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, the large object heap, etc. Another great feature of the profiler is that if more than 5% of your application's objects die right after making it to Gen-2, a threshold alert is generated to warn you. Since you also have the option to view the most expensive methods, and by capturing IntelliTrace data you can drill in further, you can narrow down to the line of code that is the root cause of the problem. We have now seen how crucial memory management is, and how easy Visual Studio Ultimate 2010 makes it to identify and reproduce such problems with the best-of-breed tools in the product.

    Caching. One of the main ways to improve performance is caching, which basically means you tell the web server that, instead of going to the database for each request, it should keep the data on the web server and, when the user asks for it, serve it from the web server itself. BUT that can have consequences! 
    Let's look at some code – trust me, caching code is not very intuitive. I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine: the first time I get the data from the database, and the second time the data is served from the cache – a significant performance improvement – EXCEPT when two users try to do the same operation and run into each other. It is easy to handle this by adding a lock, as you can see in the snippet below (a reconstructed sketch appears at the end of this section): as long as a user comes in and finds that the cache is empty, that user takes the lock and starts to build the cache – no more concurrency issues. But let's say you are processing 10 requests per second. By the time I have taken the lock and gone off to get the results from the database, 9 other users have come in and found that the cache key is null, so after I have come out and populated the cache they will still go and fetch the results again. The application will still be faster, because the next set of 10 users, and so on, will continue to get data from the cache. BUT if we add another null check after taking the lock – before the actual call to the database – then the 9 users who follow me will not make the extra trip to the database at all, and that really improves performance. Didn't I say the code isn't very intuitive? Maybe you should leave a comment, because you don't want another developer to come in and think "what a fresher, why is he checking the cache key for null twice?!"

    The downside of caching is that you are storing the data outside the database, and that data could be wrong, because updates applied to the database will leave the data cached at the web server out of sync. So, how do you invalidate the cache? Well, if you only had one way of updating the data – say a single entry point for data updates – you could write some logic to set the cache object to null every time new data is entered. But this approach stops working as soon as you have several ways of feeding data into the system, or your system is scaled out across a farm of web servers. The perfect solution to this is micro caching, which means you cache the query results for a set duration and invalidate the cache after that duration. The advantage is that every time the user queries for that data within the time span for which you have cached the results, no calls are made to the database and the data is served straight from the web server, which makes the response immensely quick. Figuring out the appropriate time span for which you micro cache the query results really depends on the application. Let's say your website gets 10 requests per second: if you retain the cached results for even 1 minute you will see immense performance gains, cutting the database hits for searching by 90% or more.

    Ever wondered why, when you go to e-bookers.com or xpedia.com or yatra.com to book a flight and you click the book button because the fare seems too exciting, you get an error message telling you that the fare is not valid any more? Yes, exactly – that is a cache failure! These travel sites and price-comparison engines are not going to hit the database every time you hit the compare button; instead, the results are served from the cache because the query results are micro cached. It is a perfect trade-off: by micro caching the results the site gains the full performance benefit, but every once in a while it annoys a customer because the fare has expired. 
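    The snippet referred to above appears as an image in the original post, so it is not reproduced here. Purely as an illustration, a minimal C# sketch of the same idea – the double null check around the ASP.NET cache plus a one-minute micro-cache expiry – might look like the following; the cache key format, the one-minute duration and the LoadResultsFromDatabase helper are assumptions made for the example, not the author's actual code.

    using System;
    using System.Collections.Generic;
    using System.Web;
    using System.Web.Caching;

    public static class SearchResultCache
    {
        private static readonly object CacheLock = new object();

        // Illustrative helper: returns cached search results, refreshing them at most
        // once per minute (micro-caching) and building them under a lock so that
        // concurrent requests do not all hit the database at the same time.
        public static IList<string> GetResults(string searchTerm)
        {
            string cacheKey = "search:" + searchTerm;            // assumed key format
            var results = HttpRuntime.Cache[cacheKey] as IList<string>;
            if (results != null)
            {
                return results;                                  // served from the web server, no DB call
            }

            lock (CacheLock)
            {
                // Second null check: requests queued behind the first one find the
                // cache already populated and skip the database trip entirely.
                results = HttpRuntime.Cache[cacheKey] as IList<string>;
                if (results == null)
                {
                    results = LoadResultsFromDatabase(searchTerm);
                    HttpRuntime.Cache.Insert(
                        cacheKey,
                        results,
                        null,                                    // no cache dependency
                        DateTime.UtcNow.AddMinutes(1),           // micro-cache: expire after 1 minute
                        Cache.NoSlidingExpiration);
                }
            }
            return results;
        }

        private static IList<string> LoadResultsFromDatabase(string searchTerm)
        {
            // Placeholder for the real data access call (assumed, not from the post).
            return new List<string> { "result for " + searchTerm };
        }
    }

    The second null check inside the lock is the non-intuitive part the post warns about: it is what stops the requests queued behind the first one from hitting the database again.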
    But the trade-off still works in favour of these travel sites: they are able to process 30+ page requests per second, which means they can cater for the site traffic while maybe losing one customer every once in a while to a competitor who is also using a similar caching technique – and what are the odds that the user will not come back to their site sooner or later?

    Recap

    Resources. Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug web performance tests. Some community test extensions and plug-ins available on CodePlex might also be of interest to you.

    The Road Ahead. Thank you for taking the time out to read this blog post; you may also want to read Part I and Part II if you haven't done so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions, feedback, suggestions, etc. – please leave a comment. Next up is 'Load Testing in the Cloud', where I'll be exploring the possibilities of running the test controller and agents in the cloud. See you on the other side! Thank you!

    Read the article

  • Insert Record by Drag & Drop from ADF Tree to ADF Tree Table

    - by arul.wilson(at)oracle.com
    If you want to create a record based on values dragged from an ADF Tree and dropped on an ADF Tree Table, then here you go.

    Use Case Description
    A user drags a tree node from an ADF Tree and drops it on an ADF Tree Table node. A new row gets added to the Tree Table based on the source tree node; subsequently a record gets added to the database table on which the Tree Table is based. The following steps show how to achieve this using ADF BC.

    Run DragDropSchema.sql to create the required tables.
    Create Business Components from the tables created above (PRODUCTS, COMPONENTS, SUB_COMPONENTS, USERS, USER_COMPONENTS).
    Add a custom method to the Application Module Impl; this method will be used to insert the record from the view layer.

      public String createUserComponents(String p_bugdbId, String p_productId, String p_componentId, String p_subComponentId) {
        Row newUserComponentsRow = this.getUserComponentsView1().createRow();
        try {
          newUserComponentsRow.setAttribute("Bugdbid", p_bugdbId);
          newUserComponentsRow.setAttribute("ProductId", new oracle.jbo.domain.Number(p_productId));
          newUserComponentsRow.setAttribute("Component1", p_componentId);
          newUserComponentsRow.setAttribute("SubComponent", p_subComponentId);
        } catch (Exception e) {
          e.printStackTrace();
          return "Failure";
        }
        return "Success";
      }

    Expose this method to the client interface.
    To display the root node we need a custom VO, which can be created using the query below:

      SELECT Users.ACTIVE, Users.BUGDB_ID, Users.EMAIL, Users.FIRSTNAME, Users.GLOBAL_ID, Users.LASTNAME, Users.MANAGER_ID, Users.MANAGER_PRIVILEGE
      FROM USERS Users
      WHERE Users.MANAGER_ID is NULL

    Create a view link between the UsersView and UsersRootNodeView VOs.
    Drop ProductsView from the Data Control onto the jspx page as an ADF Tree.
    Add tree level rules based on ComponentsView and SubComponentsView.
    Drop UsersRootNodeView as an ADF Tree Table.
    Add tree level rules based on UserComponentsView and UsersView.
    Add a DragSource to the ADF Tree and a CollectionDropTarget to the ADF Tree Table respectively.
    Bind the CollectionDropTarget's DropTarget to a backing bean and implement a method with the signature DnDAction (DropEvent). This method gets invoked when the Tree Table receives a drop action; here the details required for creating the new record are captured from the drag source and passed to the 'createUserComponents' method. 
public DnDAction onTreeDrop(DropEvent dropEvent) {      String newBugdbId = "";      String msgtxt="";            try {          // Getting the target node bugdb id          Object serverRowKey = dropEvent.getDropSite();          if (serverRowKey != null) {                  //Code for Tree Table as target              String dropcomponent = dropEvent.getDropComponent().toString();              dropcomponent = (String)dropcomponent.subSequence(0, dropcomponent.indexOf("["));              if (dropcomponent.equals("RichTreeTable")){                RichTreeTable richTreeTable = (RichTreeTable)dropEvent.getDropComponent();                richTreeTable.setRowKey(serverRowKey);                int rowIndexTreeTable = richTreeTable.getRowIndex();                //Drop Target Logic                if (((JUCtrlHierNodeBinding)richTreeTable.getRowData(rowIndexTreeTable)).getAttributeValue()==null) {                  //Get Parent                  newBugdbId = (String)((JUCtrlHierNodeBinding)richTreeTable.getRowData(rowIndexTreeTable)).getParent().getAttributeValue();                } else {                  if (isNum(((JUCtrlHierNodeBinding)richTreeTable.getRowData(rowIndexTreeTable)).getAttributeValue().toString())) {                    //Get Parent's parent                              newBugdbId = (String)((JUCtrlHierNodeBinding)richTreeTable.getRowData(rowIndexTreeTable)).getParent().getParent().getAttributeValue();                  } else{                      //Dropped on USER                                          newBugdbId = (String)((JUCtrlHierNodeBinding)richTreeTable.getRowData(rowIndexTreeTable)).getAttributeValue();                  }                  }              }           }                     DataFlavor<RowKeySet> df = DataFlavor.getDataFlavor(RowKeySet.class);          RowKeySet droppedValue = dropEvent.getTransferable().getData(df);            Object[] keys = droppedValue.toArray();          Key componentKey = null;          Key subComponentKey = null;           // binding for createUserComponents method defined in AppModuleImpl class  to insert record in database.                      operationBinding = bindings.getOperationBinding("createUserComponents");            // get the Product, Component, Subcomponent details and insert to UserComponents table.          // loop through the keys if more than one comp/subcomponent is select.                   for (int i = 0; i < keys.length; i++) {                  System.out.println("in for :"+i);              List list = (List)keys[i];                  System.out.println("list "+i+" : "+list);              System.out.println("list size "+list.size());              if (list.size() == 1) {                                // we cannot drag and drop  the highest node !                                msgtxt="You cannot drop Products, please drop Component or SubComponent from the Tree.";                  System.out.println(msgtxt);                                this.showInfoMessage(msgtxt);              } else {                  if (list.size() == 2) {                    // were doing the first branch, in this case all components.                    
componentKey = (Key)list.get(1);                    Object[] droppedProdCompValues = componentKey.getAttributeValues();                    operationBinding.getParamsMap().put("p_bugdbId",newBugdbId);                    operationBinding.getParamsMap().put("p_productId",droppedProdCompValues[0]);                    operationBinding.getParamsMap().put("p_componentId",droppedProdCompValues[1]);                    operationBinding.getParamsMap().put("p_subComponentId","ALL");                    Object result = operationBinding.execute();              } else {                    subComponentKey = (Key)list.get(2);                    Object[] droppedProdCompSubCompValues = subComponentKey.getAttributeValues();                    operationBinding.getParamsMap().put("p_bugdbId",newBugdbId);                    operationBinding.getParamsMap().put("p_productId",droppedProdCompSubCompValues[0]);                    operationBinding.getParamsMap().put("p_componentId",droppedProdCompSubCompValues[1]);                    operationBinding.getParamsMap().put("p_subComponentId",droppedProdCompSubCompValues[2]);                    Object result = operationBinding.execute();                  }                   }            }                        /* this.getCil1().setDisabled(false);            this.getCil1().setPartialSubmit(true); */                      return DnDAction.MOVE;        } catch (Exception ex) {          System.out.println("drop failed with : " + ex.getMessage());          ex.printStackTrace();                  /* this.getCil1().setDisabled(true); */          return DnDAction.NONE;          }    } Run jspx page and drop a Component or Subcomponent from Products Tree to UserComponents Tree Table.

    Read the article

  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
    This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team. Recently, I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights of both events.

    CLSF - 17th October

    XFS update (led by Jeff Liu)
    XFS keeps making rapid progress with a lot of changes, especially focused on infrastructure/performance improvements as well as new feature development. This is reflected in a sample statistic comparing XFS, Ext4+JBD2 and Btrfs via: # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs|fs/ext4+fs/jbd2|fs/btrfs XFS: 141 files changed, 27598 insertions(+), 19113 deletions(-) Ext4+JBD2: 39 files changed, 10487 insertions(+), 5454 deletions(-) Btrfs: 70 files changed, 19875 insertions(+), 8130 deletions(-)

    What made up those changes in XFS?
    Self-describing metadata (CRC32c). This is a new feature and it contributed about 70% of the code changes; it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for the v5 superblock.
    Transaction log space reservation improvements. With this change, we can calculate the log space reservation at mount time rather than at runtime to reduce the CPU overhead.
    User namespace support, so both XFS and USERNS can be enabled in the kernel configuration beginning from Linux 3.10. Thanks to Dwight Engen for his efforts on this.
    Split project/group quota inodes. Originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now it works, but only for the v5 superblock (i.e. with CRC enabled).
    CONFIG_XFS_WARN, a new lightweight runtime debug option which can be deployed in production environments.
    Readahead during log object recovery; this change can speed up log replay significantly.
    Speculative preallocation inode tracking, clearing and throttling. The main purpose is to deal with inodes that have post-EOF space due to speculative preallocation and to support improved quota management, freeing up a significant amount of unwritten space when at or near EDQUOT. It supports background scanning, which occurs on a longish interval (5 minutes by default, tunable), and on-demand scanning/trimming via ioctl(2).

    Bitter arguments ensued from this session, especially around the comparison between Ext4 and Btrfs in different areas; I had to spend a whole morning of the first day answering those questions. We basically agreed that XFS is the best choice on Linux nowadays because:
    Stability: XFS has a good stability record over the past 10 years. Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors than in other filesystems over the past year or more. I owe this to the XFS upstream code reviewers, who always perform serious code review as well as testing.
    Good performance for large and small files; "XFS does not work well for small files" has been an outdated story for years.
    Best choice (maybe) for distributed PB-scale filesystems; e.g. Ceph recommends deploying the OSD daemon on XFS because Ext4 has a limited xattr size.
    Best choice for large storage (>16 TB). Ext4 does not support a single file larger than around 15.95 TB.
    Scalability: any objection to XFS being the best on this point? :)
    XFS deals with transaction concurrency better than Ext4. Why? The maximum size of the log in XFS is 2038 MB compared to 128 MB in Ext4.
    Misc: Ext4 is widely used and has proved fast and stable under various loads and scenarios; XFS just needs more customers, and Btrfs is still on the road to maturity. 
    Ceph Introduction (led by Li Wang)
    This is a hot topic. Li gave us a nice introduction to the design as well as their current work. The Ceph client has actually been included in the Linux kernel since 2.6.34 and supported by OpenStack since Folsom, but it seems that it has not yet been widely deployed in production environments. Their major work focuses on inline data support in order to separate metadata and data storage and reduce file access time: a file access needs two round trips, fetching the metadata from the MDS and then getting the data from the OSD, so small-file access is limited by network latency. The solution is, for small files, to store the data with the metadata so that when accessing a small file the metadata server can push both metadata and data to the client at the same time. In this way, they can reduce the overhead of calculating the data offset and save the communication with the OSD. For this feature they have only run some small-scale testing, but they really saw noticeable improvements. Test environment: Intel 2 CPU 12 Core, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with 200GB SATA disk, 15 OSD, 1 MDS, 1 MON. The sequential read performance for 1K-size files improved by about 50%. I asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable and can be deployed in a production environment for large-scale, PB-level storage, but they could not give a positive answer; it looks like Ceph is not even rolled out across Dreamhost (subject to confirmation). According to Li, they have only deployed Ceph for small-scale storage (32 nodes), although they'd like to try 6000 nodes in the future.

    Improve Linux swap for flash storage (led by Shaohua Li)
    Because of its high density, low power and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer to this is using SSD as swap. But Linux swap is designed for slow hard disk storage, so there are a lot of challenges to using SSD efficiently for swap.

    SWAPOUT
    swap_map scan: swap_map is the in-memory data structure used to track swap disk usage, but it uses a slow linear scan. It becomes a bottleneck when finding many adjacent pages on SSD. Shaohua Li has changed it to a cluster (128K) list, resulting in an O(1) algorithm. However, this approach needs strict cluster alignment and is only enabled for SSDs.
    IO pattern: in most cases the swap IO is interleaved, because there are multiple reclaimers or because a free cluster is shared by all reclaimers. Even though the block layer can merge interleaved IO to some extent, we cannot count on it completely. Hence a per-CPU cluster was added on top of the previous change; it helps each reclaimer do sequential IO, and the block layer can merge the IO more easily.
    TLB flush: if we are reclaiming one active page, we should first move the page from the active LRU list to the inactive LRU list, and then reclaim the page from the inactive LRU to swap it out. During the process we need to clear the PTE twice: first the 'A' (ACCESSED) bit, then the 'P' (PRESENT) bit. Processors need to send lots of IPIs, which makes the TLB flush really expensive. Some work has been done to improve this, including reworking smp_call_function_many() and removing the first TLB flush on x86, but there are still some arguments here and only part of the work has been pushed to mainline.

    SWAPIN
    A page fault does iodepth=1 synchronous IO, but it is a bit wasteful to issue only a page-sized IO. The obvious solution is swap readahead. 
    But the current in-kernel swap readahead is arbitrary (always 8 pages), and it does not perform well for either random or sequential access workloads. Shaohua introduced a new flag for madvise(MADV_WILLNEED) to do swap prefetch, so the change lives in the userspace API and leaves the in-kernel readahead unchanged (though I think some improvement could also be made there).
    SWAP discard: as we know, discard is important for SSD write throughput, but the current swap discard implementation is synchronous. He changed it to asynchronous discard, which allows discard and write to run at the same time. Meanwhile, the unit of discard was also optimized to the cluster.
    Misc: lock contention. With many concurrent swapouts and swapins, contention on locks such as anon_vma or swap_lock is high, so he changed swap_lock to a per-swap-device lock. There is still some lock contention on very high-speed SSDs, though, because of the swap cache address_space lock.

    Zproject (led by Bob Liu)
    Bob gave us a very nice introduction to the current state of memory compression. There are currently three projects (zswap/zram/zcache) which all aim at smoothing out swap IO storms and improving performance, but they each have their own pros and cons.
    ZSWAP: implemented on top of the frontswap API, it uses a dynamic allocator named zbud to allocate free pages. Zbud means pairs of zpages are "buddied": it can store at most two compressed pages in one page frame, so the maximum compression ratio is 50%. Each page frame is LRU-linked and can be shrunk under memory pressure. If the compressed memory pool reaches its limit, shrinking or reclaim happens: a page frame is decompressed into two newly allocated pages which are then written to the real swap device, but this can fail when allocating the two pages.
    ZRAM: acts as a compressed ramdisk used as a swap device, and it uses zsmalloc as its allocator, which has high density but may have fragmentation issues. Besides, page reclaim is hard, since it needs more pages to uncompress and free just one page. ZRAM is preferred by embedded systems, which may not have any real swap device. Both ZRAM and ZSWAP are currently in the drivers/staging tree, and in the mm community there are discussions about merging ZRAM into ZSWAP or vice versa, but no agreement yet.
    ZCACHE: handles file page compression, but it was removed from staging recently.

    From industry (led by Tang Jie, LSI)
    An LSI engineer introduced several new products to us. The first is RAID5/6 cards that use full-stripe writes to improve performance. The second is the SandForce flash controller, which can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes. It is called DuraWrite, and the typical WA is 0.5. What's more, if its Dynamic Logical Capacity function module is enabled, the controller can do data compression that is transparent to the upper layers. LSI testing shows that with this virtual capacity enabled a 1x TB drive can support up to 2x TB of capacity, but the application must monitor free flash space to maintain optimal performance and guard against free flash space exhaustion. He said the most useful application for this is databases. Another thing I think is worth mentioning is the NV-DRAM memory in NMR/Raptor, which is directly exposed to the host system. Applications can access the NV-DRAM directly via a memory address, using the standard system call mmap(). He said this is already very useful for database logging. 
    This kind of NVM product has begun to appear in recent years, and it is said that Samsung is building a research centre in China for related products. IMHO, NVM will have an effect on the current OS layers, especially on file systems; e.g. journaling may need to be redesigned to fully utilize this non-volatile memory.

    OCFS2 (led by Canquan Shen)
    Without a doubt, Huawei has been the biggest contributor to OCFS2 in the past two years. They have posted 46 upstream patches, and 39 patches have been merged. Their current project is based on 32/64-node clusters, but they have also tried 128 nodes at the experimental stage. The major work in progress is support for ATS (atomic test and set), which can work together with the DLM. It looks like this idea was inspired by the VMware VMFS locking, i.e. http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html

    CLK - 18th October 2013

    Improving Linux Development with Better Tools (Andi Kleen)
    This talk focused on how to find and solve bugs as Linux's complexity keeps growing. Generally, we can do this with the following kinds of tools:
    Static code checkers, e.g. sparse, smatch, coccinelle, the clang checker, checkpatch, gcc -W/LTO, stanse. These can help check a lot of things – simple mistakes as well as complex problems – but the challenges are: some are very slow, there are false positives, and it may need a concentrated effort to get the false positives down. In particular, no static checker I found can follow indirect calls ("OO in C", common in the kernel): struct foo_ops { int (*do_foo)(struct foo *obj); }; foo->do_foo(foo);
    Dynamic runtime checkers, e.g. thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite; then someone could run all the dynamic checkers.
    Fuzzers/test suites, e.g. Trinity is a great tool; it finds many bugs, but needs a manual model for each syscall. Modern fuzzers use automatic feedback, but not for the kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf
    Debuggers/tracers to understand code, e.g. ftrace; they can dump on events/oops/custom triggers, but in many cases there is still too much overhead to run them all the time during debugging.
    Tools to read/understand source, e.g. grep/cscope work great for many cases, but do not understand indirect pointers (the OO-in-C model used in the kernel). Give us all "do_foo" instances: struct foo_ops { int (*do_foo)(struct foo *obj); } = { .do_foo = my_foo }; foo->do_foo(foo); It would be great to have a cscope-like tool that understands this based on types/initializers.

    XFS: The High Performance Enterprise File System (Jeff Liu) [slides]
    I gave a talk introducing the disk layout and unique features, as well as the recent changes. The slides include some charts comparing the performance of XFS, Btrfs and Ext4 for small files. About a dozen attendees raised their hands when I asked who had experience with XFS. I remember that when I asked the same question at LinuxCon Japan, only 3 people raised their hands, but they were Chris Mason, Ric Wheeler, and another attendee. The questions were mainly focused on stability and comparisons with other file systems.

    Linux Containers (Feng Gao)
    The speaker introduced the purpose of the various namespaces – mount/UTS/IPC/Network/PID/User – as well as the system API/ABI. For the userspace tools, he mainly focused on libvirt LXC rather than the LXC tools themselves. 
    Libvirt LXC is another userspace container management tool, implemented as one type of libvirt driver; it can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups. In this talk, Feng also mentioned two other possible new namespaces for the future: the first is audit, though it is not yet clear whether it should be tied to the user namespace or not; the other is syslog, but the question is, do we really need it?

    In-memory Compression (Bob Liu)
    The same as at CLSF – the nice introduction I have already covered above.

    Misc
    There were some other talks related to ACPI-based memory hotplug, smart wake-affinity in the scheduler, etc., but my head is not big enough to record all of them. -- Jeff Liu

    Read the article

  • Exalogic 2.0.1 Tea Break Snippets - Scripting Asset Creation

    - by The Old Toxophilist
    So far in this series we have looked at creating assets within the EMOC BUI, but the Exalogic 2.0.1 installation also provides the IaaS CLI as an alternative to most of the common functionality available within EMOC. The IaaS CLI provides access to the functions that are available to a user logged into the BUI with the CloudUser role. As such, not all functionality is available from the command line interface; having said that, the IaaS CLI provides all the functionality required to create the assets within a specific Account (Tenure). Because these actions are common and repeatable, I decided to wrap the functionality within a simple script that takes a simple input file and creates the assets. Following the script through will show us the steps needed to create the various assets within an Account, and hence I will work through the various functions within the script below, describing the steps. You will note from the various steps within the script that it is designed to pause between actions, allowing the preceding action to complete. The reason for this is that we could otherwise swamp EMOC with a series of actions and end up in a situation where we are trying to attach a Volume before the creation of the vServer and Volume has completed.

    processAssets()
    This function simply reads through the passed input file, identifying which assets need to be created. An example of the input file can be found below; it can be seen that the input file can be used to create assets in multiple Accounts during a single run. The order of the entries defines the functions that need to be actioned, as follows:

    Input Command: Production:Connect
      IaaS Actions: akm-describe-accounts, akm-create-access-key, iaas-create-key-pair, iaas-describe-vnets, iaas-describe-vserver-types, iaas-describe-server-templates
      Parameters: Username, Password
    Input Command: Production:Create|vServer
      IaaS Actions: iaas-run-vserver
      Parameters: vServer Name, vServer Type Name, Template Name, Comma separated list of network names which the vServer will connect to, Comma separated list of IPs for the specified networks
    Input Command: Production:Create|Volume
      IaaS Actions: iaas-create-volume
      Parameters: Volume Name, Volume Size
    Input Command: Production:Attach|Volume
      IaaS Actions: iaas-attach-volumes-to-vserver
      Parameters: vServer Name, Comma separated list of volume names
    Input Command: Production:Disconnect
      IaaS Actions: iaas-delete-key-pair, akm-delete-access-key
      Parameters: None

    connectToAccount()
    It can be seen from the connectToAccount function that before we can execute any asset creation we must first connect to the appropriate Account. To do this we need the ID associated with the Account, which can be found by executing the akm-describe-accounts CLI command; this returns a list of all Accounts and their IDs. Once we have the Account ID, we generate an access key using the akm-create-access-key command and then a key pair with the iaas-create-key-pair command. At this point we have all the information we need to access the specific named Account.

    createVServer()
    This function simply retrieves the information from the input line and creates the vServer using the iaas-run-vserver CLI command. Reading the function you will notice that it takes the various input names for vServer Type, Template and Networks and converts them into the appropriate IDs; the IaaS CLI will not work directly with component names, and hence all IDs need to be looked up.

    createVolume()
    A function that simply takes the Volume name and size and then executes the iaas-create-volume command to create the volume. 
attachVolume() Takes the name of the Volume, which we may have just created, and a Volume then identifies the appropriate IDs before assigning the Volume to the vServer with the iaas-attach-volumes-to-vserver. disconnectFromAccount() Once we have finished connecting to the Account we simply remove the key pair with iaas-delete-key-pair and the access key with akm-delete-access-key although it may be useful to keep this if ssh is required and you do not subsequently modify the sshd information to allow unsecured access. By default the key is required for ssh access when a vServer is created from the command-line. CreateAssets.sh 1 export OCCLI=/opt/sun/occli/bin 2 export IAAS_HOME=/opt/oracle/iaas/cli 3 export JAVA_HOME=/usr/java/latest 4 export IAAS_BASE_URL=https://127.0.0.1 5 export IAAS_ACCESS_KEY_FILE=iaas_access.key 6 export KEY_FILE=iaas_access.pub 7 #CloudUser used to create vServers & Volumes 8 export IAAS_USER=exaprod 9 export IAAS_PASSWORD_FILE=root.pwd 10 export KEY_NAME=cli.recreate 11 export INPUT_FILE=CreateAssets.in 12 13 export ACCOUNTS_FILE=accounts.out 14 export VOLUMES_FILE=volumes.out 15 export DISTGRPS_FILE=distgrp.out 16 export VNETS_FILE=vnets.out 17 export VSERVER_TYPES_FILE=vstype.out 18 export VSERVER_FILE=vserver.out 19 export VSERVER_TEMPLATES=template.out 20 export KEY_PAIRS=keypairs.out 21 22 PROCESSING_ACCOUNT="" 23 24 function cleanTempFiles() { 25 rm -f $ACCOUNTS_FILE $VOLUMES_FILE $DISTGRPS_FILE $VNETS_FILE $VSERVER_TYPES_FILE $VSERVER_FILE $VSERVER_TEMPLATES $KEY_PAIRS $IAAS_PASSWORD_FILE $KEY_FILE $IAAS_ACCESS_KEY_FILE 26 } 27 28 function connectToAccount() { 29 if [[ "$ACCOUNT" != "$PROCESSING_ACCOUNT" ]] 30 then 31 if [[ "" != "$PROCESSING_ACCOUNT" ]] 32 then 33 $IAAS_HOME/bin/iaas-delete-key-pair --key-name $KEY_NAME --access-key-file $IAAS_ACCESS_KEY_FILE 34 $IAAS_HOME/bin/akm-delete-access-key $AK 35 fi 36 PROCESSING_ACCOUNT=$ACCOUNT 37 IAAS_USER=$ACCOUNT_USER 38 echo "$ACCOUNT_PASSWORD" > $IAAS_PASSWORD_FILE 39 $IAAS_HOME/bin/akm-describe-accounts --sep "|" > $ACCOUNTS_FILE 40 while read line 41 do 42 ACCOUNT_ID=${line%%|*} 43 line=${line#*|} 44 ACCOUNT_NAME=${line%%|*} 45 # echo "Id = $ACCOUNT_ID" 46 # echo "Name = $ACCOUNT_NAME" 47 if [[ "$ACCOUNT_NAME" == "$ACCOUNT" ]] 48 then 49 echo "Found Production Account $line" 50 AK=`$IAAS_HOME/bin/akm-create-access-key --account $ACCOUNT_ID --access-key-file $IAAS_ACCESS_KEY_FILE` 51 KEYPAIR=`$IAAS_HOME/bin/iaas-create-key-pair --key-name $KEY_NAME --key-file $KEY_FILE` 52 echo "Connected to $ACCOUNT_NAME" 53 break 54 fi 55 done < $ACCOUNTS_FILE 56 fi 57 } 58 59 function disconnectFromAccount() { 60 $IAAS_HOME/bin/iaas-delete-key-pair --key-name $KEY_NAME --access-key-file $IAAS_ACCESS_KEY_FILE 61 $IAAS_HOME/bin/akm-delete-access-key $AK 62 PROCESSING_ACCOUNT="" 63 } 64 65 function getNetworks() { 66 $IAAS_HOME/bin/iaas-describe-vnets --sep "|" > $VNETS_FILE 67 } 68 69 function getVSTypes() { 70 $IAAS_HOME/bin/iaas-describe-vserver-types --sep "|" > $VSERVER_TYPES_FILE 71 } 72 73 function getTemplates() { 74 $IAAS_HOME/bin/iaas-describe-server-templates --sep "|" > $VSERVER_TEMPLATES 75 } 76 77 function getVolumes() { 78 $IAAS_HOME/bin/iaas-describe-volumes --sep "|" > $VOLUMES_FILE 79 } 80 81 function getVServers() { 82 $IAAS_HOME/bin/iaas-describe-vservers --sep "|" > $VSERVER_FILE 83 } 84 85 function getNetworkId() { 86 while read line 87 do 88 NETWORK_ID=${line%%|*} 89 line=${line#*|} 90 NAME=${line%%|*} 91 if [[ "$NAME" == "$NETWORK_NAME" ]] 92 then 93 break 94 fi 95 done < $VNETS_FILE 96 } 97 98 
function getVSTypeId() { 99 while read line 100 do 101 VSTYPE_ID=${line%%|*} 102 line=${line#*|} 103 NAME=${line%%|*} 104 if [[ "$VSTYPE_NAME" == "$NAME" ]] 105 then 106 break 107 fi 108 done < $VSERVER_TYPES_FILE 109 } 110 111 function getTemplateId() { 112 while read line 113 do 114 TEMPLATE_ID=${line%%|*} 115 line=${line#*|} 116 NAME=${line%%|*} 117 if [[ "$TEMPLATE_NAME" == "$NAME" ]] 118 then 119 break 120 fi 121 done < $VSERVER_TEMPLATES 122 } 123 124 function getVolumeId() { 125 while read line 126 do 127 export VOLUME_ID=${line%%|*} 128 line=${line#*|} 129 NAME=${line%%|*} 130 if [[ "$NAME" == "$VOLUME_NAME" ]] 131 then 132 break; 133 fi 134 done < $VOLUMES_FILE 135 } 136 137 function getVServerId() { 138 while read line 139 do 140 VSERVER_ID=${line%%|*} 141 line=${line#*|} 142 NAME=${line%%|*} 143 if [[ "$VSERVER_NAME" == "$NAME" ]] 144 then 145 break; 146 fi 147 done < $VSERVER_FILE 148 } 149 150 function getVServerState() { 151 getVServers 152 while read line 153 do 154 VSERVER_ID=${line%%|*} 155 line=${line#*|} 156 NAME=${line%%|*} 157 line=${line#*|} 158 line=${line#*|} 159 VSERVER_STATE=${line%%|*} 160 if [[ "$VSERVER_NAME" == "$NAME" ]] 161 then 162 break; 163 fi 164 done < $VSERVER_FILE 165 } 166 167 function pauseUntilVServerRunning() { 168 # Wait until the Server is running before creating the next 169 getVServerState 170 while [[ "$VSERVER_STATE" != "RUNNING" ]] 171 do 172 getVServerState 173 echo "$NAME $VSERVER_STATE" 174 if [[ "$VSERVER_STATE" != "RUNNING" ]] 175 then 176 echo "Sleeping......." 177 sleep 60 178 fi 179 if [[ "$VSERVER_STATE" == "FAILED" ]] 180 then 181 echo "Will Delete $NAME in 5 Minutes....." 182 sleep 300 183 deleteVServer 184 echo "Deleted $NAME waiting 5 Minutes....." 185 sleep 300 186 break 187 fi 188 done 189 # Lets pause for a minute or two 190 echo "Just Chilling......" 191 sleep 60 192 echo "Ahhhhh we're getting there......." 193 sleep 60 194 echo "I'm almost at one with the universe......." 195 sleep 60 196 echo "Bong Reality Check !" 
197 } 198 199 function deleteVServer() { 200 $IAAS_HOME/bin/iaas-terminate-vservers --force --vserver-ids $VSERVER_ID 201 } 202 203 function createVServer() { 204 VSERVER_NAME=${ASSET_DETAILS%%|*} 205 ASSET_DETAILS=${ASSET_DETAILS#*|} 206 VSTYPE_NAME=${ASSET_DETAILS%%|*} 207 ASSET_DETAILS=${ASSET_DETAILS#*|} 208 TEMPLATE_NAME=${ASSET_DETAILS%%|*} 209 ASSET_DETAILS=${ASSET_DETAILS#*|} 210 NETWORK_NAMES=${ASSET_DETAILS%%|*} 211 ASSET_DETAILS=${ASSET_DETAILS#*|} 212 IP_ADDRESSES=${ASSET_DETAILS%%|*} 213 # Get Ids associated with names 214 getVSTypeId 215 getTemplateId 216 # Convert Network Names to Ids 217 NETWORK_IDS="" 218 while true 219 do 220 NETWORK_NAME=${NETWORK_NAMES%%,*} 221 NETWORK_NAMES=${NETWORK_NAMES#*,} 222 getNetworkId 223 if [[ "$NETWORK_IDS" != "" ]] 224 then 225 NETWORK_IDS="$NETWORK_IDS,$NETWORK_ID" 226 else 227 NETWORK_IDS=$NETWORK_ID 228 fi 229 if [[ "$NETWORK_NAME" == "$NETWORK_NAMES" ]] 230 then 231 break 232 fi 233 done 234 # Create vServer 235 echo "About to execute : $IAAS_HOME/bin/iaas-run-vserver --name $VSERVER_NAME --key-name $KEY_NAME --vserver-type $VSTYPE_ID --server-template-id $TEMPLATE_ID --vnets $NETWORK_IDS --ip-addresses $IP_ADDRESSES" 236 $IAAS_HOME/bin/iaas-run-vserver --name $VSERVER_NAME --key-name $KEY_NAME --vserver-type $VSTYPE_ID --server-template-id $TEMPLATE_ID --vnets $NETWORK_IDS --ip-addresses $IP_ADDRESSES 237 pauseUntilVServerRunning 238 } 239 240 function createVolume() { 241 VOLUME_NAME=${ASSET_DETAILS%%|*} 242 ASSET_DETAILS=${ASSET_DETAILS#*|} 243 VOLUME_SIZE=${ASSET_DETAILS%%|*} 244 # Create Volume 245 echo "About to execute : $IAAS_HOME/bin/iaas-create-volume --name $VOLUME_NAME --size $VOLUME_SIZE" 246 $IAAS_HOME/bin/iaas-create-volume --name $VOLUME_NAME --size $VOLUME_SIZE 247 # Lets pause 248 echo "Just Waiting 30 Seconds......" 249 sleep 30 250 } 251 252 function attachVolume() { 253 VSERVER_NAME=${ASSET_DETAILS%%|*} 254 ASSET_DETAILS=${ASSET_DETAILS#*|} 255 VOLUME_NAMES=${ASSET_DETAILS%%|*} 256 # Get vServer Id 257 getVServerId 258 # Convert Volume Names to Ids 259 VOLUME_IDS="" 260 while true 261 do 262 VOLUME_NAME=${VOLUME_NAMES%%,*} 263 VOLUME_NAMES=${VOLUME_NAMES#*,} 264 getVolumeId 265 if [[ "$VOLUME_IDS" != "" ]] 266 then 267 VOLUME_IDS="$VOLUME_IDS,$VOLUME_ID" 268 else 269 VOLUME_IDS=$VOLUME_ID 270 fi 271 if [[ "$VOLUME_NAME" == "$VOLUME_NAMES" ]] 272 then 273 break 274 fi 275 done 276 # Attach Volumes 277 echo "About to execute : $IAAS_HOME/bin/iaas-attach-volumes-to-vserver --vserver-id $VSERVER_ID --volume-ids $VOLUME_IDS" 278 $IAAS_HOME/bin/iaas-attach-volumes-to-vserver --vserver-id $VSERVER_ID --volume-ids $VOLUME_IDS 279 # Lets pause 280 echo "Just Waiting 30 Seconds......" 
281 sleep 30 282 } 283 284 function processAssets() { 285 while read line 286 do 287 ACCOUNT=${line%%:*} 288 line=${line#*:} 289 ACTION=${line%%|*} 290 line=${line#*|} 291 if [[ "$ACTION" == "Connect" ]] 292 then 293 ACCOUNT_USER=${line%%|*} 294 line=${line#*|} 295 ACCOUNT_PASSWORD=${line%%|*} 296 connectToAccount 297 298 ## Account Info 299 getNetworks 300 getVSTypes 301 getTemplates 302 303 continue 304 fi 305 if [[ "$ACTION" == "Create" ]] 306 then 307 ASSET=${line%%|*} 308 line=${line#*|} 309 ASSET_DETAILS=$line 310 if [[ "$ASSET" == "vServer" ]] 311 then 312 createVServer 313 314 continue 315 fi 316 if [[ "$ASSET" == "Volume" ]] 317 then 318 createVolume 319 320 continue 321 fi 322 fi 323 if [[ "$ACTION" == "Attach" ]] 324 then 325 ASSET=${line%%|*} 326 line=${line#*|} 327 ASSET_DETAILS=$line 328 if [[ "$ASSET" == "Volume" ]] 329 then 330 getVolumes 331 getVServers 332 attachVolume 333 334 continue 335 fi 336 fi 337 if [[ "$ACTION" == "Connect" ]] 338 then 339 disconnectFromAccount 340 341 continue 342 fi 343 done < $INPUT_FILE 344 } 345 346 # Should Parameterise this 347 348 while [ $# -gt 0 ] 349 do 350 case "$1" in 351 -a) INPUT_FILE="$2"; shift;; 352 *) echo ""; echo >&2 \ 353 "usage: $0 [-a <Asset Definition File>] (Default is CreateAssets.in)" 354 echo""; exit 1;; 355 *) break;; 356 esac 357 shift 358 done 359 360 361 362 363 processAssets 364 365 echo "**************************************" 366 echo "***** Finished Creating Assets *****" 367 echo "**************************************" 368 CreateAssetsProd.in Production:Connect|exaprod|welcome1 Production:Create|vServer|VS006|VSTProduction|BaseOEL56ServerTemplate|EoIB-otd-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.223.13,192.168.0.13,10.117.81.67,172.17.0.14 Production:Create|vServer|VS007|VSTProduction|BaseOEL56ServerTemplate|EoIB-otd-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.223.14,192.168.0.14,10.117.81.68,172.17.0.15 Production:Create|vServer|VS008|VSTProduction|BaseOEL56ServerTemplate|EoIB-wls-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.225.61,192.168.0.61,10.117.81.61,172.17.0.16 Production:Create|vServer|VS009|VSTProduction|BaseOEL56ServerTemplate|EoIB-wls-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.225.62,192.168.0.62,10.117.81.62,172.17.0.17 Production:Create|vServer|VS000|VSTProduction|BaseOEL56ServerTemplate|EoIB-wls-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.225.63,192.168.0.63,10.117.81.63,172.17.0.18 Production:Create|vServer|VS001|VSTProduction|BaseOEL56ServerTemplate|EoIB-wls-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.225.64,192.168.0.64,10.117.81.64,172.17.0.19 Production:Create|vServer|VS002|VSTProduction|BaseOEL56ServerTemplate|EoIB-wls-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.225.65,192.168.0.65,10.117.81.65,172.17.0.20 Production:Create|vServer|VS003|VSTProduction|BaseOEL56ServerTemplate|EoIB-wls-prod,vn-prod-web,IPoIB-default,IPoIB-vserver-shared-storage|10.51.225.66,192.168.0.66,10.117.81.66,172.17.0.21 Production:Create|Volume|VS006|50 Production:Create|Volume|VS007|50 Production:Create|Volume|VS008|50 Production:Create|Volume|VS009|50 Production:Create|Volume|VS000|50 Production:Create|Volume|VS001|50 Production:Create|Volume|VS002|50 Production:Create|Volume|VS003|50 Production:Attach|Volume|VS006|VS006 Production:Attach|Volume|VS007|VS007 Production:Attach|Volume|VS008|VS008 Production:Attach|Volume|VS009|VS009 
Production:Attach|Volume|VS000|VS000 Production:Attach|Volume|VS001|VS001 Production:Attach|Volume|VS002|VS002 Production:Attach|Volume|VS003|VS003 Production:Disconnect Development:Connect|exadev|welcome1 Development:Create|vServer|VS014|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.24,10.117.81.71,172.17.0.24 Development:Create|vServer|VS015|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.25,10.117.81.72,172.17.0.25 Development:Create|vServer|VS016|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.26,10.117.81.73,172.17.0.26 Development:Create|vServer|VS017|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.27,10.117.81.74,172.17.0.27 Development:Create|vServer|VS018|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.28,10.117.81.75,172.17.0.28 Development:Create|vServer|VS019|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.29,10.117.81.76,172.17.0.29 Development:Create|vServer|VS020|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.30,10.117.81.77,172.17.0.30 Development:Create|vServer|VS021|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.31,10.117.81.78,172.17.0.31 Development:Create|vServer|VS022|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.32,10.117.81.79,172.17.0.32 Development:Create|vServer|VS023|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.33,10.117.81.80,172.17.0.33 Development:Create|vServer|VS024|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.34,10.117.81.81,172.17.0.34 Development:Create|vServer|VS025|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.35,10.117.81.82,172.17.0.35 Development:Create|vServer|VS026|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.36,10.117.81.83,172.17.0.36 Development:Create|vServer|VS027|VSTDevelopment|BaseOEL56ServerTemplate|EoIB-development,IPoIB-default,IPoIB-vserver-shared-storage|10.51.224.37,10.117.81.84,172.17.0.37 Development:Create|Volume|VS014|50 Development:Create|Volume|VS015|50 Development:Create|Volume|VS016|50 Development:Create|Volume|VS017|50 Development:Create|Volume|VS018|50 Development:Create|Volume|VS019|50 Development:Create|Volume|VS020|50 Development:Create|Volume|VS021|50 Development:Create|Volume|VS022|50 Development:Create|Volume|VS023|50 Development:Create|Volume|VS024|50 Development:Create|Volume|VS025|50 Development:Create|Volume|VS026|50 Development:Create|Volume|VS027|50 Development:Attach|Volume|VS014|VS014 Development:Attach|Volume|VS015|VS015 Development:Attach|Volume|VS016|VS016 Development:Attach|Volume|VS017|VS017 Development:Attach|Volume|VS018|VS018 Development:Attach|Volume|VS019|VS019 Development:Attach|Volume|VS020|VS020 Development:Attach|Volume|VS021|VS021 Development:Attach|Volume|VS022|VS022 Development:Attach|Volume|VS023|VS023 Development:Attach|Volume|VS024|VS024 Development:Attach|Volume|VS025|VS025 
Development:Attach|Volume|VS026|VS026 Development:Attach|Volume|VS027|VS027 Development:Disconnect This entry was originally posted on the The Old Toxophilist Site.

    Read the article

  • The Benefits of Smart Grid Business Software

    - by Sylvie MacKenzie, PMP
    Smart Grid Background What Are Smart Grids?Smart Grids use computer hardware and software, sensors, controls, and telecommunications equipment and services to: Link customers to information that helps them manage consumption and use electricity wisely. Enable customers to respond to utility notices in ways that help minimize the duration of overloads, bottlenecks, and outages. Provide utilities with information that helps them improve performance and control costs. What Is Driving Smart Grid Development? Environmental ImpactSmart Grid development is picking up speed because of the widespread interest in reducing the negative impact that energy use has on the environment. Smart Grids use technology to drive efficiencies in transmission, distribution, and consumption. As a result, utilities can serve customers’ power needs with fewer generating plants, fewer transmission and distribution assets,and lower overall generation. With the possible exception of wind farm sprawl, landscape preservation is one obvious benefit. And because most generation today results in greenhouse gas emissions, Smart Grids reduce air pollution and the potential for global climate change.Smart Grids also more easily accommodate the technical difficulties of integrating intermittent renewable resources like wind and solar into the grid, providing further greenhouse gas reductions. CostsThe ability to defer the cost of plant and grid expansion is a major benefit to both utilities and customers. Utilities do not need to use as many internal resources for traditional infrastructure project planning and management. Large T&D infrastructure expansion costs are not passed on to customers.Smart Grids will not eliminate capital expansion, of course. Transmission corridors to connect renewable generation with customers will require major near-term expenditures. Additionally, in the future, electricity to satisfy the needs of population growth and additional applications will exceed the capacity reductions available through the Smart Grid. At that point, expansion will resume—but with greater overall T&D efficiency based on demand response, load control, and many other Smart Grid technologies and business processes. Energy efficiency is a second area of Smart Grid cost saving of particular relevance to customers. The timely and detailed information Smart Grids provide encourages customers to limit waste, adopt energy-efficient building codes and standards, and invest in energy efficient appliances. Efficiency may or may not lower customer bills because customer efficiency savings may be offset by higher costs in generation fuels or carbon taxes. It is clear, however, that bills will be lower with efficiency than without it. Utility Operations Smart Grids can serve as the central focus of utility initiatives to improve business processes. Many utilities have long “wish lists” of projects and applications they would like to fund in order to improve customer service or ease staff’s burden of repetitious work, but they have difficulty cost-justifying the changes, especially in the short term. Adding Smart Grid benefits to the cost/benefit analysis frequently tips the scales in favor of the change and can also significantly reduce payback periods.Mobile workforce applications and asset management applications work together to deploy assets and then to maintain, repair, and replace them. Many additional benefits result—for instance, increased productivity and fuel savings from better routing. 
Similarly, customer portals that provide customers with near-real-time information can also encourage online payments, thus lowering billing costs. Utilities can and should include these cost and service improvements in the list of Smart Grid benefits. What Is Smart Grid Business Software? Smart Grid business software gathers data from a Smart Grid and uses it improve a utility’s business processes. Smart Grid business software also helps utilities provide relevant information to customers who can then use it to reduce their own consumption and improve their environmental profiles. Smart Grid Business Software Minimizes the Impact of Peak Demand Utilities must size their assets to accommodate their highest peak demand. The higher the peak rises above base demand: The more assets a utility must build that are used only for brief periods—an inefficient use of capital. The higher the utility’s risk profile rises given the uncertainties surrounding the time needed for permitting, building, and recouping costs. The higher the costs for utilities to purchase supply, because generators can charge more for contracts and spot supply during high-demand periods. Smart Grids enable a variety of programs that reduce peak demand, including: Time-of-use pricing and critical peak pricing—programs that charge customers more when they consume electricity during peak periods. Pilot projects indicate that these programs are successful in flattening peaks, thus ensuring better use of existing T&D and generation assets. Direct load control, which lets utilities reduce or eliminate electricity flow to customer equipment (such as air conditioners). Contracts govern the terms and conditions of these turn-offs. Indirect load control, which signals customers to reduce the use of on-premises equipment for contractually agreed-on time periods. Smart Grid business software enables utilities to impose penalties on customers who do not comply with their contracts. Smart Grids also help utilities manage peaks with existing assets by enabling: Real-time asset monitoring and control. In this application, advanced sensors safely enable dynamic capacity load limits, ensuring that all grid assets can be used to their maximum capacity during peak demand periods. Real-time asset monitoring and control applications also detect the location of excessive losses and pinpoint need for mitigation and asset replacements. As a result, utilities reduce outage risk and guard against excess capacity or “over-build”. Better peak demand analysis. As a result: Distribution planners can better size equipment (e.g. transformers) to avoid over-building. Operations engineers can identify and resolve bottlenecks and other inefficiencies that may cause or exacerbate peaks. As above, the result is a reduction in the tendency to over-build. Supply managers can more closely match procurement with delivery. As a result, they can fine-tune supply portfolios, reducing the tendency to over-contract for peak supply and reducing the need to resort to spot market purchases during high peaks. Smart Grids can help lower the cost of remaining peaks by: Standardizing interconnections for new distributed resources (such as electricity storage devices). Placing the interconnections where needed to support anticipated grid congestion. Smart Grid Business Software Lowers the Cost of Field Services By processing Smart Grid data through their business software, utilities can reduce such field costs as: Vegetation management. 
Smart Grids can pinpoint momentary interruptions and tree-caused outages. Spatial mash-up tools leverage GIS models of tree growth for targeted vegetation management. This reduces the cost of unnecessary tree trimming. Service vehicle fuel. Many utility service calls are “false alarms.” Checking meter status before dispatching crews prevents many unnecessary “truck rolls.” Similarly, crews use far less fuel when Smart Grid sensors can pinpoint a problem and mobile workforce applications can then route them directly to it. Smart Grid Business Software Ensures Regulatory Compliance Smart Grids can ensure compliance with private contracts and with regional, national, or international requirements by: Monitoring fulfillment of contract terms. Utilities can use one-hour interval meters to ensure that interruptible (“non-core”) customers actually reduce or eliminate deliveries as required. They can use the information to levy fines against contract violators. Monitoring regulations imposed on customers, such as maximum use during specific time periods. Using accurate time-stamped event history derived from intelligent devices distributed throughout the smart grid to monitor and report reliability statistics and risk compliance. Automating business processes and activities that ensure compliance with security and reliability measures (e.g. NERC-CIP 2-9). Grid Business Software Strengthens Utilities’ Connection to Customers While Reducing Customer Service Costs During outages, Smart Grid business software can: Identify outages more quickly. Software uses sensors to pinpoint outages and nested outage locations. They also permit utilities to ensure outage resolution at every meter location. Size outages more accurately, permitting utilities to dispatch crews that have the skills needed, in appropriate numbers. Provide updates on outage location and expected duration. This information helps call centers inform customers about the timing of service restoration. Smart Grids also facilitates display of outage maps for customer and public-service use. Smart Grids can significantly reduce the cost to: Connect and disconnect customers. Meters capable of remote disconnect can virtually eliminate the costs of field crews and vehicles previously required to change service from the old to the new residents of a metered property or disconnect customers for nonpayment. Resolve reports of voltage fluctuation. Smart Grids gather and report voltage and power quality data from meters and grid sensors, enabling utilities to pinpoint reported problems or resolve them before customers complain. Detect and resolve non-technical losses (e.g. theft). Smart Grids can identify illegal attempts to reconnect meters or to use electricity in supposedly vacant premises. They can also detect theft by comparing flows through delivery assets with billed consumption. Smart Grids also facilitate outreach to customers. By monitoring and analyzing consumption over time, utilities can: Identify customers with unusually high usage and contact them before they receive a bill. They can also suggest conservation techniques that might help to limit consumption. This can head off “high bill” complaints to the contact center. Note that such “high usage” or “additional charges apply because you are out of range” notices—frequently via text messaging—are already common among mobile phone providers. Help customers identify appropriate bill payment alternatives (budget billing, prepayment, etc.). 
- Help customers find and reduce causes of over-consumption. Customers no longer need to wait for a bill in the mail before they even know there is a problem. Utilities benefit not just through improved customer relations but also through limiting the size of bills from customers who might struggle to pay them.

Where permitted, Smart Grids can open the doors to such new utility service offerings as:

- Monitoring properties. Landlords reduce the costs of vacant properties when utilities notify them of unexpected energy or water consumption. Utilities can perform similar services for owners of vacation properties or the adult children of aging parents.
- Monitoring equipment. Power-use patterns can reveal a need for equipment maintenance. Smart Grids permit utilities to alert owners or managers to a need for maintenance or replacement.
- Facilitating home and small-business networks. Smart Grids can provide a gateway to equipment networks that automate control or let owners access equipment remotely. They also facilitate net metering, offering some utilities a path toward involvement in small-scale solar or wind generation.
- Prepayment plans that do not need special meters.

Smart Grid Business Software Helps Customers Control Energy Costs

There is no end to the ways Smart Grids help both small and large customers control energy costs. For instance:

- Multi-premises customers appreciate having all meters read on the same day so that they can more easily compare consumption at various sites.
- Customers in competitive regions can match their consumption profile (detailed via Smart Grid data) with specific offerings from competitive suppliers.
- Customers seeing inexplicable consumption patterns and power quality problems may investigate further. The result can be discovery of electrical problems that can be resolved through rewiring or maintenance—before more serious fires or accidents happen.

Smart Grid Business Software Facilitates Use of Renewables

Generation from wind and solar resources is a popular alternative to fossil fuel generation, which emits greenhouse gases. Wind and solar generation may also increase energy security in regions that currently import fossil fuel for use in generation. Utilities face many technical issues as they attempt to integrate intermittent resource generation into grids that have traditionally handled only fully dispatchable generation. Smart Grid business software helps solve many of these issues by:

- Detecting sudden drops in production from renewables-generated electricity (wind and solar) and automatically triggering electricity storage and smart appliance response to compensate as needed.
- Supporting industry-standard distributed generation interconnection processes to reduce interconnection costs and avoid adding renewable supplies to locations already subject to grid congestion.
- Facilitating modeling and monitoring of locally generated supply from renewables, thus helping to maximize its use.
- Increasing the efficiency of "net metering" (through which utilities can use electricity generated by customers) by providing data for analysis and by integrating the production and consumption aspects of customer accounts.

During non-peak periods, such techniques enable utilities to increase the percentage of renewable generation in their supply mix. During peak periods, Smart Grid business software controls circuit reconfiguration to maximize available capacity.

Conclusion

Utility missions are changing. Yesterday, they focused on delivery of reasonably priced energy and water.
Tomorrow, their missions will expand to encompass sustainable use and environmental improvement. Smart Grids are key to helping utilities achieve this expanded mission, but they come at a relatively high price. Utilities will need to invest heavily in new hardware, software, business process development, and staff training. Customer investments in home area networks and smart appliances will be large. Learning to change the energy and water consumption habits of a lifetime could ultimately prove an even more formidable task.

Smart Grid business software can ease the cost and difficulties inherent in a needed transition to a more flexible, reliable, responsive electricity grid. Justifying its implementation, however, requires a full understanding of the benefits it brings—benefits that can ultimately help customers, utilities, communities, and the world address global issues like energy security and climate change while minimizing costs and maximizing customer convenience.

This white paper is available for download here. For further information about Oracle's Primavera Solutions for Utilities, please read our Utilities e-book.

    Read the article

  • Oracle Data Integrator 11.1.1.5 Complex Files as Sources and Targets

    - by Alex Kotopoulis
Overview

ODI 11.1.1.5 adds the new Complex File technology for use with file sources and targets. The goal is to read or write file structures that are too complex to be parsed using the existing ODI File technology. This includes:

- Different record types in one list that use different parsing rules
- Hierarchical lists, for example customers with nested orders
- Parsing instructions in the file data, such as delimiter types, field lengths, and type identifiers
- Complex headers, such as multiple header lines or parseable information in the header
- Skipping of lines
- Conditional or choice fields

Similar to the ODI File and XML File technologies, the complex file parsing is done through a JDBC driver that exposes the flat file as relational table structures. Complex files are mapped to one or more table structures, as opposed to the (simple) File technology, which always has a one-to-one relationship between file and table. The resulting set of tables follows the same concept as the ODI XML driver: table rows have additional PK-FK relationships to express hierarchy, as well as order values to maintain the file order in the resulting table.

The parsing instruction format used for complex files is the nXSD (native XSD) format that is already in use with Oracle BPEL. This format extends the XML Schema standard by adding parsing instructions to each element. Using nXSD parsing technology, the native file is converted into an internal XML format. It is important to understand that the XML is streamed to improve performance; there is no size limitation on the native file based on memory size, because the XML data is never fully materialized. The internal XML is then converted to a relational schema using the same mapping rules as the ODI XML driver.

How to Create an nXSD File

Complex file models depend on the nXSD schema for the given file. This nXSD file has to be created using a text editor or the Native Format Builder Wizard that is part of Oracle BPEL. BPEL is included in the ODI Suite, but not in standalone ODI Enterprise Edition. The nXSD format extends the standard XSD format through nxsd attributes. NXSD is a valid XML Schema, since the XSD standard allows extra attributes with their own namespaces.
The following is a sample nXSD schema:

<?xml version="1.0"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
            xmlns:tns="http://xmlns.oracle.com/pcbpel/demoSchema/csv"
            targetNamespace="http://xmlns.oracle.com/pcbpel/demoSchema/csv"
            elementFormDefault="qualified" attributeFormDefault="unqualified"
            nxsd:encoding="US-ASCII" nxsd:stream="chars" nxsd:version="NXSD">
  <xsd:element name="Root">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="Header">
          <xsd:complexType>
            <xsd:sequence>
              <xsd:element name="Branch" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
              <xsd:element name="ListDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
            </xsd:sequence>
          </xsd:complexType>
        </xsd:element>
        <xsd:element name="Customer" maxOccurs="unbounded">
          <xsd:complexType>
            <xsd:sequence>
              <xsd:element name="Name" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
              <xsd:element name="Street" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>
              <xsd:element name="City" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
            </xsd:sequence>
          </xsd:complexType>
        </xsd:element>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
</xsd:schema>

The nXSD schema annotates elements to describe their position and delimiters within the flat text file. The schema above uses almost exclusively the nxsd:terminatedBy instruction to look for the next terminator characters. There are various constructs in nXSD to parse fixed-length fields, look ahead in the document for string occurrences, perform conditional logic, use variables to remember state, and many more.

nXSD files can either be written manually using an XML Schema editor or created using the Native Format Builder Wizard. Both the Native Format Builder Wizard and the nXSD language are described in the Application Server Adapter Users Guide. The way to start the Native Format Builder in BPEL is to create a new File Adapter; in step 8 of the Adapter Configuration Wizard, a new Schema for Native Format can be created. The Native Format Builder guides you through a number of steps to generate the nXSD based on a sample native file. If the format is complex, it is often a good idea to "approximate" it with a similar simple format and then add the complex components manually. The resulting *.xsd file can be copied and used as the format for ODI; other BPEL constructs such as the file adapter definition are not relevant for ODI. Using this technique it is also possible to parse the same file format in SOA Suite and ODI, for example using SOA for small real-time messages and ODI for large batches.

The nXSD schema in this example describes a file with a header row containing data, followed by rows of three string fields each, delimited by commas, for example:

Redwood City Downtown Branch, 06/01/2011
Ebeneezer Scrooge, Sandy Lane, Atherton
Tiny Tim, Winton Terrace, Menlo Park
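To make the nXSD-to-XML mapping more concrete, the parsed form of this sample file corresponds conceptually to XML like the sketch below. This is an illustration only, not the driver's literal internal representation; as noted above, the driver streams this data and never materializes it as an XML document.

<!-- Conceptual sketch: how the sample CSV maps onto the schema's Root/Header/Customer structure -->
<Root xmlns="http://xmlns.oracle.com/pcbpel/demoSchema/csv">
  <Header>
    <Branch>Redwood City Downtown Branch</Branch>
    <ListDate>06/01/2011</ListDate>
  </Header>
  <Customer>
    <Name>Ebeneezer Scrooge</Name>
    <Street>Sandy Lane</Street>
    <City>Atherton</City>
  </Customer>
  <Customer>
    <Name>Tiny Tim</Name>
    <Street>Winton Terrace</Street>
    <City>Menlo Park</City>
  </Customer>
</Root>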
The ODI Complex File JDBC driver exposes the file structure through a set of relational tables with PK-FK relationships. The tables for this example are:

Table ROOT (1 row):
- ROOTPK: Primary key for the root element
- SNPSFILENAME: Name of the file
- SNPSFILEPATH: Path of the file
- SNPSLOADDATE: Date of load

Table HEADER (1 row):
- ROOTFK: Foreign key to the ROOT record
- ROWORDER: Order of the row in the native document
- BRANCH: Data
- BRANCHORDER: Order of Branch within the row
- LISTDATE: Data
- LISTDATEORDER: Order of ListDate within the row

Table ADDRESS (2 rows):
- ROOTFK: Foreign key to the ROOT record
- ROWORDER: Order of the row in the native document
- NAME: Data
- NAMEORDER: Order of Name within the row
- STREET: Data
- STREETORDER: Order of Street within the row
- CITY: Data
- CITYORDER: Order of City within the row

Every table has PK and/or FK fields to reflect the document hierarchy through relationships. In this example this is trivial, since the HEADER and all CUSTOMER records point back to the PK of ROOT; deeper nested documents require this to identify parent elements. All tables also have a ROWORDER field to define the order of rows, as well as order fields for each column, in case the order of columns varies in the original document and needs to be maintained. If order is not relevant, these fields can be ignored.

How to Create a Complex File Data Server in ODI

After creating the nXSD file and a test data file and storing them on a file system accessible to ODI, you can go to the ODI Topology Navigator to create a Data Server and Physical Schema under the Complex File technology. This technology follows the conventions of other ODI technologies and is very similar to the XML technology. The parsing settings, such as the source native file, the nXSD schema file, the root element, and the external database, can be set in the JDBC URL.

The use of an external database defined by dbprops is optional, but it is strongly recommended for production use; ideally, the staging database should be used for this. Also, when using a complex file exclusively for read purposes, it is recommended to use the ro=true property to ensure the file is not unnecessarily synchronized back from the database when the connection is closed. A data file is always required to be present at the filename path during design time. Without this file, operations like testing the connection, reading the model data, or reverse-engineering the model will fail. All properties of the Complex File JDBC driver are documented in the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator, Appendix C: Oracle Data Integrator Driver for Complex Files Reference. David Allan has created a great viewlet, Complex File Processing - 0 to 60, which shows the creation of a Complex File data server as well as a model based on this server.

How to Create Models Based on a Complex File Schema

Once the physical schema and logical schema have been created, the Complex File can be used to create a Model as if it were based on a database. When reverse-engineering the Model, datastores (tables) for each XSD element of complex type will be created. Use of complex files as sources is straightforward; when using them as targets, you must make sure that all dependent tables have matching PK-FK pairs. The same applies to the XML driver.
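Because the driver presents the file as relational tables, the reversed datastores can be treated like ordinary tables. The query below is a rough sketch of how the sample hierarchy could be read back in file order, using the table and column names listed above; it is illustrative only (in practice these datastores are normally used as sources or targets in ODI interfaces rather than queried by hand), and it assumes the driver's SQL support, as with the ODI XML driver.

-- Join the header and customer rows of the sample file via their common ROOT record,
-- preserving the original row order of the native document.
SELECT h.BRANCH, h.LISTDATE, a.NAME, a.STREET, a.CITY
FROM   HEADER h
JOIN   ADDRESS a ON a.ROOTFK = h.ROOTFK
ORDER BY a.ROWORDER;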
Debugging and Error Handling

There are different ways to test an nXSD file. The Native Format Builder Wizard can be used even if the nXSD wasn't created in it; it will show issues related to the schema and/or test data. In ODI, the nXSD will be parsed and run against the existing test data file when testing a connection in the Data Server. If either the nXSD has an error or the data is non-compliant with the schema, an error will be displayed. A sample error message:

Error while reading native data. [Line=1, Col=5] Not enough data available in the input, when trying to read data of length "19" for "element with name D1" from the specified position, using "style" as "fixedLength" and "length" as "". Ensure that there is enough data from the specified position in the input.

Complex File FAQ

Is the size of the native file limited by available memory?
No. Since the native data is streamed through the driver, only the available space in the staging database limits the size of the data. There are limits on individual field sizes, though; a single large object field needs to fit in memory.

Should I always use the complex file driver instead of the file driver in ODI now?
No. Use the File technology for all simple file parsing tasks, for example any fixed-length or delimited files that have just one row format and can be mapped into a simple table. Because of its narrow assumptions, the ODI file driver is easy to configure within ODI and can stream file data without writing it into a database. The complex file driver should be used whenever the use case cannot be handled by the file driver.

Are we generating XML out of flat files before we write it into a database?
No. We don't materialize any XML as part of parsing a flat file, either in memory or on disk. The data produced by the XML parser is streamed in Java objects that just use the XSD-derived nXSD schema as their type system. We use the nXSD schema because it is the standard for describing complex flat file metadata in Oracle Fusion Middleware, and it enables users to share schemas across products.

Is the nXSD file interchangeable with SOA Suite?
Yes. ODI can use the same nXSD files as SOA Suite, allowing mixed use cases with the same data format.

Can I start the Native Format Builder from ODI Studio?
No. The Native Format Builder has to be started from a JDeveloper instance with BPEL. You can get BPEL as part of the SOA Suite bundle. Users without SOA Suite can develop nXSD files manually using XSD editors.

When is the database data written back to the native file?
Data is synchronized using the SYNCHRONIZE and CREATE FILE commands, and when the JDBC connection is closed. It is recommended to set the ro or read_only property to true when a file is used exclusively for reading, so that no unnecessary write-backs occur.

Is the nXSD metadata part of the ODI Master or Work Repository?
No. The data server definition in the master repository only contains the JDBC URL with file paths; the nXSD files have to be accessible on the file systems where the JDBC driver is executed during production, either by copying them or by using a network file system.

Where can I find sample nXSD files?
The Application Server Adapter Users Guide contains nXSD samples for various use cases.
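As a closing sketch that ties these settings together, a data server URL for the customer example used exclusively for reads might look roughly like the line below. Treat it as an illustration: the paths and schema name are placeholders, and the exact property names (f for the native file, d for the nXSD schema, re for the root element, s for the schema) should be checked against Appendix C of the driver reference cited above. The ro=true property is the read-only setting discussed in the FAQ.

jdbc:snps:complexfile?f=/odi/demo/customers.csv&d=/odi/demo/customers.xsd&re=Root&s=CUSTOMERS&ro=true

Omitting ro (or setting it to false) would allow the SYNCHRONIZE and CREATE FILE commands to write data from the database back to the native file, as described in the FAQ above.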

    Read the article

< Previous Page | 306 307 308 309 310 311 312 313 314 315 316 317  | Next Page >