Search Results

Search found 15778 results on 632 pages for 'auto complete'.


  • Android camera preview blank screen

    - by user1104836
    I want to capture a photo manually not by using existing camera apps. So i made this Activity: public class CameraActivity extends Activity { private Camera mCamera; private Preview mPreview; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_camera); // Create an instance of Camera // mCamera = Camera.open(0); // Create our Preview view and set it as the content of our activity. mPreview = new Preview(this); mPreview.safeCameraOpen(0); FrameLayout preview = (FrameLayout) findViewById(R.id.camera_preview); preview.addView(mPreview); } @Override public boolean onCreateOptionsMenu(Menu menu) { // Inflate the menu; this adds items to the action bar if it is present. getMenuInflater().inflate(R.menu.activity_camera, menu); return true; } } This is the CameraPreview which i want to use: public class Preview extends ViewGroup implements SurfaceHolder.Callback { SurfaceView mSurfaceView; SurfaceHolder mHolder; //CAM INSTANCE ================================ private Camera mCamera; List<Size> mSupportedPreviewSizes; //============================================= /** * @param context */ public Preview(Context context) { super(context); mSurfaceView = new SurfaceView(context); addView(mSurfaceView); // Install a SurfaceHolder.Callback so we get notified when the // underlying surface is created and destroyed. mHolder = mSurfaceView.getHolder(); mHolder.addCallback(this); mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS); } /** * @param context * @param attrs */ public Preview(Context context, AttributeSet attrs) { super(context, attrs); // TODO Auto-generated constructor stub } /** * @param context * @param attrs * @param defStyle */ public Preview(Context context, AttributeSet attrs, int defStyle) { super(context, attrs, defStyle); // TODO Auto-generated constructor stub } /* (non-Javadoc) * @see android.view.ViewGroup#onLayout(boolean, int, int, int, int) */ @Override protected void onLayout(boolean arg0, int arg1, int arg2, int arg3, int arg4) { // TODO Auto-generated method stub } @Override public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { // Now that the size is known, set up the camera parameters and begin // the preview. Camera.Parameters parameters = mCamera.getParameters(); parameters.setPreviewSize(width, height); requestLayout(); mCamera.setParameters(parameters); /* Important: Call startPreview() to start updating the preview surface. Preview must be started before you can take a picture. */ mCamera.startPreview(); } @Override public void surfaceCreated(SurfaceHolder holder) { // TODO Auto-generated method stub } @Override public void surfaceDestroyed(SurfaceHolder holder) { // Surface will be destroyed when we return, so stop the preview. if (mCamera != null) { /* Call stopPreview() to stop updating the preview surface. */ mCamera.stopPreview(); } } /** * When this function returns, mCamera will be null. */ private void stopPreviewAndFreeCamera() { if (mCamera != null) { /* Call stopPreview() to stop updating the preview surface. */ mCamera.stopPreview(); /* Important: Call release() to release the camera for use by other applications. Applications should release the camera immediately in onPause() (and re-open() it in onResume()). 
*/ mCamera.release(); mCamera = null; } } public void setCamera(Camera camera) { if (mCamera == camera) { return; } stopPreviewAndFreeCamera(); mCamera = camera; if (mCamera != null) { List<Size> localSizes = mCamera.getParameters().getSupportedPreviewSizes(); mSupportedPreviewSizes = localSizes; requestLayout(); try { mCamera.setPreviewDisplay(mHolder); } catch (IOException e) { e.printStackTrace(); } /* Important: Call startPreview() to start updating the preview surface. Preview must be started before you can take a picture. */ mCamera.startPreview(); } } public boolean safeCameraOpen(int id) { boolean qOpened = false; try { releaseCameraAndPreview(); mCamera = Camera.open(id); mCamera.startPreview(); qOpened = (mCamera != null); } catch (Exception e) { // Log.e(R.string.app_name, "failed to open Camera"); e.printStackTrace(); } return qOpened; } Camera.PictureCallback mPicCallback = new Camera.PictureCallback() { @Override public void onPictureTaken(byte[] data, Camera camera) { // TODO Auto-generated method stub Log.d("lala", "pic is taken"); } }; public void takePic() { mCamera.startPreview(); // mCamera.takePicture(null, mPicCallback, mPicCallback); } private void releaseCameraAndPreview() { this.setCamera(null); if (mCamera != null) { mCamera.release(); mCamera = null; } } } This is the Layout of CameraActivity: <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" tools:context=".CameraActivity" > <FrameLayout android:id="@+id/camera_preview" android:layout_width="0dip" android:layout_height="fill_parent" android:layout_weight="1" /> <Button android:id="@+id/button_capture" android:text="Capture" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center" /> </LinearLayout> But when i start the CameraActivity, i just see a blank white background and the Capture-Button??? Why i dont see the Camera-Screen?
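    A likely culprit in the code above is that the opened camera is never attached to the preview surface: safeCameraOpen() calls startPreview() without ever calling setPreviewDisplay(), and surfaceCreated() is left as an empty stub. As a non-authoritative sketch (reusing the mCamera and mHolder fields and the deprecated android.hardware.Camera API from the question), surfaceCreated() could do the attachment:

        @Override
        public void surfaceCreated(SurfaceHolder holder) {
            if (mCamera == null) {
                return; // camera failed to open, nothing to show
            }
            try {
                // Attach the camera's preview stream to this surface...
                mCamera.setPreviewDisplay(holder);
                // ...and start pushing frames into it.
                mCamera.startPreview();
            } catch (IOException e) {
                Log.e("Preview", "Could not attach camera preview", e);
            }
        }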

    Read the article

  • Need help making Coda slider tabs move inside an overflow:hidden div

    - by Reden
    I'm not too good at javascript. and hope I can get a bit of help. I'm using Coda Slider 2.0, and have designed it to where the tabs are another slider to the right of the main slider, and each item. Basically like this mootools plugin http://landofcoder.com/demo/mootool/lofslidernews/index2.1.html Problem is the items will not scroll. How do I make the items (or tabs to the right) scroll down as the slider rotates? Otherwise the slider will show the 4th slide but not scroll to the 4th item on the right, but Thanks everyone. Here is the Coda-Slider plugin: // when the DOM is ready... $(document).ready(function () { var $panels = $('#slider .scrollContainer > div'); var $container = $('#slider .scrollContainer'); // if false, we'll float all the panels left and fix the width // of the container var horizontal = true; // float the panels left if we're going horizontal if (horizontal) { $panels.css({ 'float' : 'left', 'position' : 'relative' // IE fix to ensure overflow is hidden }); // calculate a new width for the container (so it holds all panels) $container.css('width', $panels[0].offsetWidth * $panels.length); } // collect the scroll object, at the same time apply the hidden overflow // to remove the default scrollbars that will appear var $scroll = $('#slider .scroll').css('overflow', 'hidden'); // apply our left + right buttons $scroll .before('<img class="scrollButtons left" src="images/scroll_left.png" />') .after('<img class="scrollButtons right" src="images/scroll_right.png" />'); // handle nav selection function selectNav() { $(this) .parents('ul:first') .find('a') .removeClass('selected') .end() .end() .addClass('selected'); } $('#slider .navigation').find('a').click(selectNav); // go find the navigation link that has this target and select the nav function trigger(data) { var el = $('#slider .navigation').find('a[href$="' + data.id + '"]').get(0); selectNav.call(el); } if (window.location.hash) { trigger({ id : window.location.hash.substr(1) }); } else { $('ul.navigation a:first').click(); } // offset is used to move to *exactly* the right place, since I'm using // padding on my example, I need to subtract the amount of padding to // the offset. Try removing this to get a good idea of the effect var offset = parseInt((horizontal ? $container.css('paddingTop') : $container.css('paddingLeft')) || 0) * -1; var scrollOptions = { target: $scroll, // the element that has the overflow // can be a selector which will be relative to the target items: $panels, navigation: '.navigation a', // selectors are NOT relative to document, i.e. make sure they're unique prev: 'img.left', next: 'img.right', // allow the scroll effect to run both directions axis: 'xy', onAfter: trigger, // our final callback offset: offset, // duration of the sliding effect duration: 500, // easing - can be used with the easing plugin: // http://gsgd.co.uk/sandbox/jquery/easing/ easing: 'swing' }; // apply serialScroll to the slider - we chose this plugin because it // supports// the indexed next and previous scroll along with hooking // in to our navigation. $('#slider').serialScroll(scrollOptions); // now apply localScroll to hook any other arbitrary links to trigger // the effect $.localScroll(scrollOptions); // finally, if the URL has a hash, move the slider in to position, // setting the duration to 1 because I don't want it to scroll in the // very first page load. We don't always need this, but it ensures // the positioning is absolutely spot on when the pages loads. 
scrollOptions.duration = 1; $.localScroll.hash(scrollOptions); /////////////////////////////////////////////// // autoscroll /////////////////////////////////////////////// // start to automatically cycle the tabs cycleTimer = setInterval(function () { $scroll.trigger('next'); }, 2000); // how many milliseconds, change this to whatever you like // select some trigger elements to stop the auto-cycle var $stopTriggers = $('#slider .navigation').find('a') // tab headers .add('.scroll') // panel itself .add('.stopscroll') // links to the stop class div .add('.navigation') // links to navigation id for tabs .add("a[href^='#']"); // links to a tab // this is the function that will stop the auto-cycle function stopCycle() { // remove the no longer needed stop triggers clearInterval(cycleTimer); // stop the auto-cycle itself $buttons.show(); // show the navigation buttons document.getElementById('stopscroll').style.display='none'; // hide the stop div document.getElementById('startscroll').style.display='block'; // block the start div } // bind stop cycle function to the click event using namespaces $stopTriggers.bind('click.cycle', stopCycle); /////////////////////////////////////////////// // end autoscroll /////////////////////////////////////////////// // edit to start again /////////////////////////////////////////////// // select some trigger elements to stop the auto-cycle var $startTriggers_start = $('#slider .navigation').find('a') // tab headers .add('.startscroll'); // links to the start class div // this is the function that will stop the auto-cycle function startCycle() { // remove the no longer needed stop triggers $buttons.hide(); // show the navigation buttons $scroll.trigger('next'); // directly to the next first cycleTimer = setInterval(function () { // now set timer again $scroll.trigger('next'); }, 5000); // how many milliseconds, change this to whatever you like document.getElementById('stopscroll').style.display='block'; // block the stop div document.getElementById('startscroll').style.display='none'; // hide the start div } // bind stop cycle function to the click event using namespaces $startTriggers_start.bind('click.cycle', startCycle); /////////////////////////////////////////////// // end edit to start /////////////////////////////////////////////// });
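    One way to keep the tab column in sync (an assumption about the markup, not part of the Coda Slider plugin itself) is to scroll the tab container from the same trigger() function that serialScroll already invokes via onAfter. A rough jQuery sketch, assuming the tabs sit inside the overflow:hidden element matched by '#slider .navigation' and that this element has position:relative:

        function trigger(data) {
            var el = $('#slider .navigation').find('a[href$="' + data.id + '"]').get(0);
            selectNav.call(el);

            // Scroll the hidden-overflow tab list so the active tab comes into view.
            var $nav = $('#slider .navigation');
            var targetTop = $(el).position().top + $nav.scrollTop();
            $nav.stop().animate({ scrollTop: targetTop }, 400);
        }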

    Read the article

  • JavaScript robot

    - by sarah
    hey guys! I need help making this robot game in javascript (notepad++) please HELP! I'm really confused by the functions <html> <head><title>Robot Invasion 2199</title></head> <body style="text-align:center" onload="newGame();"> <h2>Robot Invasion 2199</h2> <div style="text-align:center; background:white; margin-right: auto; margin-left:auto;"> <div style=""> <div style="width: auto; border:solid thin red; text-align:center; margin:10px auto 10px auto; padding:1ex 0ex;font-family: monospace" id="scene"></pre> </div> <div><span id="status"></span></div> <form style="text-align:center"> PUT THE CONTROL PANEL HERE!!! </form> </div> <script type="text/javascript"> // GENERAL SUGGESTIONS ABOUT WRITING THIS PROGRAM: // You should test your program before you've finished writing all of the // functions. The newGame, startLevel, and update functions should be your // first priority since they're all involved in displaying the initial state // of the game board. // // Next, work on putting together the control panel for the game so that you // can begin to interact with it. Your next goal should be to get the move // function working so that everything else can be testable. Note that all nine // of the movement buttons (including the pass button) should call the move // function when they are clicked, just with different parameters. // // All the remaining functions can be completed in pretty much any order, and // you'll see the game gradually improve as you write the functions. // // Just remember to keep your cool when writing this program. There are a // bunch of functions to write, but as long as you stay focused on the function // you're writing, each individual part is not that hard. // These variables specify the number of rows and columns in the game board. // Use these variables instead of hard coding the number of rows and columns // in your loops, etc. // i.e. Write: // for(i = 0; i < NUM_ROWS; i++) ... // not: // for(i = 0; i < 15; i++) ... var NUM_ROWS = 15; var NUM_COLS = 25; // Scene is arguably the most important variable in this whole program. It // should be set up as a two-dimensional array (with NUM_ROWS rows and // NUM_COLS columns). This represents the game board, with the scene[i][j] // representing what's in row i, column j. In particular, the entries should // be: // // "." for empty space // "R" for a robot // "S" for a scrap pile // "H" for the hero var scene; // These variables represent the row and column of the hero's location, // respectively. These are more of a conveniece so you don't have to search // for the "H" in the scene array when you need to know where the hero is. var heroRow; var heroCol; // These variables keep track of various aspects of the gameplay. // score is just the number of robots destroyed. // screwdrivers is the number of sonic screwdriver charges left. // fastTeleports is the number of fast teleports remaining. // level is the current level number. // Be sure to reset all of these when a new game starts, and update them at the // appropriate times. var score; var screwdrivers; var fastTeleports; var level; // This function should use a sonic screwdriver if there are still charges // left. The sonic screwdriver turns any robot that is in one of the eight // squares immediately adjacent to the hero into scrap. If there are no charges // left, then this function should instead pop up a dialog box with the message // "Out of sonic screwdrivers!". 
As with any function that alters the game's // state, this function should call the update function when it has finished. // // Your "Sonic Screwdriver" button should call this function directly. function screwdriver() { // WRITE THIS FUNCTION } // This function should move the hero to a randomly selected location if there // are still fast teleports left. This function MUST NOT move the hero on to // a square that is already occupied by a robot or a scrap pile, although it // can move the hero next to a robot. The number of fast teleports should also // be decreased by one. If there are no fast teleports left, this function // should just pop up a message box saying so. As with any function that alters // the game's state, this function should call the update function when it has // finished. // // HINT: Have a loop that keeps trying random spots until a valid one is found. // HINT: Use the validPosition function to tell if a spot is valid // // Your "Fast Teleport" button s
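    For the fast-teleport behaviour the comments describe (keep trying random spots until a valid one is found), the usual pattern is a retry loop over random cells. A sketch in plain JavaScript, using the scene, heroRow/heroCol, fastTeleports, NUM_ROWS/NUM_COLS and update() names defined above; the function name itself is only illustrative:

        function fastTeleport() {
            if (fastTeleports <= 0) {
                alert("Out of fast teleports!");
                return;
            }
            var r, c;
            do {
                // Keep picking random cells until we land on an empty one
                // (the assignment's validPosition() helper could be used here instead).
                r = Math.floor(Math.random() * NUM_ROWS);
                c = Math.floor(Math.random() * NUM_COLS);
            } while (scene[r][c] !== ".");
            scene[heroRow][heroCol] = "."; // vacate the hero's old square
            heroRow = r;
            heroCol = c;
            scene[heroRow][heroCol] = "H"; // place the hero on the new square
            fastTeleports--;
            update(); // redraw the board after any state change
        }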

    Read the article

  • Adventures in Windows 8: Working around the navigation animation issues in LayoutAwarePage

    - by Laurent Bugnion
    LayoutAwarePage is a pretty cool add-on to Windows 8 apps, which facilitates greatly the implementation of orientation-aware (portrait, landscape) as well as state-aware (snapped, filled, fullscreen) apps. It has however a few issues that are obvious when you use transformed elements on your page. Adding a LayoutAwarePage to your application If you start with a blank app, the MainPage is a vanilla Page, with no such feature. In order to have a LayoutAwarePage into your app, you need to add this class (and a few helpers) with the following operation: Right click on the Solution and select Add, New Item from the context menu. From the dialog, select a Basic Page (not a Blank Page, which is another vanilla page). If you prefer, you can also use Split Page, Items Page, Item Detail Page, Grouped Items Page or Group Detail Page which are all LayoutAwarePages. Personally I like to start with a Basic Page, which gives me more creative freedom. Adding this new page will cause Visual Studio to show a prompt asking you for permission to add additional helper files to the Common folder. One of these helpers in the LayoutAwarePage class, which is where the magic happens. LayoutAwarePage offers some help for the detection of orientation and state (which makes it a pleasure to design for all these scenarios in Blend, by the way) as well as storage for the navigation state (more about that in a future article). Issue with LayoutAwarePage When you use UI elements such as a background picture, a watermark label, logos, etc, it is quite common to do a few things with those: Making them partially transparent (this is especially true for background pictures; for instance I really like a black Page background with a half transparent picture placed on top of it). Transforming them, for instance rotating them a bit, scaling them, etc. Here is an example with a picture of my two beautiful daughters in the Bird Park in Kuala Lumpur, as well as a transformed TextBlock. The image has an opacity of 40% and the TextBlock a simple RotateTransform. If I create an application with a MainPage that navigates to this LayoutAwarePage, however, I will have a very annoying effect: The background picture appears with an Opacity of 100%. The TextBlock is not rotated. This lasts only for less than a second (during the navigation animation) before the elements “snap into place” and get their desired effect. 
Here is the XAML that cause the annoying effect: <common:LayoutAwarePage x:Name="pageRoot" x:Class="App13.BasicPage1" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:common="using:App13.Common" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d"> <Grid Style="{StaticResource LayoutRootStyle}"> <Grid.RowDefinitions> <RowDefinition Height="140" /> <RowDefinition Height="*" /> </Grid.RowDefinitions> <Image Source="Assets/el20120812025.jpg" Stretch="UniformToFill" Opacity="0.4" Grid.RowSpan="2" /> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto" /> <ColumnDefinition Width="*" /> </Grid.ColumnDefinitions> <Button x:Name="backButton" Click="GoBack" IsEnabled="{Binding Frame.CanGoBack, ElementName=pageRoot}" Style="{StaticResource BackButtonStyle}" /> <TextBlock x:Name="pageTitle" Grid.Column="1" Text="Welcome" Style="{StaticResource PageHeaderTextStyle}" /> </Grid> <TextBlock HorizontalAlignment="Center" TextWrapping="Wrap" Text="Welcome to my Windows 8 Application" Grid.Row="1" VerticalAlignment="Bottom" FontFamily="Segoe UI Light" FontSize="70" FontWeight="Light" TextAlignment="Center" Foreground="#FFFFA200" RenderTransformOrigin="0.5,0.5" UseLayoutRounding="False" d:LayoutRounding="Auto" Margin="0,0,0,153"> <TextBlock.RenderTransform> <CompositeTransform Rotation="-6.545" /> </TextBlock.RenderTransform> </TextBlock> <VisualStateManager.VisualStateGroups> [...] </VisualStateManager.VisualStateGroups> </Grid> </common:LayoutAwarePage> Solving the issue In order to solve this “snapping” issue, the solution is to wrap the elements that are transformed into an empty Grid. Honestly, to me it sounds like a bug in the LayoutAwarePage navigation animation, but thankfully the workaround is not that difficult: Simple change the main Grid as follows: <Grid Style="{StaticResource LayoutRootStyle}"> <Grid.RowDefinitions> <RowDefinition Height="140" /> <RowDefinition Height="*" /> </Grid.RowDefinitions> <Grid Grid.RowSpan="2"> <Image Source="Assets/el20120812025.jpg" Stretch="UniformToFill" Opacity="0.4" /> </Grid> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto" /> <ColumnDefinition Width="*" /> </Grid.ColumnDefinitions> <Button x:Name="backButton" Click="GoBack" IsEnabled="{Binding Frame.CanGoBack, ElementName=pageRoot}" Style="{StaticResource BackButtonStyle}" /> <TextBlock x:Name="pageTitle" Grid.Column="1" Text="Welcome" Style="{StaticResource PageHeaderTextStyle}" /> </Grid> <Grid Grid.Row="1"> <TextBlock HorizontalAlignment="Center" TextWrapping="Wrap" Text="Welcome to my Windows 8 Application" VerticalAlignment="Bottom" FontFamily="Segoe UI Light" FontSize="70" FontWeight="Light" TextAlignment="Center" Foreground="#FFFFA200" RenderTransformOrigin="0.5,0.5" UseLayoutRounding="False" d:LayoutRounding="Auto" Margin="0,0,0,153"> <TextBlock.RenderTransform> <CompositeTransform Rotation="-6.545" /> </TextBlock.RenderTransform> </TextBlock> </Grid> <VisualStateManager.VisualStateGroups> [...] </Grid> Hopefully this will help a few people, I banged my head on the wall for a while before someone at Microsoft pointed me to the solution ;) Happy coding, Laurent   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • Error when calling 'qb.query(db, projection, selection, selectionArgs, null, null, orderBy);'

    - by smalltalk1960s
    Hi all, I make a content provider named 'DictionaryProvider' (Based on NotepadProvider). When my program run to command 'qb.query(db, projection, selection, selectionArgs, null, null, orderBy);', error happen. I don't know how to fix. please help me. Below is my code // file main calling DictionnaryProvider @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.dictionary); final String[] PROJECTION = new String[] { DicColumns._ID, // 0 DicColumns.KEY_WORD, // 1 DicColumns.KEY_DEFINITION // 2 }; Cursor c = managedQuery(DicColumns.CONTENT_URI, PROJECTION, null, null, DicColumns.DEFAULT_SORT_ORDER); String str = ""; if (c.moveToFirst()) { int wordColumn = c.getColumnIndex("KEY_WORD"); int defColumn = c.getColumnIndex("KEY_DEFINITION"); do { // Get the field values str = ""; str += c.getString(wordColumn); str +="\n"; str +=c.getString(defColumn); } while (c.moveToNext()); } Toast.makeText(this, str, Toast.LENGTH_SHORT).show(); } // file DictionaryProvider.java package com.example.helloandroid; import java.util.HashMap; import android.content.ContentProvider; import android.content.ContentValues; import android.content.UriMatcher; import android.database.Cursor; import android.database.sqlite.SQLiteDatabase; import android.database.sqlite.SQLiteQueryBuilder; import android.net.Uri; import android.text.TextUtils; import com.example.helloandroid.Dictionary.DicColumns; public class DictionaryProvider extends ContentProvider { //private static final String TAG = "DictionaryProvider"; private DictionaryOpenHelper dbdic; static final int DATABASE_VERSION = 1; static final String DICTIONARY_DATABASE_NAME = "dictionarydb"; static final String DICTIONARY_TABLE_NAME = "dictionary"; private static final UriMatcher sUriMatcher; private static HashMap<String, String> sDicProjectionMap; @Override public int delete(Uri arg0, String arg1, String[] arg2) { // TODO Auto-generated method stub return 0; } @Override public String getType(Uri arg0) { // TODO Auto-generated method stub return null; } @Override public Uri insert(Uri arg0, ContentValues arg1) { // TODO Auto-generated method stub return null; } @Override public boolean onCreate() { // TODO Auto-generated method stub dbdic = new DictionaryOpenHelper(getContext(), DICTIONARY_DATABASE_NAME, null, DATABASE_VERSION); return true; } @Override public Cursor query(Uri uri, String[] projection, String selection, String[] selectionArgs, String sortOrder) { SQLiteQueryBuilder qb = new SQLiteQueryBuilder(); qb.setTables(DICTIONARY_TABLE_NAME); switch (sUriMatcher.match(uri)) { case 1: qb.setProjectionMap(sDicProjectionMap); break; case 2: qb.setProjectionMap(sDicProjectionMap); qb.appendWhere(DicColumns._ID + "=" + uri.getPathSegments().get(1)); break; default: throw new IllegalArgumentException("Unknown URI " + uri); } // If no sort order is specified use the default String orderBy; if (TextUtils.isEmpty(sortOrder)) { orderBy = DicColumns.DEFAULT_SORT_ORDER; } else { orderBy = sortOrder; } // Get the database and run the query SQLiteDatabase db = dbdic.getReadableDatabase(); Cursor c = qb.query(db, projection, selection, selectionArgs, null, null, orderBy); // Tell the cursor what uri to watch, so it knows when its source data changes c.setNotificationUri(getContext().getContentResolver(), uri); return c; } @Override public int update(Uri uri, ContentValues values, String selection, String[] selectionArgs) { // TODO Auto-generated method stub return 0; } static { sUriMatcher = new 
UriMatcher(UriMatcher.NO_MATCH); sUriMatcher.addURI(Dictionary.AUTHORITY, "dictionary", 1); sUriMatcher.addURI(Dictionary.AUTHORITY, "dictionary/#", 2); sDicProjectionMap = new HashMap<String, String>(); sDicProjectionMap.put(DicColumns._ID, DicColumns._ID); sDicProjectionMap.put(DicColumns.KEY_WORD, DicColumns.KEY_WORD); sDicProjectionMap.put(DicColumns.KEY_DEFINITION, DicColumns.KEY_DEFINITION); // Support for Live Folders. /*sLiveFolderProjectionMap = new HashMap<String, String>(); sLiveFolderProjectionMap.put(LiveFolders._ID, NoteColumns._ID + " AS " + LiveFolders._ID); sLiveFolderProjectionMap.put(LiveFolders.NAME, NoteColumns.TITLE + " AS " + LiveFolders.NAME);*/ // Add more columns here for more robust Live Folders. } } // file Dictionary.java package com.example.helloandroid; import android.net.Uri; import android.provider.BaseColumns; /** * Convenience definitions for DictionaryProvider */ public final class Dictionary { public static final String AUTHORITY = "com.example.helloandroid.provider.Dictionary"; // This class cannot be instantiated private Dictionary() {} /** * Dictionary table */ public static final class DicColumns implements BaseColumns { // This class cannot be instantiated private DicColumns() {} /** * The content:// style URL for this table */ public static final Uri CONTENT_URI = Uri.parse("content://" + AUTHORITY + "/dictionary"); /** * The MIME type of {@link #CONTENT_URI} providing a directory of notes. */ //public static final String CONTENT_TYPE = "vnd.android.cursor.dir/vnd.google.note"; /** * The MIME type of a {@link #CONTENT_URI} sub-directory of a single note. */ //public static final String CONTENT_ITEM_TYPE = "vnd.android.cursor.item/vnd.google.note"; /** * The default sort order for this table */ public static final String DEFAULT_SORT_ORDER = "modified DESC"; /** * The key_word of the dictionary * <P>Type: TEXT</P> */ public static final String KEY_WORD = "KEY_WORD"; /** * The key_definition of word * <P>Type: TEXT</P> */ public static final String KEY_DEFINITION = "KEY_DEFINITION"; } } thanks so much
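    The DictionaryOpenHelper class is not shown in the question, and a frequent cause of a crash exactly at qb.query() is that the helper's onCreate() never creates the 'dictionary' table (SQLite then reports "no such table"). Purely as a hypothetical sketch of what such a helper could look like, matching the constructor call and column names used above (including a 'modified' column, since DEFAULT_SORT_ORDER sorts on it):

        package com.example.helloandroid;

        import android.content.Context;
        import android.database.sqlite.SQLiteDatabase;
        import android.database.sqlite.SQLiteOpenHelper;
        import com.example.helloandroid.Dictionary.DicColumns;

        public class DictionaryOpenHelper extends SQLiteOpenHelper {

            public DictionaryOpenHelper(Context context, String name,
                    SQLiteDatabase.CursorFactory factory, int version) {
                super(context, name, factory, version);
            }

            @Override
            public void onCreate(SQLiteDatabase db) {
                // Without this CREATE TABLE, every query against the provider fails.
                db.execSQL("CREATE TABLE " + DictionaryProvider.DICTIONARY_TABLE_NAME + " ("
                        + DicColumns._ID + " INTEGER PRIMARY KEY,"
                        + DicColumns.KEY_WORD + " TEXT,"
                        + DicColumns.KEY_DEFINITION + " TEXT,"
                        + "modified INTEGER"
                        + ");");
            }

            @Override
            public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
                // Simplistic upgrade path for a sketch: drop and recreate.
                db.execSQL("DROP TABLE IF EXISTS " + DictionaryProvider.DICTIONARY_TABLE_NAME);
                onCreate(db);
            }
        }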

    Read the article

  • How to define div or table cell height depending on the height of other divs or cells

    - by John
    I want to have a web page that contains 3 parts: A header at the top of the page , a footer (both of which having specific height in px)and the main part of the page which should be a div or table cell with the appropriate height attribute in order to take all the available space between them. I want the page to take 100% of the browser window height, trying to avoid scrollbars. The problems I have are the following: USING DIVs a) If I set the maindiv height to 100%, the page overflows and I get a vertical scrolbar. (the maindiv's height is set to the 100% of the browser window) <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Untitled Document</title> </head> <style type="text/css"> <!-- body, html{ height: 100%; max-height:100%; width: 100%; margin:0; padding:0; } div{padding:0;margin:0;} #containerdiv{height:100%;width:100%;background-color:#FF9;border:0;} #headerdiv{height:150px;width:100%;background-color:#0F0;border:0;} #footerdiv{height:50px;width:100%;background-color:#00F;border:0; } #maindiv{ background-color:#F00; height:100%; } div{border:#000 medium solid;border:0;} </style> <body> <div id="containerdiv"> <div id="headerdiv">headerdiv</div> <div id="maindiv">maindiv</div> <div id="footerdiv">footerdiv</div> </div> </body> </html> b) If I set the maindiv height to auto, the maindiv height is depending on it's content, which is not what I want. USING tables a) If I set the main cell height to 100% it works fine with Firefox but in Internet Explorer 8 I get a vertical scrollbar (you can use the next code block using th style="height:100%" instead of "auto" to see this.) b) If I set the main cell to auto it seems to be working both in IE and FF but then I have the problem that anything I put inside the maincell (table or div) cannot get maincell's full height in IE. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Untitled Document</title> </head> <style type="text/css"> <!-- body, html, table{ height: 100%; width: 100%; margin:0; padding:0; } table{border:#000 0px solid} </style> <body> <table style="background:#063" cellpadding="0" cellspacing="0" border="0"> <tr><th style="height:150px;background-color:#FF0"></th></tr> <tr> <th style="height:auto"><table style="background:#0FF;" cellpadding="0" cellspacing="0" border="0"><tr><th style="height:auto">nested cell</th></tr></table> </th> </tr> <tr><th style="height:50px;background-color:#FF0"></th></tr> </table> </body> </html> </html> Any ideas? Maybe there is an easier way to define the size of the main part of the page in px using javascript? (my javascript skills are pretty poor so any help with this is welcome!)
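    Since the question explicitly asks about a JavaScript fallback, here is a minimal sketch that sizes #maindiv in pixels from the viewport height, assuming the 150px header and 50px footer from the CSS above (a pure-CSS alternative would be to position #maindiv absolutely with top:150px and bottom:50px):

        function sizeMainDiv() {
            // window.innerHeight is undefined in IE8, hence the clientHeight fallback.
            var viewportHeight = window.innerHeight || document.documentElement.clientHeight;
            var headerHeight = 150; // must match #headerdiv in the CSS
            var footerHeight = 50;  // must match #footerdiv in the CSS
            document.getElementById('maindiv').style.height =
                (viewportHeight - headerHeight - footerHeight) + 'px';
        }

        window.onload = sizeMainDiv;
        window.onresize = sizeMainDiv;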

    Read the article

  • LVM / Device Mapper maps wrong device

    - by DaDaDom
    Hi, I run a LVM setup on a raid1 created by mdadm. md2 is based on sda6 (major:minor 8:6) and sdb6 (8:22). md2 is partition 9:2. The VG on top of md2 has 4 LVs, var, home, usr, tmp. First the problem: While booting it seems as if the device mapper takes the wrong partition for the mapping! Immediately after boot the information is like ~# dmsetup table systemlvm-home: 0 4194304 linear 8:22 384 systemlvm-home: 4194304 16777216 linear 8:22 69206400 systemlvm-home: 20971520 8388608 linear 8:22 119538048 systemlvm-home: 29360128 6291456 linear 8:22 243270016 systemlvm-tmp: 0 2097152 linear 8:22 41943424 systemlvm-usr: 0 10485760 linear 8:22 20971904 systemlvm-var: 0 10485760 linear 8:22 10486144 systemlvm-var: 10485760 6291456 linear 8:22 4194688 systemlvm-var: 16777216 4194304 linear 8:22 44040576 systemlvm-var: 20971520 10485760 linear 8:22 31457664 systemlvm-var: 31457280 20971520 linear 8:22 48234880 systemlvm-var: 52428800 33554432 linear 8:22 85983616 systemlvm-var: 85983232 115343360 linear 8:22 127926656 ~# cat /proc/mdstat Personalities : [raid1] md2 : active (auto-read-only) raid1 sda6[0] 151798080 blocks [2/1] [U_] md0 : active raid1 sda1[0] sdb1[1] 96256 blocks [2/2] [UU] md1 : active raid1 sda2[0] sdb2[1] 2931776 blocks [2/2] [UU] I have to manually "lvchange -an" all LVs, add /dev/sdb6 back to the raid and reactivate the LVs, then all is fine. But it prevents me from automounting the partitions and obviously leads to a bunch of other problems. If everything works fine, the information is like ~$ cat /proc/mdstat Personalities : [raid1] md2 : active raid1 sdb6[1] sda6[0] 151798080 blocks [2/2] [UU] ... ~# dmsetup table systemlvm-home: 0 4194304 linear 9:2 384 systemlvm-home: 4194304 16777216 linear 9:2 69206400 systemlvm-home: 20971520 8388608 linear 9:2 119538048 systemlvm-home: 29360128 6291456 linear 9:2 243270016 systemlvm-tmp: 0 2097152 linear 9:2 41943424 systemlvm-usr: 0 10485760 linear 9:2 20971904 systemlvm-var: 0 10485760 linear 9:2 10486144 systemlvm-var: 10485760 6291456 linear 9:2 4194688 systemlvm-var: 16777216 4194304 linear 9:2 44040576 systemlvm-var: 20971520 10485760 linear 9:2 31457664 systemlvm-var: 31457280 20971520 linear 9:2 48234880 systemlvm-var: 52428800 33554432 linear 9:2 85983616 systemlvm-var: 85983232 115343360 linear 9:2 127926656 I think that LVM for some reason just "takes" /dev/sdb6 which is then missing in the raid. I tried almost all options in the lvm.conf but none seems to work. Below is some more information, like config files. Does anyone have any idea about what is going on here and how to prevent that? If you need any additional information, please let me know Thanks in advance! 
Dominik The information (off a "repaired" system): ~# cat /etc/debian_version 5.0.4 ~# uname -a Linux kermit 2.6.26-2-686 #1 SMP Wed Feb 10 08:59:21 UTC 2010 i686 GNU/Linux ~# lvm version LVM version: 2.02.39 (2008-06-27) Library version: 1.02.27 (2008-06-25) Driver version: 4.13.0 ~# cat /etc/mdadm/mdadm.conf DEVICE partitions ARRAY /dev/md1 level=raid1 num-devices=2 metadata=00.90 UUID=11e9dc6c:1da99f3f:b3088ca6:c6fe60e9 ARRAY /dev/md0 level=raid1 num-devices=2 metadata=00.90 UUID=92ed1e4b:897361d3:070682b3:3baa4fa1 ARRAY /dev/md2 level=raid1 num-devices=2 metadata=00.90 UUID=601d4642:39dc80d7:96e8bbac:649924ba ~# mount /dev/md1 on / type ext3 (rw,errors=remount-ro) tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755) proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys type sysfs (rw,noexec,nosuid,nodev) procbususb on /proc/bus/usb type usbfs (rw) udev on /dev type tmpfs (rw,mode=0755) tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev) devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620) /dev/md0 on /boot type ext3 (rw) /dev/mapper/systemlvm-usr on /usr type reiserfs (rw) /dev/mapper/systemlvm-tmp on /tmp type reiserfs (rw) /dev/mapper/systemlvm-home on /home type reiserfs (rw) /dev/mapper/systemlvm-var on /var type reiserfs (rw) ~# grep -v ^$ /etc/lvm/lvm.conf | grep -v "#" devices { dir = "/dev" scan = [ "/dev" ] preferred_names = [ ] filter = [ "a|/dev/md.*|", "r/.*/" ] cache_dir = "/etc/lvm/cache" cache_file_prefix = "" write_cache_state = 1 sysfs_scan = 1 md_component_detection = 1 ignore_suspended_devices = 0 } log { verbose = 0 syslog = 1 overwrite = 0 level = 0 indent = 1 command_names = 0 prefix = " " } backup { backup = 1 backup_dir = "/etc/lvm/backup" archive = 1 archive_dir = "/etc/lvm/archive" retain_min = 10 retain_days = 30 } shell { history_size = 100 } global { umask = 077 test = 0 units = "h" activation = 1 proc = "/proc" locking_type = 1 fallback_to_clustered_locking = 1 fallback_to_local_locking = 1 locking_dir = "/lib/init/rw" } activation { missing_stripe_filler = "/dev/ioerror" reserved_stack = 256 reserved_memory = 8192 process_priority = -18 mirror_region_size = 512 readahead = "auto" mirror_log_fault_policy = "allocate" mirror_device_fault_policy = "remove" } :~# vgscan -vvv Processing: vgscan -vvv O_DIRECT will be used Setting global/locking_type to 1 File-based locking selected. 
Setting global/locking_dir to /lib/init/rw Locking /lib/init/rw/P_global WB Wiping cache of LVM-capable devices /dev/block/1:0: Added to device cache /dev/block/1:1: Added to device cache /dev/block/1:10: Added to device cache /dev/block/1:11: Added to device cache /dev/block/1:12: Added to device cache /dev/block/1:13: Added to device cache /dev/block/1:14: Added to device cache /dev/block/1:15: Added to device cache /dev/block/1:2: Added to device cache /dev/block/1:3: Added to device cache /dev/block/1:4: Added to device cache /dev/block/1:5: Added to device cache /dev/block/1:6: Added to device cache /dev/block/1:7: Added to device cache /dev/block/1:8: Added to device cache /dev/block/1:9: Added to device cache /dev/block/253:0: Added to device cache /dev/block/253:1: Added to device cache /dev/block/253:2: Added to device cache /dev/block/253:3: Added to device cache /dev/block/8:0: Added to device cache /dev/block/8:1: Added to device cache /dev/block/8:16: Added to device cache /dev/block/8:17: Added to device cache /dev/block/8:18: Added to device cache /dev/block/8:19: Added to device cache /dev/block/8:2: Added to device cache /dev/block/8:21: Added to device cache /dev/block/8:22: Added to device cache /dev/block/8:3: Added to device cache /dev/block/8:5: Added to device cache /dev/block/8:6: Added to device cache /dev/block/9:0: Already in device cache /dev/block/9:1: Already in device cache /dev/block/9:2: Already in device cache /dev/bsg/0:0:0:0: Not a block device /dev/bsg/1:0:0:0: Not a block device /dev/bus/usb/001/001: Not a block device [... many more "not a block device"] /dev/core: Not a block device /dev/cpu_dma_latency: Not a block device /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L507895: Aliased to /dev/block/8:16 in device cache /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L507895-part1: Aliased to /dev/block/8:17 in device cache /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L507895-part2: Aliased to /dev/block/8:18 in device cache /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L507895-part3: Aliased to /dev/block/8:19 in device cache /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L507895-part5: Aliased to /dev/block/8:21 in device cache /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L507895-part6: Aliased to /dev/block/8:22 in device cache /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L526800: Aliased to /dev/block/8:0 in device cache /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L526800-part1: Aliased to /dev/block/8:1 in device cache /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L526800-part2: Aliased to /dev/block/8:2 in device cache /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L526800-part3: Aliased to /dev/block/8:3 in device cache /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L526800-part5: Aliased to /dev/block/8:5 in device cache /dev/disk/by-id/ata-SAMSUNG_HD160JJ_S08HJ10L526800-part6: Aliased to /dev/block/8:6 in device cache /dev/disk/by-id/dm-name-systemlvm-home: Aliased to /dev/block/253:2 in device cache /dev/disk/by-id/dm-name-systemlvm-tmp: Aliased to /dev/block/253:3 in device cache /dev/disk/by-id/dm-name-systemlvm-usr: Aliased to /dev/block/253:1 in device cache /dev/disk/by-id/dm-name-systemlvm-var: Aliased to /dev/block/253:0 in device cache /dev/disk/by-id/dm-uuid-LVM-rL8Oq2dA7oeRYeu1orJA7Ufnb1kjOyvr25N7CRZpUMzR18NfS6zeSeAVnVT98LuU: Aliased to /dev/block/253:0 in device cache /dev/disk/by-id/dm-uuid-LVM-rL8Oq2dA7oeRYeu1orJA7Ufnb1kjOyvr3TpFXtLjYGEwn79IdXsSCZPl8AxmqbmQ: Aliased to /dev/block/253:1 in device cache 
/dev/disk/by-id/dm-uuid-LVM-rL8Oq2dA7oeRYeu1orJA7Ufnb1kjOyvrc5MJ4KolevMjt85PPBrQuRTkXbx6NvTi: Aliased to /dev/block/253:3 in device cache /dev/disk/by-id/dm-uuid-LVM-rL8Oq2dA7oeRYeu1orJA7Ufnb1kjOyvrYXrfdg5OSYDVkNeiQeQksgCI849Z2hx8: Aliased to /dev/block/253:2 in device cache /dev/disk/by-id/md-uuid-11e9dc6c:1da99f3f:b3088ca6:c6fe60e9: Already in device cache /dev/disk/by-id/md-uuid-601d4642:39dc80d7:96e8bbac:649924ba: Already in device cache /dev/disk/by-id/md-uuid-92ed1e4b:897361d3:070682b3:3baa4fa1: Already in device cache /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L507895: Aliased to /dev/block/8:16 in device cache /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L507895-part1: Aliased to /dev/block/8:17 in device cache /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L507895-part2: Aliased to /dev/block/8:18 in device cache /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L507895-part3: Aliased to /dev/block/8:19 in device cache /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L507895-part5: Aliased to /dev/block/8:21 in device cache /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L507895-part6: Aliased to /dev/block/8:22 in device cache /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L526800: Aliased to /dev/block/8:0 in device cache /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L526800-part1: Aliased to /dev/block/8:1 in device cache /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L526800-part2: Aliased to /dev/block/8:2 in device cache /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L526800-part3: Aliased to /dev/block/8:3 in device cache /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L526800-part5: Aliased to /dev/block/8:5 in device cache /dev/disk/by-id/scsi-SATA_SAMSUNG_HD160JJS08HJ10L526800-part6: Aliased to /dev/block/8:6 in device cache /dev/disk/by-path/pci-0000:00:0f.0-scsi-0:0:0:0: Aliased to /dev/block/8:0 in device cache /dev/disk/by-path/pci-0000:00:0f.0-scsi-0:0:0:0-part1: Aliased to /dev/block/8:1 in device cache /dev/disk/by-path/pci-0000:00:0f.0-scsi-0:0:0:0-part2: Aliased to /dev/block/8:2 in device cache /dev/disk/by-path/pci-0000:00:0f.0-scsi-0:0:0:0-part3: Aliased to /dev/block/8:3 in device cache /dev/disk/by-path/pci-0000:00:0f.0-scsi-0:0:0:0-part5: Aliased to /dev/block/8:5 in device cache /dev/disk/by-path/pci-0000:00:0f.0-scsi-0:0:0:0-part6: Aliased to /dev/block/8:6 in device cache /dev/disk/by-path/pci-0000:00:0f.0-scsi-1:0:0:0: Aliased to /dev/block/8:16 in device cache /dev/disk/by-path/pci-0000:00:0f.0-scsi-1:0:0:0-part1: Aliased to /dev/block/8:17 in device cache /dev/disk/by-path/pci-0000:00:0f.0-scsi-1:0:0:0-part2: Aliased to /dev/block/8:18 in device cache /dev/disk/by-path/pci-0000:00:0f.0-scsi-1:0:0:0-part3: Aliased to /dev/block/8:19 in device cache /dev/disk/by-path/pci-0000:00:0f.0-scsi-1:0:0:0-part5: Aliased to /dev/block/8:21 in device cache /dev/disk/by-path/pci-0000:00:0f.0-scsi-1:0:0:0-part6: Aliased to /dev/block/8:22 in device cache /dev/disk/by-uuid/13c1262b-e06f-40ce-b088-ce410640a6dc: Aliased to /dev/block/253:3 in device cache /dev/disk/by-uuid/379f57b0-2e03-414c-808a-f76160617336: Aliased to /dev/block/253:2 in device cache /dev/disk/by-uuid/4fb2d6d3-bd51-48d3-95ee-8e404faf243d: Already in device cache /dev/disk/by-uuid/5c6728ec-82c1-49c0-93c5-f6dbd5c0d659: Aliased to /dev/block/8:5 in device cache /dev/disk/by-uuid/a13cdfcd-2191-4185-a727-ffefaf7a382e: Aliased to /dev/block/253:1 in device cache /dev/disk/by-uuid/e0d5893d-ff88-412f-b753-9e3e9af3242d: Aliased to /dev/block/8:21 in device cache 
/dev/disk/by-uuid/e79c9da6-8533-4e55-93ec-208876671edc: Aliased to /dev/block/253:0 in device cache /dev/disk/by-uuid/f3f176f5-12f7-4af8-952a-c6ac43a6e332: Already in device cache /dev/dm-0: Aliased to /dev/block/253:0 in device cache (preferred name) /dev/dm-1: Aliased to /dev/block/253:1 in device cache (preferred name) /dev/dm-2: Aliased to /dev/block/253:2 in device cache (preferred name) /dev/dm-3: Aliased to /dev/block/253:3 in device cache (preferred name) /dev/fd: Symbolic link to directory /dev/full: Not a block device /dev/hpet: Not a block device /dev/initctl: Not a block device /dev/input/by-path/platform-i8042-serio-0-event-kbd: Not a block device /dev/input/event0: Not a block device /dev/input/mice: Not a block device /dev/kmem: Not a block device /dev/kmsg: Not a block device /dev/log: Not a block device /dev/loop/0: Added to device cache /dev/MAKEDEV: Not a block device /dev/mapper/control: Not a block device /dev/mapper/systemlvm-home: Aliased to /dev/dm-2 in device cache /dev/mapper/systemlvm-tmp: Aliased to /dev/dm-3 in device cache /dev/mapper/systemlvm-usr: Aliased to /dev/dm-1 in device cache /dev/mapper/systemlvm-var: Aliased to /dev/dm-0 in device cache /dev/md0: Already in device cache /dev/md1: Already in device cache /dev/md2: Already in device cache /dev/mem: Not a block device /dev/net/tun: Not a block device /dev/network_latency: Not a block device /dev/network_throughput: Not a block device /dev/null: Not a block device /dev/port: Not a block device /dev/ppp: Not a block device /dev/psaux: Not a block device /dev/ptmx: Not a block device /dev/pts/0: Not a block device /dev/ram0: Aliased to /dev/block/1:0 in device cache (preferred name) /dev/ram1: Aliased to /dev/block/1:1 in device cache (preferred name) /dev/ram10: Aliased to /dev/block/1:10 in device cache (preferred name) /dev/ram11: Aliased to /dev/block/1:11 in device cache (preferred name) /dev/ram12: Aliased to /dev/block/1:12 in device cache (preferred name) /dev/ram13: Aliased to /dev/block/1:13 in device cache (preferred name) /dev/ram14: Aliased to /dev/block/1:14 in device cache (preferred name) /dev/ram15: Aliased to /dev/block/1:15 in device cache (preferred name) /dev/ram2: Aliased to /dev/block/1:2 in device cache (preferred name) /dev/ram3: Aliased to /dev/block/1:3 in device cache (preferred name) /dev/ram4: Aliased to /dev/block/1:4 in device cache (preferred name) /dev/ram5: Aliased to /dev/block/1:5 in device cache (preferred name) /dev/ram6: Aliased to /dev/block/1:6 in device cache (preferred name) /dev/ram7: Aliased to /dev/block/1:7 in device cache (preferred name) /dev/ram8: Aliased to /dev/block/1:8 in device cache (preferred name) /dev/ram9: Aliased to /dev/block/1:9 in device cache (preferred name) /dev/random: Not a block device /dev/root: Already in device cache /dev/rtc: Not a block device /dev/rtc0: Not a block device /dev/sda: Aliased to /dev/block/8:0 in device cache (preferred name) /dev/sda1: Aliased to /dev/block/8:1 in device cache (preferred name) /dev/sda2: Aliased to /dev/block/8:2 in device cache (preferred name) /dev/sda3: Aliased to /dev/block/8:3 in device cache (preferred name) /dev/sda5: Aliased to /dev/block/8:5 in device cache (preferred name) /dev/sda6: Aliased to /dev/block/8:6 in device cache (preferred name) /dev/sdb: Aliased to /dev/block/8:16 in device cache (preferred name) /dev/sdb1: Aliased to /dev/block/8:17 in device cache (preferred name) /dev/sdb2: Aliased to /dev/block/8:18 in device cache (preferred name) /dev/sdb3: Aliased to /dev/block/8:19 
in device cache (preferred name) /dev/sdb5: Aliased to /dev/block/8:21 in device cache (preferred name) /dev/sdb6: Aliased to /dev/block/8:22 in device cache (preferred name) /dev/shm/network/ifstate: Not a block device /dev/snapshot: Not a block device /dev/sndstat: stat failed: Datei oder Verzeichnis nicht gefunden /dev/stderr: Not a block device /dev/stdin: Not a block device /dev/stdout: Not a block device /dev/systemlvm/home: Aliased to /dev/dm-2 in device cache /dev/systemlvm/tmp: Aliased to /dev/dm-3 in device cache /dev/systemlvm/usr: Aliased to /dev/dm-1 in device cache /dev/systemlvm/var: Aliased to /dev/dm-0 in device cache /dev/tty: Not a block device /dev/tty0: Not a block device [... many more "not a block device"] /dev/vcsa6: Not a block device /dev/xconsole: Not a block device /dev/zero: Not a block device Wiping internal VG cache lvmcache: initialised VG #orphans_lvm1 lvmcache: initialised VG #orphans_pool lvmcache: initialised VG #orphans_lvm2 Reading all physical volumes. This may take a while... Finding all volume groups /dev/ram0: Skipping (regex) /dev/loop/0: Skipping (sysfs) /dev/sda: Skipping (regex) Opened /dev/md0 RO /dev/md0: size is 192512 sectors Closed /dev/md0 /dev/md0: size is 192512 sectors Opened /dev/md0 RW O_DIRECT /dev/md0: block size is 1024 bytes Closed /dev/md0 Using /dev/md0 Opened /dev/md0 RW O_DIRECT /dev/md0: block size is 1024 bytes /dev/md0: No label detected Closed /dev/md0 /dev/dm-0: Skipping (regex) /dev/ram1: Skipping (regex) /dev/sda1: Skipping (regex) Opened /dev/md1 RO /dev/md1: size is 5863552 sectors Closed /dev/md1 /dev/md1: size is 5863552 sectors Opened /dev/md1 RW O_DIRECT /dev/md1: block size is 4096 bytes Closed /dev/md1 Using /dev/md1 Opened /dev/md1 RW O_DIRECT /dev/md1: block size is 4096 bytes /dev/md1: No label detected Closed /dev/md1 /dev/dm-1: Skipping (regex) /dev/ram2: Skipping (regex) /dev/sda2: Skipping (regex) Opened /dev/md2 RO /dev/md2: size is 303596160 sectors Closed /dev/md2 /dev/md2: size is 303596160 sectors Opened /dev/md2 RW O_DIRECT /dev/md2: block size is 4096 bytes Closed /dev/md2 Using /dev/md2 Opened /dev/md2 RW O_DIRECT /dev/md2: block size is 4096 bytes /dev/md2: lvm2 label detected lvmcache: /dev/md2: now in VG #orphans_lvm2 (#orphans_lvm2) /dev/md2: Found metadata at 39936 size 2632 (in area at 2048 size 194560) for systemlvm (rL8Oq2-dA7o-eRYe-u1or-JA7U-fnb1-kjOyvr) lvmcache: /dev/md2: now in VG systemlvm with 1 mdas lvmcache: /dev/md2: setting systemlvm VGID to rL8Oq2dA7oeRYeu1orJA7Ufnb1kjOyvr lvmcache: /dev/md2: VG systemlvm: Set creation host to rescue. 
Closed /dev/md2 /dev/dm-2: Skipping (regex) /dev/ram3: Skipping (regex) /dev/sda3: Skipping (regex) /dev/dm-3: Skipping (regex) /dev/ram4: Skipping (regex) /dev/ram5: Skipping (regex) /dev/sda5: Skipping (regex) /dev/ram6: Skipping (regex) /dev/sda6: Skipping (regex) /dev/ram7: Skipping (regex) /dev/ram8: Skipping (regex) /dev/ram9: Skipping (regex) /dev/ram10: Skipping (regex) /dev/ram11: Skipping (regex) /dev/ram12: Skipping (regex) /dev/ram13: Skipping (regex) /dev/ram14: Skipping (regex) /dev/ram15: Skipping (regex) /dev/sdb: Skipping (regex) /dev/sdb1: Skipping (regex) /dev/sdb2: Skipping (regex) /dev/sdb3: Skipping (regex) /dev/sdb5: Skipping (regex) /dev/sdb6: Skipping (regex) Locking /lib/init/rw/V_systemlvm RB Finding volume group "systemlvm" Opened /dev/md2 RW O_DIRECT /dev/md2: block size is 4096 bytes /dev/md2: lvm2 label detected lvmcache: /dev/md2: now in VG #orphans_lvm2 (#orphans_lvm2) with 1 mdas /dev/md2: Found metadata at 39936 size 2632 (in area at 2048 size 194560) for systemlvm (rL8Oq2-dA7o-eRYe-u1or-JA7U-fnb1-kjOyvr) lvmcache: /dev/md2: now in VG systemlvm with 1 mdas lvmcache: /dev/md2: setting systemlvm VGID to rL8Oq2dA7oeRYeu1orJA7Ufnb1kjOyvr lvmcache: /dev/md2: VG systemlvm: Set creation host to rescue. Using cached label for /dev/md2 Read systemlvm metadata (19) from /dev/md2 at 39936 size 2632 /dev/md2 0: 0 16: home(0:0) /dev/md2 1: 16 24: var(40:0) /dev/md2 2: 40 40: var(0:0) /dev/md2 3: 80 40: usr(0:0) /dev/md2 4: 120 40: var(80:0) /dev/md2 5: 160 8: tmp(0:0) /dev/md2 6: 168 16: var(64:0) /dev/md2 7: 184 80: var(120:0) /dev/md2 8: 264 64: home(16:0) /dev/md2 9: 328 128: var(200:0) /dev/md2 10: 456 32: home(80:0) /dev/md2 11: 488 440: var(328:0) /dev/md2 12: 928 24: home(112:0) /dev/md2 13: 952 206: NULL(0:0) Found volume group "systemlvm" using metadata type lvm2 Read volume group systemlvm from /etc/lvm/backup/systemlvm Unlocking /lib/init/rw/V_systemlvm Closed /dev/md2 Unlocking /lib/init/rw/P_global ~# vgdisplay --- Volume group --- VG Name systemlvm System ID Format lvm2 Metadata Areas 1 Metadata Sequence No 19 VG Access read/write VG Status resizable MAX LV 0 Cur LV 4 Open LV 4 Max PV 0 Cur PV 1 Act PV 1 VG Size 144,75 GB PE Size 128,00 MB Total PE 1158 Alloc PE / Size 952 / 119,00 GB Free PE / Size 206 / 25,75 GB VG UUID rL8Oq2-dA7o-eRYe-u1or-JA7U-fnb1-kjOyvr ~# pvdisplay --- Physical volume --- PV Name /dev/md2 VG Name systemlvm PV Size 144,77 GB / not usable 16,31 MB Allocatable yes PE Size (KByte) 131072 Total PE 1158 Free PE 206 Allocated PE 952 PV UUID ZSAzP5-iBvr-L7jy-wB8T-AiWz-0g3m-HLK66Y :~# lvdisplay --- Logical volume --- LV Name /dev/systemlvm/home VG Name systemlvm LV UUID YXrfdg-5OSY-DVkN-eiQe-Qksg-CI84-9Z2hx8 LV Write Access read/write LV Status available # open 2 LV Size 17,00 GB Current LE 136 Segments 4 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:2 --- Logical volume --- LV Name /dev/systemlvm/var VG Name systemlvm LV UUID 25N7CR-ZpUM-zR18-NfS6-zeSe-AVnV-T98LuU LV Write Access read/write LV Status available # open 2 LV Size 96,00 GB Current LE 768 Segments 7 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:0 --- Logical volume --- LV Name /dev/systemlvm/usr VG Name systemlvm LV UUID 3TpFXt-LjYG-Ewn7-9IdX-sSCZ-Pl8A-xmqbmQ LV Write Access read/write LV Status available # open 2 LV Size 5,00 GB Current LE 40 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:1 --- Logical volume --- LV Name /dev/systemlvm/tmp VG Name 
systemlvm LV UUID c5MJ4K-olev-Mjt8-5PPB-rQuR-TkXb-x6NvTi LV Write Access read/write LV Status available # open 2 LV Size 1,00 GB Current LE 8 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:3
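    For reference, the manual recovery described at the top of the question boils down to roughly the following commands (device and VG names as in the question; vgchange -an deactivates all LVs in the VG in one go, in place of the per-LV lvchange -an calls mentioned above). This only mirrors the workaround, not a root-cause fix, and assumes a rescue shell with the filesystems unmounted:

        # Deactivate the LVs that were activated on top of /dev/sdb6 directly
        vgchange -an systemlvm

        # Put the partition back into the degraded mirror and let it resync
        mdadm /dev/md2 --add /dev/sdb6

        # Re-scan and re-activate the volume group, now on top of /dev/md2
        pvscan
        vgchange -ay systemlvm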

    Read the article

  • LSI 9285-8e and Supermicro SC837E26-RJBOD1 duplicate enclosure ID and slot numbers

    - by Andy Shinn
    I am working with 2 x Supermicro SC837E26-RJBOD1 chassis connected to a single LSI 9285-8e card in a Supermicro 1U host. There are 28 drives in each chassis for a total of 56 drives in 28 RAID1 mirrors. The problem I am running in to is that there are duplicate slots for the 2 chassis (the slots list twice and only go from 0 to 27). All the drives also show the same enclosure ID (ID 36). However, MegaCLI -encinfo lists the 2 enclosures correctly (ID 36 and ID 65). My question is, why would this happen? Is there an option I am missing to use 2 enclosures effectively? This is blocking me rebuilding a drive that failed in slot 11 since I can only specify enclosure and slot as parameters to replace a drive. When I do this, it picks the wrong slot 11 (device ID 46 instead of device ID 19). Adapter #1 is the LSI 9285-8e, adapter #0 (which I removed due to space limitations) is the onboard LSI. Adapter information: Adapter #1 ============================================================================== Versions ================ Product Name : LSI MegaRAID SAS 9285-8e Serial No : SV12704804 FW Package Build: 23.1.1-0004 Mfg. Data ================ Mfg. Date : 06/30/11 Rework Date : 00/00/00 Revision No : 00A Battery FRU : N/A Image Versions in Flash: ================ BIOS Version : 5.25.00_4.11.05.00_0x05040000 WebBIOS Version : 6.1-20-e_20-Rel Preboot CLI Version: 05.01-04:#%00001 FW Version : 3.140.15-1320 NVDATA Version : 2.1106.03-0051 Boot Block Version : 2.04.00.00-0001 BOOT Version : 06.253.57.219 Pending Images in Flash ================ None PCI Info ================ Vendor Id : 1000 Device Id : 005b SubVendorId : 1000 SubDeviceId : 9285 Host Interface : PCIE ChipRevision : B0 Number of Frontend Port: 0 Device Interface : PCIE Number of Backend Port: 8 Port : Address 0 5003048000ee8e7f 1 5003048000ee8a7f 2 0000000000000000 3 0000000000000000 4 0000000000000000 5 0000000000000000 6 0000000000000000 7 0000000000000000 HW Configuration ================ SAS Address : 500605b0038f9210 BBU : Present Alarm : Present NVRAM : Present Serial Debugger : Present Memory : Present Flash : Present Memory Size : 1024MB TPM : Absent On board Expander: Absent Upgrade Key : Absent Temperature sensor for ROC : Present Temperature sensor for controller : Absent ROC temperature : 70 degree Celcius Settings ================ Current Time : 18:24:36 3/13, 2012 Predictive Fail Poll Interval : 300sec Interrupt Throttle Active Count : 16 Interrupt Throttle Completion : 50us Rebuild Rate : 30% PR Rate : 30% BGI Rate : 30% Check Consistency Rate : 30% Reconstruction Rate : 30% Cache Flush Interval : 4s Max Drives to Spinup at One Time : 2 Delay Among Spinup Groups : 12s Physical Drive Coercion Mode : Disabled Cluster Mode : Disabled Alarm : Enabled Auto Rebuild : Enabled Battery Warning : Enabled Ecc Bucket Size : 15 Ecc Bucket Leak Rate : 1440 Minutes Restore HotSpare on Insertion : Disabled Expose Enclosure Devices : Enabled Maintain PD Fail History : Enabled Host Request Reordering : Enabled Auto Detect BackPlane Enabled : SGPIO/i2c SEP Load Balance Mode : Auto Use FDE Only : No Security Key Assigned : No Security Key Failed : No Security Key Not Backedup : No Default LD PowerSave Policy : Controller Defined Maximum number of direct attached drives to spin up in 1 min : 10 Any Offline VD Cache Preserved : No Allow Boot with Preserved Cache : No Disable Online Controller Reset : No PFK in NVRAM : No Use disk activity for locate : No Capabilities ================ RAID Level Supported : RAID0, RAID1, RAID5, RAID6, 
RAID00, RAID10, RAID50, RAID60, PRL 11, PRL 11 with spanning, SRL 3 supported, PRL11-RLQ0 DDF layout with no span, PRL11-RLQ0 DDF layout with span Supported Drives : SAS, SATA Allowed Mixing: Mix in Enclosure Allowed Mix of SAS/SATA of HDD type in VD Allowed Status ================ ECC Bucket Count : 0 Limitations ================ Max Arms Per VD : 32 Max Spans Per VD : 8 Max Arrays : 128 Max Number of VDs : 64 Max Parallel Commands : 1008 Max SGE Count : 60 Max Data Transfer Size : 8192 sectors Max Strips PerIO : 42 Max LD per array : 16 Min Strip Size : 8 KB Max Strip Size : 1.0 MB Max Configurable CacheCade Size: 0 GB Current Size of CacheCade : 0 GB Current Size of FW Cache : 887 MB Device Present ================ Virtual Drives : 28 Degraded : 0 Offline : 0 Physical Devices : 59 Disks : 56 Critical Disks : 0 Failed Disks : 0 Supported Adapter Operations ================ Rebuild Rate : Yes CC Rate : Yes BGI Rate : Yes Reconstruct Rate : Yes Patrol Read Rate : Yes Alarm Control : Yes Cluster Support : No BBU : No Spanning : Yes Dedicated Hot Spare : Yes Revertible Hot Spares : Yes Foreign Config Import : Yes Self Diagnostic : Yes Allow Mixed Redundancy on Array : No Global Hot Spares : Yes Deny SCSI Passthrough : No Deny SMP Passthrough : No Deny STP Passthrough : No Support Security : No Snapshot Enabled : No Support the OCE without adding drives : Yes Support PFK : Yes Support PI : No Support Boot Time PFK Change : Yes Disable Online PFK Change : No PFK TrailTime Remaining : 0 days 0 hours Support Shield State : Yes Block SSD Write Disk Cache Change: Yes Supported VD Operations ================ Read Policy : Yes Write Policy : Yes IO Policy : Yes Access Policy : Yes Disk Cache Policy : Yes Reconstruction : Yes Deny Locate : No Deny CC : No Allow Ctrl Encryption: No Enable LDBBM : No Support Breakmirror : No Power Savings : Yes Supported PD Operations ================ Force Online : Yes Force Offline : Yes Force Rebuild : Yes Deny Force Failed : No Deny Force Good/Bad : No Deny Missing Replace : No Deny Clear : No Deny Locate : No Support Temperature : Yes Disable Copyback : No Enable JBOD : No Enable Copyback on SMART : No Enable Copyback to SSD on SMART Error : Yes Enable SSD Patrol Read : No PR Correct Unconfigured Areas : Yes Enable Spin Down of UnConfigured Drives : Yes Disable Spin Down of hot spares : No Spin Down time : 30 T10 Power State : Yes Error Counters ================ Memory Correctable Errors : 0 Memory Uncorrectable Errors : 0 Cluster Information ================ Cluster Permitted : No Cluster Active : No Default Settings ================ Phy Polarity : 0 Phy PolaritySplit : 0 Background Rate : 30 Strip Size : 64kB Flush Time : 4 seconds Write Policy : WB Read Policy : Adaptive Cache When BBU Bad : Disabled Cached IO : No SMART Mode : Mode 6 Alarm Disable : Yes Coercion Mode : None ZCR Config : Unknown Dirty LED Shows Drive Activity : No BIOS Continue on Error : No Spin Down Mode : None Allowed Device Type : SAS/SATA Mix Allow Mix in Enclosure : Yes Allow HDD SAS/SATA Mix in VD : Yes Allow SSD SAS/SATA Mix in VD : No Allow HDD/SSD Mix in VD : No Allow SATA in Cluster : No Max Chained Enclosures : 16 Disable Ctrl-R : Yes Enable Web BIOS : Yes Direct PD Mapping : No BIOS Enumerate VDs : Yes Restore Hot Spare on Insertion : No Expose Enclosure Devices : Yes Maintain PD Fail History : Yes Disable Puncturing : No Zero Based Enclosure Enumeration : No PreBoot CLI Enabled : Yes LED Show Drive Activity : Yes Cluster Disable : Yes SAS Disable : No Auto Detect BackPlane Enable 
: SGPIO/i2c SEP Use FDE Only : No Enable Led Header : No Delay during POST : 0 EnableCrashDump : No Disable Online Controller Reset : No EnableLDBBM : No Un-Certified Hard Disk Drives : Allow Treat Single span R1E as R10 : No Max LD per array : 16 Power Saving option : Don't Auto spin down Configured Drives Max power savings option is not allowed for LDs. Only T10 power conditions are to be used. Default spin down time in minutes: 30 Enable JBOD : No TTY Log In Flash : No Auto Enhanced Import : No BreakMirror RAID Support : No Disable Join Mirror : No Enable Shield State : Yes Time taken to detect CME : 60s Exit Code: 0x00 Enclosure information: # /opt/MegaRAID/MegaCli/MegaCli64 -encinfo -a1 Number of enclosures on adapter 1 -- 3 Enclosure 0: Device ID : 36 Number of Slots : 28 Number of Power Supplies : 2 Number of Fans : 3 Number of Temperature Sensors : 1 Number of Alarms : 1 Number of SIM Modules : 0 Number of Physical Drives : 28 Status : Normal Position : 1 Connector Name : Port B Enclosure type : SES VendorId is LSI CORP and Product Id is SAS2X36 VendorID and Product ID didnt match FRU Part Number : N/A Enclosure Serial Number : N/A ESM Serial Number : N/A Enclosure Zoning Mode : N/A Partner Device Id : 65 Inquiry data : Vendor Identification : LSI CORP Product Identification : SAS2X36 Product Revision Level : 0718 Vendor Specific : x36-55.7.24.1 Number of Voltage Sensors :2 Voltage Sensor :0 Voltage Sensor Status :OK Voltage Value :5020 milli volts Voltage Sensor :1 Voltage Sensor Status :OK Voltage Value :11820 milli volts Number of Power Supplies : 2 Power Supply : 0 Power Supply Status : OK Power Supply : 1 Power Supply Status : OK Number of Fans : 3 Fan : 0 Fan Speed :Low Speed Fan Status : OK Fan : 1 Fan Speed :Low Speed Fan Status : OK Fan : 2 Fan Speed :Low Speed Fan Status : OK Number of Temperature Sensors : 1 Temp Sensor : 0 Temperature : 48 Temperature Sensor Status : OK Number of Chassis : 1 Chassis : 0 Chassis Status : OK Enclosure 1: Device ID : 65 Number of Slots : 28 Number of Power Supplies : 2 Number of Fans : 3 Number of Temperature Sensors : 1 Number of Alarms : 1 Number of SIM Modules : 0 Number of Physical Drives : 28 Status : Normal Position : 1 Connector Name : Port A Enclosure type : SES VendorId is LSI CORP and Product Id is SAS2X36 VendorID and Product ID didnt match FRU Part Number : N/A Enclosure Serial Number : N/A ESM Serial Number : N/A Enclosure Zoning Mode : N/A Partner Device Id : 36 Inquiry data : Vendor Identification : LSI CORP Product Identification : SAS2X36 Product Revision Level : 0718 Vendor Specific : x36-55.7.24.1 Number of Voltage Sensors :2 Voltage Sensor :0 Voltage Sensor Status :OK Voltage Value :5020 milli volts Voltage Sensor :1 Voltage Sensor Status :OK Voltage Value :11760 milli volts Number of Power Supplies : 2 Power Supply : 0 Power Supply Status : OK Power Supply : 1 Power Supply Status : OK Number of Fans : 3 Fan : 0 Fan Speed :Low Speed Fan Status : OK Fan : 1 Fan Speed :Low Speed Fan Status : OK Fan : 2 Fan Speed :Low Speed Fan Status : OK Number of Temperature Sensors : 1 Temp Sensor : 0 Temperature : 47 Temperature Sensor Status : OK Number of Chassis : 1 Chassis : 0 Chassis Status : OK Enclosure 2: Device ID : 252 Number of Slots : 8 Number of Power Supplies : 0 Number of Fans : 0 Number of Temperature Sensors : 0 Number of Alarms : 0 Number of SIM Modules : 1 Number of Physical Drives : 0 Status : Normal Position : 1 Connector Name : Unavailable Enclosure type : SGPIO Failed in first Inquiry commnad FRU Part Number : 
N/A Enclosure Serial Number : N/A ESM Serial Number : N/A Enclosure Zoning Mode : N/A Partner Device Id : Unavailable Inquiry data : Vendor Identification : LSI Product Identification : SGPIO Product Revision Level : N/A Vendor Specific : Exit Code: 0x00 Now, notice that each slot 11 device shows an enclosure ID of 36, I think this is where the discrepancy happens. One should be 36. But the other should be on enclosure 65. Drives in slot 11: Enclosure Device ID: 36 Slot Number: 11 Drive's postion: DiskGroup: 5, Span: 0, Arm: 1 Enclosure position: 0 Device Id: 48 WWN: Sequence Number: 11 Media Error Count: 0 Other Error Count: 0 Predictive Failure Count: 0 Last Predictive Failure Event Seq Number: 0 PD Type: SATA Raw Size: 2.728 TB [0x15d50a3b0 Sectors] Non Coerced Size: 2.728 TB [0x15d40a3b0 Sectors] Coerced Size: 2.728 TB [0x15d400000 Sectors] Firmware state: Online, Spun Up Is Commissioned Spare : YES Device Firmware Level: A5C0 Shield Counter: 0 Successful diagnostics completion on : N/A SAS Address(0): 0x5003048000ee8a53 Connected Port Number: 1(path0) Inquiry Data: MJ1311YNG6YYXAHitachi HDS5C3030ALA630 MEAOA5C0 FDE Enable: Disable Secured: Unsecured Locked: Unlocked Needs EKM Attention: No Foreign State: None Device Speed: 6.0Gb/s Link Speed: 6.0Gb/s Media Type: Hard Disk Device Drive Temperature :30C (86.00 F) PI Eligibility: No Drive is formatted for PI information: No PI: No PI Drive's write cache : Disabled Drive's NCQ setting : Enabled Port-0 : Port status: Active Port's Linkspeed: 6.0Gb/s Drive has flagged a S.M.A.R.T alert : No Enclosure Device ID: 36 Slot Number: 11 Drive's postion: DiskGroup: 19, Span: 0, Arm: 1 Enclosure position: 0 Device Id: 19 WWN: Sequence Number: 4 Media Error Count: 0 Other Error Count: 0 Predictive Failure Count: 0 Last Predictive Failure Event Seq Number: 0 PD Type: SATA Raw Size: 2.728 TB [0x15d50a3b0 Sectors] Non Coerced Size: 2.728 TB [0x15d40a3b0 Sectors] Coerced Size: 2.728 TB [0x15d400000 Sectors] Firmware state: Online, Spun Up Is Commissioned Spare : NO Device Firmware Level: A580 Shield Counter: 0 Successful diagnostics completion on : N/A SAS Address(0): 0x5003048000ee8e53 Connected Port Number: 0(path0) Inquiry Data: MJ1313YNG1VA5CHitachi HDS5C3030ALA630 MEAOA580 FDE Enable: Disable Secured: Unsecured Locked: Unlocked Needs EKM Attention: No Foreign State: None Device Speed: 6.0Gb/s Link Speed: 6.0Gb/s Media Type: Hard Disk Device Drive Temperature :30C (86.00 F) PI Eligibility: No Drive is formatted for PI information: No PI: No PI Drive's write cache : Disabled Drive's NCQ setting : Enabled Port-0 : Port status: Active Port's Linkspeed: 6.0Gb/s Drive has flagged a S.M.A.R.T alert : No Update 06/28/12: I finally have some new information about (what we think) the root cause of this problem so I thought I would share. After getting in contact with a very knowledgeable Supermicro tech, they provided us with a tool called Xflash (doesn't appear to be readily available on their FTP). When we gathered some information using this utility, my colleague found something very strange: root@mogile2 test]# ./xflash.dat -i get avail Initializing Interface. Expander: SAS2X36 (SAS2x36) 1) SAS2X36 (SAS2x36) (50030480:00EE917F) (0.0.0.0) 2) SAS2X36 (SAS2x36) (50030480:00E9D67F) (0.0.0.0) 3) SAS2X36 (SAS2x36) (50030480:0112D97F) (0.0.0.0) This lists the connected enclosures. You see the 3 connected (we have since added a 3rd and a 4th which is not yet showing up) with their respective SAS address / WWN (50030480:00EE917F). 
Now we can use this address to get information on the individual enclosures:

[root@mogile2 test]# ./xflash.dat -i 5003048000EE917F get exp
Initializing Interface.
Expander: SAS2X36 (SAS2x36)
Reading the expander information..........
Expander: SAS2X36 (SAS2x36) B3
SAS Address: 50030480:00EE917F
Enclosure Logical Id: 50030480:0000007F
IP Address: 0.0.0.0
Component Identifier: 0x0223
Component Revision: 0x05

[root@mogile2 test]# ./xflash.dat -i 5003048000E9D67F get exp
Initializing Interface.
Expander: SAS2X36 (SAS2x36)
Reading the expander information..........
Expander: SAS2X36 (SAS2x36) B3
SAS Address: 50030480:00E9D67F
Enclosure Logical Id: 50030480:0000007F
IP Address: 0.0.0.0
Component Identifier: 0x0223
Component Revision: 0x05

[root@mogile2 test]# ./xflash.dat -i 500304800112D97F get exp
Initializing Interface.
Expander: SAS2X36 (SAS2x36)
Reading the expander information..........
Expander: SAS2X36 (SAS2x36) B3
SAS Address: 50030480:0112D97F
Enclosure Logical Id: 50030480:0112D97F
IP Address: 0.0.0.0
Component Identifier: 0x0223
Component Revision: 0x05

Did you catch it? The logical IDs of the first two enclosures are partially masked out, while the third one has a correct, unique logical ID. We pointed this out to Supermicro and were able to confirm that this address is supposed to be set during manufacturing, and that there was a problem with a certain batch of these enclosures where the logical ID was not set. We believe that the RAID controller determines the enclosure ID from the logical ID, and since our first two enclosures share the same logical ID, they get the same enclosure ID. We also confirmed that 0000007F is the default ID that comes from LSI.

Another pointer suggesting this is a manufacturing problem with a run of JBODs is that all six of the enclosures showing this problem have SAS addresses beginning with 00E. I believe that between 00E8 and 00EE Supermicro forgot to program the logical IDs correctly and neglected to recall or fix the problem post production. Fortunately for us, Supermicro provides a tool to manage the WWN and logical ID of these devices: ftp://ftp.supermicro.com/utility/ExpanderXtools_Lite/. Our next step is to schedule a shutdown of these JBODs (after data migration), reprogram the logical IDs and see if that solves the problem.

Update 06/28/12 #2: I just discovered this FAQ at Supermicro while Google searching for "lsi 0000007f": http://www.supermicro.com/support/faqs/faq.cfm?faq=11805. I still don't understand why, in the several times we contacted Supermicro, they never directed us to this article.
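
For anyone who wants to check their own expanders for this, below is a minimal Python sketch that drives the same xflash.dat commands shown above and flags expanders that report the same Enclosure Logical Id. It assumes xflash.dat sits in the current directory and prints output in the format shown here; the parsing is based only on that sample output.

#!/usr/bin/env python
# Minimal sketch: flag SAS expanders that share the same enclosure logical ID.
# Assumes ./xflash.dat is present and prints output in the format shown above;
# the regular expressions below are based on that sample output only.
import re
import subprocess
from collections import defaultdict

def run(args):
    """Run xflash.dat with the given arguments and return its stdout as text."""
    return subprocess.check_output(["./xflash.dat"] + args, universal_newlines=True)

def list_expander_addresses():
    """Parse 'get avail' output for the SAS address of each connected expander."""
    out = run(["-i", "get", "avail"])
    # Addresses print as e.g. (50030480:00EE917F); drop the colon for the -i argument.
    return [a.replace(":", "") for a in re.findall(r"\((\w{8}:\w{8})\)", out)]

def enclosure_logical_id(address):
    """Parse 'get exp' output for one expander and return its enclosure logical ID."""
    out = run(["-i", address, "get", "exp"])
    match = re.search(r"Enclosure Logical Id:\s*([0-9A-Fa-f:]+)", out)
    return match.group(1) if match else None

def main():
    by_logical_id = defaultdict(list)
    for address in list_expander_addresses():
        by_logical_id[enclosure_logical_id(address)].append(address)
    for logical_id, addresses in by_logical_id.items():
        if len(addresses) > 1:
            print("Duplicate enclosure logical ID %s shared by: %s"
                  % (logical_id, ", ".join(addresses)))

if __name__ == "__main__":
    main()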

    Read the article

  • Adding A Custom Dropdown in RCDC for Forefront Identity Manager 2010

    - by Daniel Lackey
    My latest exploration has been FIM 2010 for Identity Management. The following is a post on how to add a custom dropdown to the FIM Portal. I have decided to document this as I cannot find documentation on how to do it anywhere else. I hope others find it useful.

For starters, this was not an easy task for me to figure out. I really would like to know why it is so cumbersome to do something that it seems a lot of people would need to do, but that’s for another day.

The dropdown I wanted to add was for ‘Account Status’, which would display whether the account is ‘Enabled’ or ‘Disabled’ in the data source Active Directory. This option would also allow helpdesk users or admins to administer the userAccountControl attribute in AD from the FIM Portal interface.

The first thing I had to do was create the attribute itself. This is done by going to Administration → Schema Management from the FIM 2010 portal. Once here, you click on All Attributes. What is listed here are all attributes and their associated Resource Types in FIM. To create the ‘AccountStatus’ attribute, click on New. As shown below, enter ‘AccountStatus’ with no spaces for the System Name and ‘Account Status’ for the Display Name. The Data Type is going to be ‘Indexed String’. Click Next.

Leave everything on the Localization tab default and click Next.

On the Validation tab as shown below, we will enter the regex expression ^(Enabled|Disabled)?$ with our two desired string values ‘Enabled’ and ‘Disabled’. Click on Finish and then Submit to complete adding the attribute.

The next step involves associating the attribute with a resource type. This is called ‘Binding’ the attribute. From the Schema Management page, click on All Bindings. From the page that comes up, click on New. As shown below, enter ‘User’ for the Resource Type and ‘Account Status’ for the Attribute Type. This essentially binds the Account Status attribute to the ‘User’ Resource Type. Click Next.

On the ‘Attribute Override’ tab, type in ‘Account Status’ for the Display Name field. Click Next.

On the ‘Localization’ tab, click Next.

On the ‘Validation’ tab, enter the regex expression ^(Enabled|Disabled)?$ we entered previously for the attribute. Click Finish and then Submit to complete.

Now that the Attribute and the Binding are complete, you have to give users permission to see the attribute on the User Edit page. Go to Administration → Management Policy Rules. Look for the rule named Administration: Administrators can read and update Users and click on it. Once it opens, click on the ‘Target Resources’ tab and look at the section named Resource Attributes. Add the ‘Account Status’ attribute at the end and check it with the validator. Once done, click on OK to save the changes.

Lastly, we need to add the actual dropdown control to the RCDC (Resource Control Display Configuration) for User Editing. Go to Administration → Resource Control Display Configuration. From here navigate until you find the RCDC named Configuration for User Editing and click on it. The following is what you will see:

The first step is to export the Configuration Data file. Click on the Export configuration link and save the file to your desktop or another folder.

Find the file you just exported and open it in your XML editor of choice. I use Notepad but anything will work. Since we are adding a dropdown control, first find another control in the existing file that is already a dropdown in FIM.
I used EmployeeType as my example. Copy the control from the beginning tag named <my:Control… to the ending tag </my:Control>. Now take what you copied and paste it in whatever location you desire within the form between two other controls. I chose to place the ‘Account Status’ field after the ‘Account Name’ field. After you paste the control you will need to modify so it looks like this:       Notice where you specify what attribute you are dealing with where it has AccountStatus in the XML. Once you are complete with modifying this, save the file and make sure it is a .xml file.   Now go back to the Configuration for User Editing screen and look at the section named ‘Configuration Data’. Click the ‘Browse’ button and find the XML file you just modified and choose it. Click OK on the bottom of the window and you are done!   Now when you click on a user’s name in the FIM Portal, you should see the newly added dropdown box as below:       Later I will post more about this drop down, specifically on how to automate actually ‘Disabling’ the account in the data source through the FIM Workflows and MAs.   <my:Control my:Name="AccountStatus" my:TypeName="UocDropDownList" my:Caption="{Binding Source=schema, Path=AccountStatus.DisplayName}" my:Description="{Binding Source=schema, Path=AccountStatus.Description}" my:RightsLevel="{Binding Source=rights, Path=AccountStatus}"> <my:Properties> <my:Property my:Name="ValuePath" my:Value="Value"/> <my:Property my:Name="CaptionPath" my:Value="Caption"/> <my:Property my:Name="HintPath" my:Value="Hint"/> <my:Property my:Name="ItemSource" my:Value="{Binding Source=schema, Path=AccountStatus.LocalizedAllowedValues}"/> <my:Property my:Name="SelectedValue" my:Value="{Binding Source=object, Path=AccountStatus, Mode=TwoWay}"/> </my:Properties> </my:Control>
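
If you would rather script the copy-and-paste step than hand-edit the exported XML, here is a rough Python sketch using lxml (an assumption; any XML library would do). The file names are hypothetical, EmployeeType is the template control used in the walkthrough above, and the assumption that the neighbouring control is literally named AccountName should be verified against your own export.

# Minimal sketch of the manual "paste the control" step: copy an existing
# UocDropDownList control (EmployeeType) and re-insert it as AccountStatus.
# File names are hypothetical and the control names are assumptions taken
# from the walkthrough above - check them against your exported RCDC.
from copy import deepcopy
from lxml import etree

tree = etree.parse("Configuration_for_User_Editing.xml")
root = tree.getroot()
MY = root.nsmap["my"]  # resolve the 'my' prefix from the exported file itself

# Find the existing dropdown to use as a template (raises if not present).
template = next(c for c in root.iter("{%s}Control" % MY)
                if c.get("{%s}Name" % MY) == "EmployeeType")

# Clone it and point every binding at the AccountStatus attribute instead.
clone = deepcopy(template)
for element in clone.iter():
    for key, value in list(element.attrib.items()):
        element.set(key, value.replace("EmployeeType", "AccountStatus"))

# Place the clone right after the Account Name control, as in the walkthrough.
anchor = next(c for c in root.iter("{%s}Control" % MY)
              if c.get("{%s}Name" % MY) == "AccountName")
anchor.addnext(clone)

tree.write("Configuration_for_User_Editing_modified.xml",
           xml_declaration=True, encoding="utf-8", pretty_print=True)

The resulting file is what you would then upload back through the Browse button on the Configuration for User Editing screen, exactly as described above.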

    Read the article

  • 14+ Real Estate WordPress Themes

    - by Aditi
    If you are looking for a great WordPress real estate theme. Below is a list of some of the best wordpress real estate themes, so you can find one, which is the best suited for you and be at par with increasing industry demands in real estates business.We have covered only the best themes available. The Themes are flexible & can be used by anybody in real estate business. If you are realtor, agent, appraiser or realty these can be modified as per your use. Estate It is an immensely powerful and simple to manage business theme. It offers advanced SEO control, clean code and styling modification features. It has new “Properties” management facility when installed – proving it’s far more than just a WordPress theme. It offers flexible page templates, an advanced search facility that allows you to drill down into properties based on very specific criteria, Google Maps integration and smart property images management. It is a complete web solution. It also has IDX functionality due to dsIDXpress plugin integration, which allows multi-listing services. Price: $200 View Demo Download ElegantEstate It makes your WordPress blog into a full-feature real estate website. The theme makes browsing your listings easy, and adds special integration features for property info, photos, Google Maps and more. Help increase sales by establishing an elegant and professional online presence today. It has opera compatibility, Netscape compatibility, Safari compatibility, WordPress 3.0 compatibility. It comes with five color schemes, threaded comments, optional blog-style structure, Gravatar ready, firefox compatible, IE8 + IE7 + IE6 compatible, advertisement ready, widget ready sidebars, theme options page, custom thumbnail images, PSD files, valid XHTML + CSS, smooth table less design, ePanel theme options, page templates, complete localization and many more features. Price: $39 (Package includes more than 55 themes) View Demo Download Open House Open House is fully compatible with WordPress 3.0+ and a highly customizable Real Estate WordPress theme. It has Google Maps Integration with Street View. It has a professional look for Agents and Realtors both. It is best suited for all markets and countries with theme localization, translation and internationalization. It provides for English, Spanish and Portuguese language files in the Developer Package. It has custom scripts, which makes it easy to add/delete/modify listings. It also includes photo gallery with a lightbox effect, gorgeous photo fade animations and automatic Google Maps integration. The theme can be used as a single or multi-agent website with individual Agent-Realtor pages with listings and biography information, Agent photo uploader, financing calculator.There is Multi Category search for potential customers to locate the house they want. Price: $39.95 essential | $69.95 standard | $99.95 premium View Demo Download Residence Real Estate It is a WordPress 3.0+ compatible stunning real estate theme. It has a dynamic real estate framework management module for easy edit-delete-add more features options, which makes this theme super easy to customize to the market needs. It allows you to add your own labels and values in your own language and switch the theme to your own language with English and Spanish files included with the ability to add your own language. It offers Multi-Category search with breadcrumb filtered results, easy photo gallery management with drag-drop sorting of images. 
It allows you to build your own multi-category search section menu with custom labels-choices and unlimited dropdown menus. They have been presented in a professional module with search results in breadcrumb navigation. Price: $39.95 essential | $69.95 standard | $99.95 premium View Demo Download Smooth Smooth is a WordPress Real Estate theme. It is a complete theme, which comes with Multi Category Search, Google Maps Integration, Agent Photo and Logo uploader that offers a professional and extremely affordable solution for Realtors and Agents to showcase their properties with ease. You can add your listings with the extremely easy and flexible Dynamic Real Estate Framework, edit-add-modify-delete all features, labels and values within the WordPress administration and upload unlimited photos to your galleries with latest WordPress 3.0+ features. It is a complete solution for real estate sites. Price: $39.95 essential | $69.95 standard | $99.95 premium View Demo Download Homeowners It is another WordPress Real Estate theme, which is a fast loading optimized theme with Google Maps Integration, fully compatible with WordPress 3.0 features and all Real Estate markets. It has a professional clean look and it is full of features extremely easy to modify. It also provides for 12 new styles provided. English, Spanish and Portuguese language files are provided in the Developer Package. Homeowners WordPress Real Estate features custom scripts that make add/delete/modify listings an easy task with an included photo gallery with a lightbox effect and automatic Google Map integration with street view (New) Agents will have access only to their own listings and add the listing management for their account making this theme an ideal affordable solution for Realtors and Real Estate agencies. The theme can be used as a single or multi-agent website with individual Agent-Realtor pages with listings and biography information, Agent photo uploader, financing calculator. Multi category search has also been provided. Price: $39.95 essential | $69.95 standard | $99.95 premium View Demo Download Real Agent Real Estate This theme is a WordPress 3.0+ compatible clean grid based real estate theme. It has a dynamic real estate framework management module for easy edit-delete-add more features options. It is easy to customize according to market. It allows you to add your own labels and values in your own language switch the theme to your own language with English and Spanish files included with the ability to add your own language. Multi-Category search with breadcrumb filtered results, easy photo gallery management with drag-drop sorting of images. You can upload property photos in bulk with the native WordPress uploader and the new image editing and resizing options in WordPress 3.0+. The theme features 5 different color styles, blue, black, red, green and purple with professional layouts, logo and agent photo uploaders. This theme is best suited for individual or multiple agents both. Price: $39.95 essential | $69.95 standard | $99.95 premium View Demo Download Agent Press The AgentPress theme is an ideal solution for real estate agents. It offers multiple page templates that can be used to create a complete real estate website. You can create from single property templates to a custom homepage easily with it. It is compatible to WordPress 3.0 and 3.1. It has custom background/header, property template, 6 layout options, fixed width, threaded comments and many more features. 
Price: $99.95 View Demo Download Real Estate It is one of the best Real Estate themes. It offers single click auto install of the site, Allow user to pay & submit properties on your site, Multi-agent site with profiles, Strategically built real estate site with professional design, User dashboard to edit/renew their submissions, Auto generated Google Maps and Image Slideshows and many more unique features. Once the users search property as per their criteria, the properties are listed with all the necessary parameters that let them select the property of their choice. Users can also add the property to favorite so they can check the property later from their member area dashboard. Admin may display different sidebar on this page and add widgets of their choice. This theme is full of custom, dynamic widgets such as top agents, finance calculator, user login; advertise blocks, testimonials and so on. There is a property details page where users can see the actual property. The agent details is displayed with the full contact details and appropriate links so the visitor can get all info about the property being sold, seller and may contact them by filling out a simple form. The email will be sent directly to the person who listed the property. Price: $89.95 Single | $159.95 Developer View Demo Download Broker Real Estate It is also a WordPress 3.0+ compatible real estate theme. It has a featured property slideshow, dynamic real estate framework management module for easy edit-delete-add more features. You can add your own labels and values in your own language. It offers multi-category search with breadcrumb-filtered results, easy photo gallery management with drag-drop sorting of images. You can also build your own multi-category search section menu with custom labels-choices and unlimited dropdown menus. Price: $39.95 essential | $69.95 standard | $99.95 premium View Demo Download Decasa It has custom search panel that lets your user easily browse your properties by keyword search or category select drop downs. It offers the property exposé, which is a user-friendly overview over the most important details of each real estate object. You can easily add this data through a post settings meta box on the post edit screen. You can easily create a real estate image gallery. Its theme options panel makes it easy to make the basic theme settings. It supports the new WordPress post thumbnail feature. When uploading an image file the theme will automatically create all the necessary image size. You can also create your own custom menu easily and fast with drag and drop without touching any code. Price: 39 € View Demo Download RealtorPress A real estate premium WordPress theme from PremiumPress. Versatile WordPress Theme that can be used by individual agents or real estate companies. The theme allows you to easily add property listings via the custom backend admin area or import CSV spreadsheets. It features customisable search options, Google maps integration, real estate data custom field creator, image management tools and more. Price: $79 | Premium Collection: $259 (all PremiumPress themes) View Demo Download Related posts:21+ WordPress Photo Blog & Portfolio Themes 14+ WordPress Portfolio Themes Professional WordPress Business Themes

    Read the article

  • CascadingDropDown jQuery Plugin for ASP.NET MVC

    - by rajbk
    CascadingDropDown is a jQuery plugin that can be used by a select list to get automatic population using AJAX. A sample ASP.NET MVC project is attached at the bottom of this post.   Usage The code below shows two select lists : <select id="customerID" name="customerID"> <option value="ALFKI">Maria Anders</option> <option value="ANATR">Ana Trujillo</option> <option value="ANTON">Antonio Moreno</option> </select>   <select id="orderID" name="orderID"> </select> When a customer is selected in the first select list, the second list will auto populate itself with the following code: $("#orderID").CascadingDropDown("#customerID", '/Sales/AsyncOrders'); Internally, an AJAX post is made to ‘/Sales/AsyncOrders’ with the post body containing  customerID=[selectedCustomerID]. This executes the action AsyncOrders on the SalesController with signature AsyncOrders(string customerID).  The AsyncOrders method returns JSON which is then used to populate the select list. The JSON format expected is shown below : [{ "Text": "John", "Value": "10326" }, { "Text": "Jane", "Value": "10801" }] Details $(targetID).CascadingDropDown(sourceID, url, settings) targetID The ID of the select list that will auto populate.  sourceID The ID of the select list, which, on change, causes the targetID to auto populate. url The url to post to Options promptText Text for the first item in the select list Default : -- Select -- loadingText Optional text to display in the select list while it is being loaded. Default : Loading.. errorText Optional text to display if an error occurs while populating the list Default: Error loading data. postData Data you want posted to the url in place of the default Example : { postData : { customerID : $(‘#custID’), orderID : $(‘#orderID’) }} will cause customerID=ALFKI&orderID=2343 to be sent as the POST body. Default: A text string obtained by calling serialize on the sourceID onLoading (event) Raised before the list is populated. onLoaded (event) Raised after the list is populated, The code below shows how to “animate” the  select list after load. Example using custom options: $("#orderID").CascadingDropDown("#customerID", '/Sales/AsyncOrders', { promptText: '-- Pick an Order--', onLoading: function () { $(this).css("background-color", "#ff3"); }, onLoaded: function () { $(this).animate({ backgroundColor: '#ffffff' }, 300); } }); To return JSON from our action method, we use the Json ActionResult passing in an IEnumerable<SelectListItem>. public ActionResult AsyncOrders(string customerID) { var orders = repository.GetOrders(customerID).ToList().Select(a => new SelectListItem() { Text = a.OrderDate.HasValue ? a.OrderDate.Value.ToString("MM/dd/yyyy") : "[ No Date ]", Value = a.OrderID.ToString(), }); return Json(orders); } Sample Project using VS 2010 RTM NorthwindCascading.zip

    Read the article

  • HSDPA modem only working on certain USB ports

    - by nabulke
    Depending on which USB port I use to connect my HSDPA modem, the network manager will connect to the internet or not. I used to work (i.e. established a internet connection automatically) on all ports, but over time it simply stopped on some ports. lsusb output in all cases looks like that (Device ID varies depending on USB port): Bus 001 Device 009: ID 12d1:1003 Huawei Technologies Co., Ltd. E220 HSDPA Modem / E270 HSDPA/HSUPA Modem Any ideas what could cause this behaviour? What can I do to fix this? ADDED One additional information about the modem: if connected via USB it will be available as as harddrive AND as a HSDPA modem (kind of a duality...). In the error case, it will only be shown as a harddrive. ADDITIONAL INFO AS REQUESTED MODEM NOT WORKING Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 002 Device 002: ID 413c:8000 Dell Computer Corp. BC02 Bluetooth Adapter Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 007: ID 12d1:1003 Huawei Technologies Co., Ltd. E220 HSDPA Modem / E270 HSDPA/HSUPA Modem Bus 001 Device 005: ID 046d:c00c Logitech, Inc. Optical Wheel Mouse Bus 001 Device 004: ID 05e3:0608 Genesys Logic, Inc. USB-2.0 4-Port HUB Bus 001 Device 003: ID 413c:0058 Dell Computer Corp. Port Replicator Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub laptop:~$ dmesg | grep 'usb' [ 0.225371] usbcore: registered new interface driver usbfs [ 0.225387] usbcore: registered new interface driver hub [ 0.225418] usbcore: registered new device driver usb [ 0.504291] usb usb1: configuration #1 chosen from 1 choice [ 0.504767] usb usb2: configuration #1 chosen from 1 choice [ 0.505046] usb usb3: configuration #1 chosen from 1 choice [ 0.505601] usb usb4: configuration #1 chosen from 1 choice [ 1.061064] usb 1-6: new high speed USB device using ehci_hcd and address 3 [ 1.192636] usb 1-6: configuration #1 chosen from 1 choice [ 1.447006] usb 2-2: new full speed USB device using uhci_hcd and address 2 [ 1.634908] usb 2-2: configuration #1 chosen from 1 choice [ 1.708164] usb 1-6.1: new high speed USB device using ehci_hcd and address 4 [ 1.801668] usb 1-6.1: configuration #1 chosen from 1 choice [ 2.076279] usb 1-6.1.1: new low speed USB device using ehci_hcd and address 5 [ 2.174932] usb 1-6.1.1: configuration #1 chosen from 1 choice [ 6.580315] usb 1-6.1.2: new high speed USB device using ehci_hcd and address6 [ 6.683479] usb 1-6.1.2: configuration #1 chosen from 1 choice [ 20.018671] usbcore: registered new interface driver btusb [ 20.131703] usbcore: registered new interface driver usb-storage [ 20.131988] usb-storage: device found at 6 [ 20.131991] usb-storage: waiting for device to settle before scanning [ 20.207981] usb 1-6.1.2: USB disconnect, address 6 [ 20.291499] usbcore: registered new interface driver hiddev [ 20.297052] input: Logitech USB Mouse as /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6.1/1-6.1.1/1-6.1.1:1.0/input/input6 [ 20.297465] generic-usb 0003:046D:C00C.0001: input,hidraw0: USB HID v1.10 Mouse [Logitech USB Mouse] on usb-0000:00:1d.7-6.1.1/input0 [ 20.297534] usbcore: registered new interface driver usbhid [ 20.297803] usbhid: v2.6:USB HID core driver [ 26.552360] usb 1-6.1.2: new high speed USB device using ehci_hcd and address 7 [ 26.663506] usb 1-6.1.2: configuration #1 chosen from 1 choice [ 26.709628] usb-storage: device found at 7 [ 26.709631] usb-storage: waiting for device to settle before scanning [ 26.732387] usb-storage: device 
found at 7 [ 26.732390] usb-storage: waiting for device to settle before scanning [ 31.709568] usb-storage: device scan complete [ 31.733676] usb-storage: device scan complete MODEM WORKING Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 002: ID 046d:c00c Logitech, Inc. Optical Wheel Mouse Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 002 Device 002: ID 413c:8000 Dell Computer Corp. BC02 Bluetooth Adapter Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 004: ID 12d1:1003 Huawei Technologies Co., Ltd. E220 HSDPA Modem / E270 HSDPA/HSUPA Modem Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub dmesg | grep 'usb' [ 0.134811] usbcore: registered new interface driver usbfs [ 0.134826] usbcore: registered new interface driver hub [ 0.134858] usbcore: registered new device driver usb [ 0.360327] usb usb1: configuration #1 chosen from 1 choice [ 0.360783] usb usb2: configuration #1 chosen from 1 choice [ 0.361061] usb usb3: configuration #1 chosen from 1 choice [ 0.361611] usb usb4: configuration #1 chosen from 1 choice [ 1.144122] usb 2-2: new full speed USB device using uhci_hcd and address 2 [ 1.346896] usb 2-2: configuration #1 chosen from 1 choice [ 1.588072] usb 3-1: new low speed USB device using uhci_hcd and address 2 [ 1.761204] usb 3-1: configuration #1 chosen from 1 choice [ 5.972042] usb 1-1: new high speed USB device using ehci_hcd and address 4 [ 6.115438] usb 1-1: configuration #1 chosen from 1 choice [ 19.990565] usbcore: registered new interface driver usbserial [ 19.991429] usb-storage: device found at 4 [ 19.991432] usb-storage: waiting for device to settle before scanning [ 20.017260] usbcore: registered new interface driver usb-storage [ 20.017305] usbcore: registered new interface driver usbserial_generic [ 20.017308] usbserial: USB Serial Driver core [ 20.017817] usb-storage: device found at 4 [ 20.017820] usb-storage: waiting for device to settle before scanning [ 20.070796] usbcore: registered new interface driver btusb [ 20.229525] usb 1-1: GSM modem (1-port) converter now attached to ttyUSB0 [ 20.229776] usb 1-1: GSM modem (1-port) converter now attached to ttyUSB1 [ 20.229843] usbcore: registered new interface driver option [ 20.230396] usbcore: registered new interface driver hiddev [ 20.246280] input: Logitech USB Mouse as /devices/pci0000:00/0000:00:1d.1/usb3/3-1/3-1:1.0/input/input6 [ 20.246438] generic-usb 0003:046D:C00C.0001: input,hidraw0: USB HID v1.10 Mouse [Logitech USB Mouse] on usb-0000:00:1d.1-1/input0 [ 20.246479] usbcore: registered new interface driver usbhid [ 20.246483] usbhid: v2.6:USB HID core driver [ 25.436579] usb-storage: device scan complete [ 25.437674] usb-storage: device scan complete
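
As a quick way to tell which of the two states above you are in, here is a small Python sketch (a diagnostic only, not a fix): it checks whether the Huawei device 12d1:1003 is on the bus at all and whether the GSM serial ports (/dev/ttyUSB*) were created, which is the visible difference between the working and non-working dmesg traces.

#!/usr/bin/env python
# Minimal sketch: check whether the Huawei E220 (12d1:1003) came up as a modem
# (ttyUSB ports present) or only as USB storage, matching the two traces above.
import glob
import subprocess

HUAWEI_ID = "12d1:1003"

def device_present():
    """True if lsusb lists the Huawei E220/E270 on any bus."""
    out = subprocess.check_output(["lsusb"], universal_newlines=True)
    return HUAWEI_ID in out

def modem_ports():
    """GSM serial ports created by the usbserial/option driver, if any."""
    return sorted(glob.glob("/dev/ttyUSB*"))

def main():
    if not device_present():
        print("Huawei %s not seen by lsusb - try another port or cable." % HUAWEI_ID)
        return
    ports = modem_ports()
    if ports:
        print("Modem mode: serial ports %s are available." % ", ".join(ports))
    else:
        print("Device enumerated but no ttyUSB ports - on this port it only "
              "shows up as USB storage, as in the non-working trace.")

if __name__ == "__main__":
    main()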

    Read the article

  • Build Explorer version 1.1 for Visual Studio Team Explorer is released

    - by terje
    Our free extension to Visual Studio , the folder based Build Explorer Version 1.1 has now been released, and uploaded to the Visual Studio Gallery and Codeplex. We have collected up a few changes and some bugs, as follows: Changes: Queue Default Builds can now be optionally fully enabled, fully disabled or enabled just for leaf nodes (=disabled for folders).  If you got a large number of builds it was pretty scary to be able to launch all of them with just one click.  However, it is nice to avoid having the dialog box up when you want to just run off a single build.  That’s the reasoning between the 3rd choice here. Auto fill-in of the builds at start up and refresh  This was a request that came up a lot, and which was also irritating to us.  When the Team Project is opened, the Build explorer will start by itself and fill up it’s tree. So you don’t need to click the node anymore. There was also quite a bit of flashing when the tree filled up, this has been reduced to just a single top level fill before it collapses the node. The speed of the buildup of the tree has also been increased. The “All Build Definitions” node is now shown on top of the list Login box appeared in certain cross domain situations. This was a fix for the TF30063 authentication problem we had in the beginning.  Hopefully the new code has that fixed properly so that both the login box and the TF30063 are gone forever.  Our testing so far seems to indicate it works.  If anyone gets a real problem here there are two workarounds: 1) Turn off the auto refresh to reduce the issue. If this doesn’t fix it, then 2) please reinstall the former version (go to the codeplex download site if you don’t have it anymore)  Write a comment to this blog post with a description of what happens, and I will send a temporary fix asap. Bug fixes: The folder name matching was case sensitive, so “Application.CI” and “application.CI” created two different folders.  View all builds not shown for leaf odes, and view builds didn’t work in all cases.  There was some inconsistencies here which have been fixed. Partly fixed:  The context menu to queue a new build for disabled builds should be removed, but that was a difficult one, and is still on the list, but the command will not do anything for a disabled build. Using the Queue Default Builds on a folder, and if it had some disabled builds below an error box appeared and ruined the whole experience. As a result of these fixes there has been introduced some new options, as shown below:   The two first settings, the Separator symbol and the options for how to handle Queuing of default builds are set per Team Project, and is stored in the TFS source control under the BuildProcessTemplates folder, with the name Inmeta.VisualStudio.BuildExplorer.Settings.xml The next two settings need some explanations.  They handle the behavior for the auto update of the build folders.  First, these are stored in the local registry per user, at the key HKEY_CURRENT_USER/Software\Inmeta\BuildExplorer. The first option Use Timed Refresh at Startup, if turned off, you will need to click the node as it is done in Version 1.0.  The second option is a timed value, the time after the Build explorer node is created and until the scanning of the Build folders start.  It is assumed that this is enough, and the tests so far indicates this.  If you have very many builds and you see that the explorer don’t get them all, try to increase this value, and of course, notify me of your case, either here or on the Visual Gallery site.
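
If you want to see exactly what the extension has stored for the two refresh settings, they live under the registry key mentioned above. The value names are not documented in this post, so the small Python sketch below simply enumerates whatever is stored there rather than assuming any names.

# Minimal sketch: dump whatever Build Explorer has stored under
# HKEY_CURRENT_USER\Software\Inmeta\BuildExplorer (value names are not
# listed in the post, so we enumerate them instead of assuming any).
import winreg  # Python 3; use _winreg on Python 2

KEY_PATH = r"Software\Inmeta\BuildExplorer"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
    value_count = winreg.QueryInfoKey(key)[1]  # number of values under the key
    for index in range(value_count):
        name, value, value_type = winreg.EnumValue(key, index)
        print("%s = %r (type %d)" % (name, value, value_type))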

    Read the article

  • When to do code reviews when doing continuous integration?

    - by SpecialEd
    We are trying to switch to a continuous integration environment but are not sure when to do code reviews. From what I've read about continuous integration, we should be attempting to check in code as often as multiple times a day. I assume this even applies to features that are not yet complete. So the question is, when do we do the code reviews? We can't do them before we check in the code, because that would slow the process down to the point where we could no longer do daily check-ins, let alone multiple check-ins per day. Also, if the code we are checking in merely compiles but is not feature complete, doing a code review at that point is pointless, as most code reviews are best done when the feature is finalized. Does this mean we should do code reviews when a feature is completed, even though that means unreviewed code will get into the repository?

    Read the article

  • I can't find my backup for the cpanel in whm/cpanel

    - by CompilingCyborg
    The Problem: I have made a complete backup of the whole home folder from cPanel. I placed this backup in the default home directory. Thereafter, I tried to restore this file from WHM, but I couldn't find it. Does anyone know what causes such problems? Additional Details: I am the administrator of the cPanel account and I have complete access to the reseller WHM. More details are shown in the images below. Thanks in advance for your help! Any ideas or suggestions are greatly appreciated.

    Read the article

  • SOA Suite Integration: Part 2: A basic BPEL process

    - by Anthony Shorten
    This is the next in the series about SOA Suite integration with Oracle Utilities Application Framework. One of the first scenarios I am going to illustrate in this series is building a basic BPEL process using Web Service calls to the Oracle Utilities Application Framework. The scenario is this. I will pass in the userid and the BPEL process will call our the AS-User Web Service we created in Part 1. This is just a basic test and illustrate how to import the Web Service into SOA Suite. To use this scenario, you will need access to Oracle SOA Suite, access to a copy of any Oracle Utilities Application Framework based product and Oracle JDeveloper (to build the process). First of all you need to start Oracle JDeveloper and create a new SOA Project to house the BPEL process in. For the purposes of this example I will call the project simpleBPEL and verify that SOA is part of the project. I will select "Composite with BPEL" to denote it as a BPEL process. I can also the same process to create a Mediator or OSB project (refer to the JDeveloper documentation on these technologies). For this example I will use BPEL 1.1 as my specification standard (BPEL 2.0 can also be used if desired). I give the individual BPEL process as simpleBPEL (you can use a different name but I wanted to keep the project and process the same for this example). I will also build a Synchronous BPEL Process as I want a response from the Web Service. I will leave the defaults to save time. I have no have a blank canvas to build my BPEL process against. Note: for simplicity I am going to use as much defaulting as possible. In fact I am not going to specify an input schema for the incoming call as I will use the basic single field used by BPEL as default. The first step is to import the AS-User Web Service into my BPEL project. To do this I use the standard Web Service BPEL component from the Component Palette to import the WSDL into the BPEL project. Now the tricky part (a joke), you drag and drop the component from the Palette onto the right side of the canvas in the Partner Links swim lane. This swim lane is reserved for Partner Links that have a Partner Role (i.e. being called rather than calling). When you drop the Web Service onto the canvas the Create Web Service wizard is invoked to ask for details of the Web Service. At this point you give the BPEL node a name. I have used the name RetrieveUser as a name. I placed the WSDL URL from the XAI Inbound Service screen in the WSDL URL. Once you specify the URL you can press the Find existing WSDL's button to load the information into BPEL from the call. You will notice the Port Type is prefilled with the port from the WSDL. I also suggest that you check copy wsdl and it's dependent artifacts into the project if you intending to work on the BPEL process offline. If you do not check this your target application must be accessible when you work on the BPEL process (that is not always convenient). Note: For the perceptive of you will notice that the URL specified in this example is different to the URL in the last post. The reason is for the demonstrations I shifted to a new server and did not redo all of the past screen captures. If you copy the WSDL into the project you will get an information screen about Localize Files. It is just a confirmation screen. The last confirmation screen is a summary of the partner link (the main tab is locked for editing at this stage). At this stage you have successfully imported the Web Service. 
To complete the setup of the Web Service you need to set the credentials for the Web Service to use. Refer to the past post on how to do that. Now to use the Web Service. To call the Web Service (as it is just imported not connected to the BPEL process yet), you must add an Invoke action to your BPEL Process. To do this, select Invoke action from the BPEL Constructs zone on the Component Palette and drop it on the edit nodes between the receiveInput and replyOutput nodes This will create an empty Invoke action. You will notice some connectors on the Invoke node. Grab the node closest to your Web Service and drag it to connect the Invoke to your Web Service. This instructs BPEL to use the Invoke to call the Web Service. Once the Invoke action is connected to the Web Service an Edit Invoke edit dialog is displayed. At this point I suggest you name the Invoke node. It is important to name the nodes straightaway and name them appropriately for you to trace the logic. I used InvokeUser as the name in this example. To complete the node configuration you must create Variables to hold the input and output for the call. To do this clock on Automatically Create Input Variable on the Edit Invoke dialog. You will be presented with a default variable name. It uses the node name (that is why it is important to name the node before hitting this button) as a prefix. You can name the variable anything but I usually take the default. Repeat the same for the output variable. You now have a completed node for invoking the service. You have a very basic BPEL process which contains an input, invoke and output node. It is not complete yet though. You need to tell the BPEL process how to pass data from the input to the invoke step and how to take the output from the service call and pass it back to the service. You need to now add an Assign node to assign the input to the Web Service. To do this select Assign activity from BPEL Constructs zone in the Component Palette. Drag and drop the Assign activity between the receiveInput and InvokeUser nodes as you want to pass data between these two nodes. You have now added a new Assign node to your BPEL process Double clicking the node allows you to specify the name of the node. I use AssignUser to describe that I am assigning user data. On the Copy Rules tab you can specify the mapping between the input variable InputVariable/payload/process/input string and the input variable for the Web Service call. We are passing data from the input to BPEL to the relevant input variable on the Web Service. This is simply drag and drop between the two data structures. In the example, I am using the input to pass to the user element in my Web Service as the user is the primary key for the object. The fields become linked (which means data from source will be copied to target). Almost there. You now need to process the output from the Web Service call to the outputVariable of the client call. I have decided to pass back one piece of data, the name associated with the user by concatenating the firstName and lastName elements from the Web Service call. To do this I will use a Transform as it is not just a matter of an Assign action. It is a concatenation operation. This also illustrates how you can use BPEL functionality to transform data from a Web Service call. As with the other components you drag and drop the Transform component to the appropriate place in the BPEL process. 
In this case we want to transform the output from the Web Service call so we want it after the InvokeUser action and the replyOutput action. The Transform component is actually part of the Oracle Extensions to the BPEL specification. Double clicking the Transform node will allow you to name the node.  In this example I used TransformName. To complete the transform I need to tell the product the source of the transformation and the target of the transform. In the example this is the InvokeUser output variable. I also named the mapper file to TransformName. By clicking the + or pencil icon next to the map I can create the map. The mapping screen is shows the source and target schemas for me to map across. As with the assign I can map the relevant elements. In my example, I first map the firstName from the Web Service to the result element. As I want to concatenate the names, I drop the concat function on the call line. I now attach the last name to the function to indicate the concatenation of the field. By default the names will be concatenated with no space. To make the name legible I add a space between the field by clicking the function and adding a space in the call. I now have a completed mapping. I can now save the whole project as my BPEL process is now complete. As you can see the following happens: We accept input from the client (the userid for the call) in the receiveInput step. We assign that value to the input parameters for the Web Service call in the AssignUser step. We invoke the Web Service call to retrieve the data from the product in the InvokeUser step. We take the output from the InvokeUser step and concatenate the names in the TransformName step. We pass back the data in the replyOutput step. At this point we can deploy the BPEL process to the SOA Suite server. I will not cover this aspect as it really all SOA Suite specific (it is all done via Oracle JDeveloper). Now we need to test the service in SOA Suite. We will use the Fusion Middleware Control test facility. I will assume that credentials have also been setup as per our previous post (else you will get a 401 error). You navigate to the deployed BPEL process within Fusion Middleware Control and select the Test Service option. Specify some test data on the payload at the bottom of the Test Service screen. In my case I am returning my own userid information. On the response tab you will see the result. It works. You can verify the steps using the Audit trace facility on individual calls. As you can see this is a basic BPEL but you get the idea of importing the Web Service is pretty straightforward. You can create more sophisticated BPEL processes using the full facilities in Oracle SOA Suite. I just showed you the basic principals.
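
If you prefer to exercise the deployed process from a script instead of the Test Service page, a plain SOAP POST is enough. The Python sketch below is only an illustration: the endpoint URL, target namespace and SOAPAction follow typical SOA Suite 11g defaults for a composite named simpleBPEL, so take the real values from the generated WSDL (linked from the composite's home page in Fusion Middleware Control) before trying it.

# Minimal sketch of invoking the deployed simpleBPEL composite over SOAP.
# The endpoint, namespace and SOAPAction below are assumptions based on
# typical SOA Suite 11g defaults - read the real values from the WSDL that
# Fusion Middleware Control shows for the deployed composite.
import requests

ENDPOINT = "http://soa-server:8001/soa-infra/services/default/simpleBPEL/simplebpel_client_ep"
NAMESPACE = "http://xmlns.oracle.com/simpleBPEL"   # check the generated WSDL

def call_simple_bpel(userid):
    """POST a SOAP request carrying the userid in the default 'input' element."""
    envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <process xmlns="%s">
      <input>%s</input>
    </process>
  </soap:Body>
</soap:Envelope>""" % (NAMESPACE, userid)
    # The AS-User Web Service credentials are configured on the server (as in
    # the earlier post); this inbound call only needs extra authentication if
    # the composite endpoint itself has a security policy attached.
    response = requests.post(
        ENDPOINT,
        data=envelope,
        headers={"Content-Type": "text/xml; charset=utf-8", "SOAPAction": "process"})
    response.raise_for_status()
    return response.text  # the result element holds the concatenated name

if __name__ == "__main__":
    print(call_simple_bpel("SOMEUSER"))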

    Read the article

  • Announcing Oracle Enterprise Content Management Suite 11g

    - by [email protected]
    Today Oracle announced Oracle Enterprise Content Management Suite 11g. This is a major release for us, and reinforces our three key themes at Oracle: Complete New in this release - Oracle ECM Suite 11g is built on a single, unified repository. Every piece of content - documents, HTML pages, digital assets, scanned images - is stored and accessbile directly from the repository, whether you are working on websites, creating brand logos, processing accounts payable invoices, or running records and retention functions. It makes complete, end-to-end management of content possible, from the point it enters the organization, through its entire lifecycle. Also new in this release, the installation, access, monitoring and administration of Oracle ECM Suite 11g is centralized. As a complete system, organizations can lower the costs of training and usage by having a centralized source of information that is easily administered. As part of this new unified repository release, Oracle has released a benchmarking white paper that shows the extreme performance and scalability of Oracle ECM Suite. When tested on a two node UCM Server running on Sun Oracle DB Machine Half Rack Hardware with an Exadata storage server, Oracle ECM Suite 11g is able to ingest over 178 million documents per day. Open Oracle ECM Suite 11g is built on a service-oriented architecture. All functions are available through standards-based services calls in Web Services or Java. In this release Oracle unveils Open Web Content Management. Open Web Content Management is a revolutionary approach to web content management that decouples the content management process from the process of creating web applications. One piece of this approach is our one-click web content management. With one click, a web application builder can drag content services into their application, enabling their users to also edit content with just one click. Open Web Content Management is also open because it enables Web developers to add Web content management to new and existing JavaServer Pages (JSP), JavaServer Faces (JSF) and Oracle Application Development Framework (ADF) Faces applications Open content distribution - Oracle ECM Suite 11g offers flexible deployment options with a built-in smart cache so organizations can deliver Web sites or Web applications without requiring Oracle ECM Suite as part of the delivery system Integrated Oracle ECM Suite 11g also offers a series of next generation desktop integrations, providing integrations such as: New MS Office integration with menus to access managed content, insert managed links, and compare managed documents using standard MS Office reviewing tools Automatic identity tagging of documents on download - to help users understand which versions they are viewing and prevent duplicate content items in the content repository. New "smart productivity folders" to show a users workflow inbox, saved searches and checked out content directly from Windows Explorer Drag and drop metadata pop-ups Check in and check out for all file formats with any standard WebDAV server As part of Oracle's Enterprise Application Documents initiative, Oracle Content Management 11g also provides certified application integrations with solution templates You can read the press release here. You can see more assets at the launch center here. You can sign up for the announcement webinar and hear more about the new features here. You can read the benchmarking study here.

    Read the article

  • Installing vim7.2 on Solaris Sparc 10 as non-root

    - by Tobbe
    I'm trying to install vim to $HOME/bin by compiling the sources. ./configure --prefix=$home/bin seems to work, but when running make I get: > make Starting make in the src directory. If there are problems, cd to the src directory and run make there cd src && make first gcc -c -I. -Iproto -DHAVE_CONFIG_H -DFEAT_GUI_GTK -I/usr/include/gtk-2.0 -I/usr/lib/gtk-2.0/include -I/usr/include/atk-1.0 -I/usr/include/pango-1.0 -I/usr/openwin/include -I/usr/sfw/include -I/usr/sfw/include/freetype2 -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -g -O2 -I/usr/openwin/include -o objects/buffer.o buffer.c In file included from buffer.c:28: vim.h:41: error: syntax error before ':' token In file included from os_unix.h:29, from vim.h:245, from buffer.c:28: /usr/include/sys/stat.h:251: error: syntax error before "blksize_t" /usr/include/sys/stat.h:255: error: syntax error before '}' token /usr/include/sys/stat.h:309: error: syntax error before "blksize_t" /usr/include/sys/stat.h:310: error: conflicting types for 'st_blocks' /usr/include/sys/stat.h:252: error: previous declaration of 'st_blocks' was here /usr/include/sys/stat.h:313: error: syntax error before '}' token In file included from /opt/local/bin/../lib/gcc/sparc-sun-solaris2.6/3.4.6/include/sys/signal.h:132, from /usr/include/signal.h:26, from os_unix.h:163, from vim.h:245, from buffer.c:28: /usr/include/sys/siginfo.h:259: error: syntax error before "ctid_t" /usr/include/sys/siginfo.h:292: error: syntax error before '}' token /usr/include/sys/siginfo.h:294: error: syntax error before '}' token /usr/include/sys/siginfo.h:390: error: syntax error before "ctid_t" /usr/include/sys/siginfo.h:398: error: conflicting types for '__fault' /usr/include/sys/siginfo.h:267: error: previous declaration of '__fault' was here /usr/include/sys/siginfo.h:404: error: conflicting types for '__file' /usr/include/sys/siginfo.h:273: error: previous declaration of '__file' was here /usr/include/sys/siginfo.h:420: error: conflicting types for '__prof' /usr/include/sys/siginfo.h:287: error: previous declaration of '__prof' was here /usr/include/sys/siginfo.h:424: error: conflicting types for '__rctl' /usr/include/sys/siginfo.h:291: error: previous declaration of '__rctl' was here /usr/include/sys/siginfo.h:426: error: syntax error before '}' token /usr/include/sys/siginfo.h:428: error: syntax error before '}' token /usr/include/sys/siginfo.h:432: error: syntax error before "k_siginfo_t" /usr/include/sys/siginfo.h:437: error: syntax error before '}' token In file included from /usr/include/signal.h:26, from os_unix.h:163, from vim.h:245, from buffer.c:28: /opt/local/bin/../lib/gcc/sparc-sun-solaris2.6/3.4.6/include/sys/signal.h:173: error: syntax error before "siginfo_t" In file included from os_unix.h:163, from vim.h:245, from buffer.c:28: /usr/include/signal.h:111: error: syntax error before "siginfo_t" /usr/include/signal.h:113: error: syntax error before "siginfo_t" buffer.c: In function `buflist_new': buffer.c:1502: error: storage size of 'st' isn't known buffer.c: In function `buflist_findname': buffer.c:1989: error: storage size of 'st' isn't known buffer.c: In function `setfname': buffer.c:2578: error: storage size of 'st' isn't known buffer.c: In function `otherfile_buf': buffer.c:2836: error: storage size of 'st' isn't known buffer.c: In function `buf_setino': buffer.c:2874: error: storage size of 'st' isn't known buffer.c: In function `buf_same_ino': buffer.c:2894: error: dereferencing pointer to incomplete type buffer.c:2895: error: dereferencing pointer to 
incomplete type *** Error code 1 make: Fatal error: Command failed for target `objects/buffer.o' Current working directory /home/xluntor/vim72/src *** Error code 1 make: Fatal error: Command failed for target `first'

How do I fix the make errors? Or is there another way to install vim as non-root? Thanks in advance.

EDIT: I took a look at the google groups link Sarah posted. The "Compiling Vim" page linked from there was for Linux, so the commands don't even work on Solaris. But it did hint at logging the output of ./configure to a file, so I did that. Here it is: ./configure output removed. New version further down. Does anyone spot anything critical missing?

EDIT 2: So I downloaded the vim package from sunfreeware. I couldn't just install it, since I don't have root privileges, but I was able to extract the package file. This was the file structure in it:

`-- SMCvim
    `-- reloc
        |-- bin
        |-- doc
        |   `-- vim
        `-- share
            |-- man
            |   `-- man1
            `-- vim
                `-- vim72
                    |-- autoload
                    |   `-- xml
                    |-- colors
                    |-- compiler
                    |-- doc
                    |-- ftplugin
                    |-- indent
                    |-- keymap
                    |-- lang
                    |-- macros
                    |   |-- hanoi
                    |   |-- life
                    |   |-- maze
                    |   `-- urm
                    |-- plugin
                    |-- print
                    |-- spell
                    |-- syntax
                    |-- tools
                    `-- tutor

I moved the three files (vim, vimtutor, xxd) in SMCvim/reloc/bin to $HOME/bin, so now I can finally run $HOME/bin/vim! But where do I put the "share" directory and its content?

EDIT 3: It might also be worth noting that there already exists an install of vim on the system, but it is broken. When I try to run it I get:

ld.so.1: vim: fatal: libgtk-1.2.so.0: open failed: No such file or directory

"which vim" outputs /opt/local/bin/vim

EDIT 4: Trying to compile this on Solaris 10.

uname -a
SunOS ws005-22 5.10 Generic_141414-10 sun4u sparc SUNW,SPARC-Enterprise

New ./configure output:

./configure --prefix=$home/bin ac_cv_sizeof_int=8 --enable-rubyinterp configure: loading cache auto/config.cache checking whether make sets $(MAKE)... yes checking for gcc... gcc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... unsupported checking how to run the C preprocessor... gcc -E checking for grep that handles long lines and -e... /usr/sfw/bin/ggrep checking for egrep... /usr/sfw/bin/ggrep -E checking for library containing strerror... none required checking for gawk... gawk checking for strip... strip checking for ANSI C header files... yes checking for sys/wait.h that is POSIX.1 compatible... no configure: checking for buggy tools... checking for BeOS... no checking for QNX... no checking for Darwin (Mac OS X)... no checking --with-local-dir argument... Defaulting to /usr/local checking --with-vim-name argument... Defaulting to vim checking --with-ex-name argument... Defaulting to ex checking --with-view-name argument... Defaulting to view checking --with-global-runtime argument... no checking --with-modified-by argument... no checking if character set is EBCDIC... no checking --disable-selinux argument... no checking for is_selinux_enabled in -lselinux... no checking --with-features argument... Defaulting to normal checking --with-compiledby argument... no checking --disable-xsmp argument... no checking --disable-xsmp-interact argument... no checking --enable-mzschemeinterp argument... no checking --enable-perlinterp argument...
no checking --enable-pythoninterp argument... no checking --enable-tclinterp argument... no checking --enable-rubyinterp argument... yes checking for ruby... /opt/sfw/bin/ruby checking Ruby version... OK checking Ruby header files... /opt/sfw/lib/ruby/1.6/sparc-solaris2.10 checking --enable-cscope argument... no checking --enable-workshop argument... no checking --disable-netbeans argument... no checking for socket in -lsocket... yes checking for gethostbyname in -lnsl... yes checking whether compiling netbeans integration is possible... no checking --enable-sniff argument... no checking --enable-multibyte argument... no checking --enable-hangulinput argument... no checking --enable-xim argument... defaulting to auto checking --enable-fontset argument... no checking for xmkmf... /usr/openwin/bin/xmkmf checking for X... libraries /usr/openwin/lib, headers /usr/openwin/include checking whether -R must be followed by a space... no checking for gethostbyname... yes checking for connect... yes checking for remove... yes checking for shmat... yes checking for IceConnectionNumber in -lICE... yes checking if X11 header files can be found... yes checking for _XdmcpAuthDoIt in -lXdmcp... no checking for IceOpenConnection in -lICE... yes checking for XpmCreatePixmapFromData in -lXpm... yes checking if X11 header files implicitly declare return values... no checking --enable-gui argument... yes/auto - automatic GUI support checking whether or not to look for GTK... yes checking whether or not to look for GTK+ 2... yes checking whether or not to look for GNOME... no checking whether or not to look for Motif... yes checking whether or not to look for Athena... yes checking whether or not to look for neXtaw... yes checking whether or not to look for Carbon... yes checking --with-gtk-prefix argument... no checking --with-gtk-exec-prefix argument... no checking --disable-gtktest argument... gtk test enabled checking for gtk-config... /opt/local/bin/gtk-config checking for pkg-config... /usr/bin/pkg-config checking for GTK - version = 2.2.0... yes; found version 2.4.9 checking X11/SM/SMlib.h usability... yes checking X11/SM/SMlib.h presence... yes checking for X11/SM/SMlib.h... yes checking X11/xpm.h usability... yes checking X11/xpm.h presence... yes checking for X11/xpm.h... yes checking X11/Sunkeysym.h usability... yes checking X11/Sunkeysym.h presence... yes checking for X11/Sunkeysym.h... yes checking for XIMText in X11/Xlib.h... yes X GUI selected; xim has been enabled checking whether toupper is broken... no checking whether __DATE__ and __TIME__ work... yes checking elf.h usability... yes checking elf.h presence... yes checking for elf.h... yes checking for main in -lelf... yes checking for dirent.h that defines DIR... yes checking for library containing opendir... none required checking for sys/wait.h that defines union wait... no checking stdarg.h usability... yes checking stdarg.h presence... yes checking for stdarg.h... yes checking stdlib.h usability... yes checking stdlib.h presence... yes checking for stdlib.h... yes checking string.h usability... yes checking string.h presence... yes checking for string.h... yes checking sys/select.h usability... yes checking sys/select.h presence... yes checking for sys/select.h... yes checking sys/utsname.h usability... yes checking sys/utsname.h presence... yes checking for sys/utsname.h... yes checking termcap.h usability... yes checking termcap.h presence... yes checking for termcap.h... yes checking fcntl.h usability... yes checking fcntl.h presence... 
yes checking for fcntl.h... yes checking sgtty.h usability... yes checking sgtty.h presence... yes checking for sgtty.h... yes checking sys/ioctl.h usability... yes checking sys/ioctl.h presence... yes checking for sys/ioctl.h... yes checking sys/time.h usability... yes checking sys/time.h presence... yes checking for sys/time.h... yes checking sys/types.h usability... yes checking sys/types.h presence... yes checking for sys/types.h... yes checking termio.h usability... yes checking termio.h presence... yes checking for termio.h... yes checking iconv.h usability... yes checking iconv.h presence... yes checking for iconv.h... yes checking langinfo.h usability... yes checking langinfo.h presence... yes checking for langinfo.h... yes checking math.h usability... yes checking math.h presence... yes checking for math.h... yes checking unistd.h usability... yes checking unistd.h presence... yes checking for unistd.h... yes checking stropts.h usability... no checking stropts.h presence... yes configure: WARNING: stropts.h: present but cannot be compiled configure: WARNING: stropts.h: check for missing prerequisite headers? configure: WARNING: stropts.h: see the Autoconf documentation configure: WARNING: stropts.h: section "Present But Cannot Be Compiled" configure: WARNING: stropts.h: proceeding with the preprocessor's result configure: WARNING: stropts.h: in the future, the compiler will take precedence checking for stropts.h... yes checking errno.h usability... yes checking errno.h presence... yes checking for errno.h... yes checking sys/resource.h usability... yes checking sys/resource.h presence... yes checking for sys/resource.h... yes checking sys/systeminfo.h usability... yes checking sys/systeminfo.h presence... yes checking for sys/systeminfo.h... yes checking locale.h usability... yes checking locale.h presence... yes checking for locale.h... yes checking sys/stream.h usability... no checking sys/stream.h presence... yes configure: WARNING: sys/stream.h: present but cannot be compiled configure: WARNING: sys/stream.h: check for missing prerequisite headers? configure: WARNING: sys/stream.h: see the Autoconf documentation configure: WARNING: sys/stream.h: section "Present But Cannot Be Compiled" configure: WARNING: sys/stream.h: proceeding with the preprocessor's result configure: WARNING: sys/stream.h: in the future, the compiler will take precedence checking for sys/stream.h... yes checking termios.h usability... yes checking termios.h presence... yes checking for termios.h... yes checking libc.h usability... no checking libc.h presence... no checking for libc.h... no checking sys/statfs.h usability... yes checking sys/statfs.h presence... yes checking for sys/statfs.h... yes checking poll.h usability... yes checking poll.h presence... yes checking for poll.h... yes checking sys/poll.h usability... yes checking sys/poll.h presence... yes checking for sys/poll.h... yes checking pwd.h usability... yes checking pwd.h presence... yes checking for pwd.h... yes checking utime.h usability... yes checking utime.h presence... yes checking for utime.h... yes checking sys/param.h usability... yes checking sys/param.h presence... yes checking for sys/param.h... yes checking libintl.h usability... yes checking libintl.h presence... yes checking for libintl.h... yes checking libgen.h usability... yes checking libgen.h presence... yes checking for libgen.h... yes checking util/debug.h usability... no checking util/debug.h presence... no checking for util/debug.h... 
no checking util/msg18n.h usability... no checking util/msg18n.h presence... no checking for util/msg18n.h... no checking frame.h usability... no checking frame.h presence... no checking for frame.h... no checking sys/acl.h usability... yes checking sys/acl.h presence... yes checking for sys/acl.h... yes checking sys/access.h usability... no checking sys/access.h presence... no checking for sys/access.h... no checking sys/sysctl.h usability... no checking sys/sysctl.h presence... no checking for sys/sysctl.h... no checking sys/sysinfo.h usability... yes checking sys/sysinfo.h presence... yes checking for sys/sysinfo.h... yes checking wchar.h usability... yes checking wchar.h presence... yes checking for wchar.h... yes checking wctype.h usability... yes checking wctype.h presence... yes checking for wctype.h... yes checking for sys/ptem.h... no checking for pthread_np.h... no checking strings.h usability... yes checking strings.h presence... yes checking for strings.h... yes checking if strings.h can be included after string.h... yes checking whether gcc needs -traditional... no checking for an ANSI C-conforming const... yes checking for mode_t... yes checking for off_t... yes checking for pid_t... yes checking for size_t... yes checking for uid_t in sys/types.h... yes checking whether time.h and sys/time.h may both be included... yes checking for ino_t... yes checking for dev_t... yes checking for rlim_t... yes checking for stack_t... yes checking whether stack_t has an ss_base field... no checking --with-tlib argument... empty: automatic terminal library selection checking for tgetent in -lncurses... yes checking whether we talk terminfo... yes checking what tgetent() returns for an unknown terminal... zero checking whether termcap.h contains ospeed... yes checking whether termcap.h contains UP, BC and PC... yes checking whether tputs() uses outfuntype... no checking whether sys/select.h and sys/time.h may both be included... yes checking for /dev/ptc... no checking for SVR4 ptys... yes checking for ptyranges... don't know checking default tty permissions/group... can't determine - assume ptys are world accessable world checking return type of signal handlers... void checking for struct sigcontext... no checking getcwd implementation is broken... no checking for bcmp... yes checking for fchdir... yes checking for fchown... yes checking for fseeko... yes checking for fsync... yes checking for ftello... yes checking for getcwd... yes checking for getpseudotty... no checking for getpwnam... yes checking for getpwuid... yes checking for getrlimit... yes checking for gettimeofday... yes checking for getwd... yes checking for lstat... yes checking for memcmp... yes checking for memset... yes checking for nanosleep... no checking for opendir... yes checking for putenv... yes checking for qsort... yes checking for readlink... yes checking for select... yes checking for setenv... yes checking for setpgid... yes checking for setsid... yes checking for sigaltstack... yes checking for sigstack... yes checking for sigset... yes checking for sigsetjmp... yes checking for sigaction... yes checking for sigvec... no checking for strcasecmp... yes checking for strerror... yes checking for strftime... yes checking for stricmp... no checking for strncasecmp... yes checking for strnicmp... no checking for strpbrk... yes checking for strtol... yes checking for tgetent... yes checking for towlower... yes checking for towupper... yes checking for iswupper... yes checking for usleep... yes checking for utime... 
yes checking for utimes... yes checking for st_blksize... no checking whether stat() ignores a trailing slash... no checking for iconv_open()... yes; with -liconv checking for nl_langinfo(CODESET)... yes checking for strtod in -lm... yes checking for strtod() and other floating point functions... yes checking --disable-acl argument... no checking for acl_get_file in -lposix1e... no checking for acl_get_file in -lacl... no checking for POSIX ACL support... no checking for Solaris ACL support... yes checking for AIX ACL support... no checking --disable-gpm argument... no checking for gpm... no checking --disable-sysmouse argument... no checking for sysmouse... no checking for rename... yes checking for sysctl... not usable checking for sysinfo... not usable checking for sysinfo.mem_unit... no checking for sysconf... yes checking size of int... (cached) 8 checking whether memmove handles overlaps... yes checking for _xpg4_setrunelocale in -lxpg4... no checking how to create tags... ctags -t checking how to run man with a section nr... man -s checking --disable-nls argument... no checking for msgfmt... msgfmt checking for NLS... no "po/Makefile" - disabled checking dlfcn.h usability... yes checking dlfcn.h presence... yes checking for dlfcn.h... yes checking for dlopen()... yes checking for dlsym()... yes checking setjmp.h usability... yes checking setjmp.h presence... yes checking for setjmp.h... yes checking for GCC 3 or later... yes configure: updating cache auto/config.cache configure: creating auto/config.status config.status: creating auto/config.mk config.status: creating auto/config.h Make: make Starting make in the src directory. If there are problems, cd to the src directory and run make there cd src && make first mkdir objects CC="gcc -Iproto -DHAVE_CONFIG_H -DFEAT_GUI_GTK -I/usr/include/gtk-2.0 -I/usr/lib/gtk-2.0/include -I/usr/include/atk-1.0 -I/usr/include/pango-1.0 -I/usr/openwin/include -I/usr/sfw/include -I/usr/sfw/include/freetype2 -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -I/usr/openwin/include -I/opt/sfw/lib/ruby/1.6/sparc-solaris2.10 " srcdir=. sh ./osdef.sh gcc -c -I. 
-Iproto -DHAVE_CONFIG_H -DFEAT_GUI_GTK -I/usr/include/gtk-2.0 -I/usr/lib/gtk-2.0/include -I/usr/include/atk-1.0 -I/usr/include/pango-1.0 -I/usr/openwin/include -I/usr/sfw/include -I/usr/sfw/include/freetype2 -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -g -O2 -I/usr/openwin/include -I/opt/sfw/lib/ruby/1.6/sparc-solaris2.10 -o objects/buffer.o buffer.c In file included from os_unix.h:29, from vim.h:245, from buffer.c:28: /usr/include/sys/stat.h:251: error: syntax error before "blksize_t" /usr/include/sys/stat.h:255: error: syntax error before '}' token /usr/include/sys/stat.h:309: error: syntax error before "blksize_t" /usr/include/sys/stat.h:310: error: conflicting types for 'st_blocks' /usr/include/sys/stat.h:252: error: previous declaration of 'st_blocks' was here /usr/include/sys/stat.h:313: error: syntax error before '}' token In file included from /opt/local/bin/../lib/gcc/sparc-sun-solaris2.6/3.4.6/include/sys/signal.h:132, from /usr/include/signal.h:26, from os_unix.h:163, from vim.h:245, from buffer.c:28: /usr/include/sys/siginfo.h:259: error: syntax error before "ctid_t" /usr/include/sys/siginfo.h:292: error: syntax error before '}' token /usr/include/sys/siginfo.h:294: error: syntax error before '}' token /usr/include/sys/siginfo.h:390: error: syntax error before "ctid_t" /usr/include/sys/siginfo.h:398: error: conflicting types for '__fault' /usr/include/sys/siginfo.h:267: error: previous declaration of '__fault' was here /usr/include/sys/siginfo.h:404: error: conflicting types for '__file' /usr/include/sys/siginfo.h:273: error: previous declaration of '__file' was here /usr/include/sys/siginfo.h:420: error: conflicting types for '__prof' /usr/include/sys/siginfo.h:287: error: previous declaration of '__prof' was here /usr/include/sys/siginfo.h:424: error: conflicting types for '__rctl' /usr/include/sys/siginfo.h:291: error: previous declaration of '__rctl' was here /usr/include/sys/siginfo.h:426: error: syntax error before '}' token /usr/include/sys/siginfo.h:428: error: syntax error before '}' token /usr/include/sys/siginfo.h:432: error: syntax error before "k_siginfo_t" /usr/include/sys/siginfo.h:437: error: syntax error before '}' token In file included from /usr/include/signal.h:26, from os_unix.h:163, from vim.h:245, from buffer.c:28: /opt/local/bin/../lib/gcc/sparc-sun-solaris2.6/3.4.6/include/sys/signal.h:173: error: syntax error before "siginfo_t" In file included from os_unix.h:163, from vim.h:245, from buffer.c:28: /usr/include/signal.h:111: error: syntax error before "siginfo_t" /usr/include/signal.h:113: error: syntax error before "siginfo_t" buffer.c: In function `buflist_new': buffer.c:1502: error: storage size of 'st' isn't known buffer.c: In function `buflist_findname': buffer.c:1989: error: storage size of 'st' isn't known buffer.c: In function `setfname': buffer.c:2578: error: storage size of 'st' isn't known buffer.c: In function `otherfile_buf': buffer.c:2836: error: storage size of 'st' isn't known buffer.c: In function `buf_setino': buffer.c:2874: error: storage size of 'st' isn't known buffer.c: In function `buf_same_ino': buffer.c:2894: error: dereferencing pointer to incomplete type buffer.c:2895: error: dereferencing pointer to incomplete type *** Error code 1 make: Fatal error: Command failed for target `objects/buffer.o' Current working directory /home/xluntor/vim72/src *** Error code 1 make: Fatal error: Command failed for target `first'

    Read the article

  • SQL SERVER – Understanding XML – Contest Win Joes 2 Pros Combo (USD 198) – Day 5 of 5

    - by pinaldave
In August 2011 we ran a contest where we gave away one book every day for an entire month. The contest was extremely successful: lots of people participated and lots of books were given away. I have received lots of questions asking if we are doing something similar this month. Absolutely; instead of running a month-long contest, we are doing something more interesting. We are giving away a gift worth USD 198 every day this week. We are giving away the Joes 2 Pros 5 Volumes (BOOK) SQL 2008 Development Certification Training Kit every day: one copy in India and one in the USA, two giveaways in total (worth USD 198). All the gifts are sponsored by Koenig Training Solution and Joes 2 Pros. The books are available here: Amazon | Flipkart | Indiaplaza

How to Win: Read the question, read the hints, then answer the quiz in the contact form in the following format: Question - Answer - Name of the country (the contest is open to USA and India residents only). 2 winners will be randomly selected and announced on August 20th.

Question of the Day: Is the following XML a well-formed XML document?

<?xml version="1.0"?> <address> <firstname>Pinal</firstname> <lastname>Dave</lastname> <title>Founder</title> <company>SQLAuthority.com</company> </address>

a) Yes b) No c) I do not know

Query Hints: BIG HINT POST A common observation by people seeing an XML file for the first time is that it looks like just a bunch of data inside a text file. XML files are text-based documents, which makes them easy to read. All of the data is literally spelled out in the document and relies on just a few characters (<, >, =) to convey relationships and structure of the data. XML files can be used by any commonly available text editor, like Notepad. Much like a book’s Table of Contents, your first glance at well-formed XML will tell you the subject matter of the data and its general structure. Hints appearing within the data help you to quickly identify the main theme (similar to a book’s subject), its headers (similar to chapter titles or sections of a book), data elements (similar to a book’s characters or chief topics), and so forth. We’ll learn to recognize and use the structural “hints,” which are XML’s markup components (e.g., XML tags, root elements). The XML Raw and Auto modes are great for displaying data as all attributes or all elements – but not both at once. If you want your XML stream to have some of its data shown in attributes and some shown as elements, then you can use the XML Path mode. If you are using an XML Path stream, then by default all values will be shown as elements. However, it is possible to pick one or more elements to be shown with an attribute(s) as well.

Additional Hints: I have previously discussed various concepts from SQL Server Joes 2 Pros Volume 5.
SQL Joes 2 Pros Development Series – OpenXML Options
SQL Joes 2 Pros Development Series – Preparing XML in Memory
SQL Joes 2 Pros Development Series – Shredding XML
SQL Joes 2 Pros Development Series – Using Root With Auto XML Mode
SQL Joes 2 Pros Development Series – What is XML?
SQL Joes 2 Pros Development Series – What is XML? – 2

Next Step: Answer the quiz in the contact form in the following format: Question - Answer - Name of the country (the contest is open for USA and India).

Bonus Winner: Leave a comment with your favorite article from the “additional hints” section and you may be eligible for a surprise gift. There is no country restriction for this Bonus Contest.
Do mention why you liked any particular blog post, and I will announce the winner of the same along with the main contest. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
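For readers who want to check well-formedness mechanically rather than by eye, a minimal Python sketch follows; it is illustrative only, is not part of the original contest post, and assumes the sample document uses plain ASCII quotes in its XML declaration.

# Minimal well-formedness check using only the Python standard library.
# The sample below is a re-typed copy of the contest document, assuming
# plain ASCII quotes; it is not an answer key.
import xml.etree.ElementTree as ET

sample = """<?xml version="1.0"?>
<address>
<firstname>Pinal</firstname>
<lastname>Dave</lastname>
<title>Founder</title>
<company>SQLAuthority.com</company>
</address>"""

try:
    ET.fromstring(sample)  # raises ParseError if the document is not well formed
    print("well formed")
except ET.ParseError as err:
    print("not well formed:", err)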

    Read the article

  • How to install Windows 8 to dual boot with Windows 7/XP?

    - by Gopinath
Microsoft released the Windows 8 beta (customer preview) a few days ago, and yesterday I had a chance to install it on one of my home computers. My home PC is running Windows 7 and I would like to install Windows 8 side by side so that I can dual boot. The installation process was pretty simple, and within 40 minutes my PC was up and running with the beautiful Windows 8 OS alongside Windows 7. In this post I want to share my experience and provide the information you need to install Windows 8.

1. Identify a drive with at least 20 GB of space – Identify one of the drives on your hard disk that can be used to install Windows 8. Delete all the files or, preferably, quick format it, and make sure that it has at least 20 GB of free space. Rename the drive to Windows 8 so that it is easy to identify the destination drive during the installation process.

2. Download the Windows 8 installer ISO – Go to Microsoft’s website and download the Windows 8 ISO file, which is approximately 2.5 GB (32-bit English version).

3. Create a Windows 8 bootable USB/DVD – It’s advisable to launch the Windows 8 installer from a bootable USB or DVD to enable dual boot, instead of unzipping the ISO file and launching the setup from the Windows 7 OS. Also consider creating a bootable USB instead of a bootable DVD to save a disc. To create the bootable USB/DVD, follow these steps: download and install the Windows 7 DVD / USB tool available at microsoftstore.com, then launch the utility and follow the onscreen instructions, where you will be asked to choose the ISO file (point to the file downloaded in step 2) and choose a USB drive or DVD as the destination. The onscreen instructions are very simple and you should be able to complete them in about 20 minutes. So now you have the Windows 8 installation setup on your USB drive or DVD.

4. Change BIOS settings to boot from USB/DVD – Restart your PC and open the BIOS configuration settings by pressing the F2, F12 or DELETE key (the key depends on your computer manufacturer). Go to the boot sequence options and make sure that USB/DVD is ahead of the hard disk in the boot sequence. Save the settings and restart the PC.

5. Install Windows 8 – After the restart you should land straight in the Windows 8 installation screen. Follow the onscreen instructions and install Windows 8 on the drive identified in step 1. When prompted for the product serial key, enter NF32V-Q9P3W-7DR7Y-JGWRW-JFCK8. The installer will restart a couple of times during the installation process. On the first restart, make sure that you remove the USB/DVD. The Windows 8 installation process is pretty simple and very quick. The complete process of creating the bootable USB and installing should take 30–40 minutes.

    Read the article

  • Is Master Data Management CRM's Secret Sauce?

    - by divya.malik
    This was the title of a recent blog entry by our colleagues in EMEA. Having a good master data management system enables organizations to get a unified, accurate and complete understanding of their customers. Gartner Group's John Radcliffe explains why MDM is destined to be at the heart of future CRM and social CRM projects. Experts are predicting big things for master data management (MDM) in the immediate future. While far from being a new kid on the block, its potential benefits at a time when organisations are drowning in data mean that it is in the right place at the right time. "MDM is not 'nice to have'," explains John Radcliffe, research vice president at Gartner. "If tackled in the right way it can provide near term business value that plays into an organisation's new focus on cost efficiencies, risk management and regulatory compliance, while supporting growth and future transformative strategies." The complete article can be found here.

    Read the article

  • Design Pattern for Social Game Mission Mechanics

    - by Furkan ÇALISKAN
When we want to design a mission sub-system like the ones in The Ville or Sims Social, what kind of design pattern or idea would fit best? There may be relations between missions (first do this, then that, etc.) or none at all. What do you think Sims Social, The Ville or other social games use for this? I'm looking for a best-practice method to construct a mission framework for the game. How do the well-known game studios do this for their large-scale social Facebook games? The flow is: give missions to the players, wait for them to complete the missions, and when they finish one, catch the mission-complete event – while keeping server-side work to a minimum for a large user database, to avoid high traffic and resource consumption. How should I design the database and the client-server communication to achieve this, considering that trade-off?
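There is no single canonical pattern here, but one common shape is a mission dependency graph with an observer-style completion event. The Python sketch below is purely illustrative; names such as Mission, MissionLog and on_complete are hypothetical and not taken from any particular game or engine. Missions unlock when their prerequisites are complete, progress is tracked locally, and completion fires a callback, which could be used to batch a single sync to the server instead of contacting it on every click.

# Illustrative sketch of a mission dependency graph with completion events.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Set


@dataclass
class Mission:
    mission_id: str
    prerequisites: Set[str] = field(default_factory=set)  # missions that must finish first
    goals: Dict[str, int] = field(default_factory=dict)   # e.g. {"wood": 3}
    progress: Dict[str, int] = field(default_factory=dict)

    def is_complete(self) -> bool:
        return all(self.progress.get(g, 0) >= n for g, n in self.goals.items())


class MissionLog:
    """Tracks one player's missions and fires callbacks on completion."""

    def __init__(self, missions: List[Mission]):
        self._missions = {m.mission_id: m for m in missions}
        self._completed: Set[str] = set()
        self._listeners: List[Callable[[Mission], None]] = []

    def on_complete(self, callback: Callable[[Mission], None]) -> None:
        self._listeners.append(callback)

    def available(self) -> List[Mission]:
        """Missions whose prerequisites are all done and which are not finished yet."""
        return [m for m in self._missions.values()
                if m.mission_id not in self._completed
                and m.prerequisites <= self._completed]

    def record(self, mission_id: str, goal: str, amount: int = 1) -> None:
        m = self._missions[mission_id]
        m.progress[goal] = m.progress.get(goal, 0) + amount
        if m.is_complete() and mission_id not in self._completed:
            self._completed.add(mission_id)
            for cb in self._listeners:  # e.g. queue one batched sync to the server
                cb(m)


# Usage: chop wood first, then the fence mission unlocks.
log = MissionLog([
    Mission("chop_wood", goals={"wood": 3}),
    Mission("build_fence", prerequisites={"chop_wood"}, goals={"fence": 1}),
])
log.on_complete(lambda m: print("completed:", m.mission_id))
for _ in range(3):
    log.record("chop_wood", "wood")
print([m.mission_id for m in log.available()])  # ['build_fence']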

    Read the article

  • The Work Order Printing Challenge

    - by celine.beck
    One of the biggest concerns we've heard from maintenance practitioners is the ability to print and batch print work order details along with its accompanying attachments. Indeed, maintenance workers traditionally rely on work order packets to complete their job. A standard work order packet can include a variety of information like equipment documentation, operating instructions, checklists, end-of-task feedback forms and the likes. Now, the problem is that most Asset Lifecycle Management applications do not provide a simple and efficient solution for process printing with document attachments. Work order forms can be easily printed but attachments are usually left out of the printing process. This sounds like a minor problem, but when you are processing high volume of work orders on a regular basis, this inconvenience can result in important inefficiencies. In order to print work order and its related attachments, maintenance personnel need to print the work order details and then go back to the work order and open each individual attachment using the proper authoring application to view and print each document. The printed output is collated into a work order packet. The AutoVue Document Print Service products that were just released in April 2010 aim at helping organizations address the work order printing challenge. Customers and partners can leverage the AutoVue Document Print Services to build a complete printing solution that complements their existing print server solution with AutoVue's document- and platform-agnostic document print services. The idea is to leverage AutoVue's printing services to invoke printing either programmatically or manually directly from within the work order management application, and efficiently process the printing of complete work order packets, including all types of attachments, from office files to more advanced engineering documents like 2D CAD drawings. Oracle partners like MIPRO Consulting, specialists in PeopleSoft implementations, have already expressed interest in the AutoVue Document Print Service products for their ability to offer print services to the PeopleSoft ALM suite, so that customers are able to print packages of documents for maintenance personnel. For more information on the subject, please consult MIPRO Consulting's article entitled Unsung Value: Primavera and AutoVue Integration into PeopleSoft posted on their blog. The blog post entitled Introducing AutoVue Document Print Service provides additional information on how the solution works. We would also love to hear what your thoughts are on the topic, so please do not hesitate to post your comments/feedback on our blog. Related Articles: Introducing AutoVue Document Print Service Print Any Document Type with AutoVue Document Print Services
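As a rough illustration of the packet-assembly loop described above, and only that, the Python sketch below batches a work order form together with its attachments into one print run. WorkOrder, print_packet and send_to_printer are hypothetical placeholders for illustration; they are not AutoVue, Primavera or PeopleSoft APIs.

# Rough sketch of the "work order packet" batch-printing idea.
from dataclasses import dataclass, field
from pathlib import Path
from typing import Callable, List


@dataclass
class WorkOrder:
    order_id: str
    form_pdf: Path                                           # the rendered work order form
    attachments: List[Path] = field(default_factory=list)    # drawings, checklists, ...


def print_packet(order: WorkOrder, send_to_printer: Callable[[Path], None]) -> None:
    """Send the form plus every attachment as one packet, in order."""
    for doc in [order.form_pdf, *order.attachments]:
        send_to_printer(doc)  # e.g. hand off to a print-server queue

# Example: just log what would be printed.
wo = WorkOrder("WO-1001", Path("wo-1001-form.pdf"),
               [Path("pump-drawing.dwg"), Path("checklist.xlsx")])
print_packet(wo, lambda p: print("queued for printing:", p))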

    Read the article

  • When done is not done

    - by Tony Davis
Most developers and DBAs will know what it’s like to be asked to do "a quick tidy up" on a project that, on closer inspection, turns out to be a barely working prototype: as the cynical programmer says, "when you’re told that a project is 90% done, prepare for the next 90%". It is easy to convince a layperson that an application is complete just by using test data, and sticking to the workflow that the development team has implemented and tested. The application is ‘done’ only in the sense that the anticipated paths through the software features, using known data, are fully supported. Reality often strikes only when testers reveal its strange and erratic behavior in response to behavior from the end user that strays from the "ideal". The problem is this: how do we measure progress, accurately and objectively? Development methods such as Scrum or Kanban, when implemented rigorously, can mitigate these problems for developers, to some extent. They force a team to progress one small, but complete feature at a time, to find out how long it really takes for this feature to be "done done"; in other words done to the point where its performance and scalability are understood, it is tested for all conceivable edge cases and doesn’t break…it is ready for prime time. At that point, the team has a much more realistic idea of how long it will take them to really complete all the remaining features, and so how far away the end is. However, it is when software crosses team boundaries that we feel the limitations of such techniques. No matter how well drilled the development team is, problems will still arise if they don’t deploy frequently to a production environment. If they work feverishly for months on end before finally tossing the finished piece of software over the fence for the DBA to deploy to the "real world", then once again will dawn the realization that "done done" is still out of reach, as the DBA uncovers poorly coded transactions, un-scalable queries, inefficient caching, and so on. By deploying regularly, end users will also have a much earlier opportunity to tell you how far what you implemented strayed from what they wanted. If you have a tale to tell, anonymized of course, of a "quick polish" project that turned out to be anything but, and what the major problems were, please do share it. Cheers, Tony.

    Read the article

< Previous Page | 95 96 97 98 99 100 101 102 103 104 105 106  | Next Page >