Search Results

Search found 17039 results on 682 pages for 'empty row'.


  • Formatting Columns in Excel created by af:exportCollectionActionListener

    - by Duncan Mills
    The af:exportCollectionActionListener behavior in ADF Faces Rich Client provides a very simple way of quickly dumping out the contents (or just the selected rows) of a table or treeTable to Excel. However, that simplicity comes at a price, as it is pretty much left up to Excel how to format the data. A common problem case is that of ID columns, which are often long numerics. You probably want to represent this data as a string; Excel, however, will probably have other ideas and render it as an exponent - not what you intended. In earlier releases of the framework you could sort of work around this by taking advantage of a bug which allowed you to surround the outputText in question with invisible outputText components that provided formatting hints to Excel. Something like this:

        <af:column headerText="Some wide label">
          <af:panelGroupLayout layout="horizontal">
            <af:outputText value="=TEXT(" visible="false"/>
            <af:outputText value="#{row.bigNumberValue}" rendered="true"/>
            <af:outputText value=",0)" visible="false"/>
          </af:panelGroupLayout>
        </af:column>

    However, this bug was fixed, so it can no longer be used as a trick; the export now ignores invisible columns. So, if you really need control over the formatting, there are several alternatives. First is the more powerful ADF Desktop Integration (ADFdi) package, which allows you to build fully transactional spreadsheets that "pull" the data and can update it. This gives you all the control you might need over formatting, but it does need specific Excel add-ins on the client to work. For more information about ADFdi have a look at this tutorial on OTN. Or you can of course look at BI Publisher or Apache POI if you're happy with output-only spreadsheets.
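    If you end up generating the spreadsheet yourself (the output-only route mentioned above), you can give Excel the same kind of text-format hint directly in the file you produce. The C# sketch below is only a generic illustration, not the ADF approach: the data is made up and a plain CSV export is assumed. The ="..." wrapper plays the same role as the =TEXT(...) trick, keeping a long numeric ID from being rendered as an exponent.

        using System.IO;

        class IdExportSketch
        {
            static void Main()
            {
                // Hypothetical ID values; in a real export these would come from your data source.
                var ids = new[] { "900000000000012345", "900000000000067890" };

                using (var writer = new StreamWriter("ids.csv"))
                {
                    writer.WriteLine("Id");
                    foreach (var id in ids)
                    {
                        // Wrapping the value in ="..." tells Excel to treat it as text,
                        // not as a number to be displayed in exponential notation.
                        writer.WriteLine("=\"{0}\"", id);
                    }
                }
            }
        }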

    Read the article

  • Move Firefox’s Tab Bar to the Top

    - by Asian Angel
    Would you prefer to have Firefox’s Tab Bar located at the top of the browser instead of its default location? See how easy it is to move the Tab Bar back and forth between the top and current positions “flip switch style” with the Tabs On Top extension. Note: The Tabs On Top extension supports the multi-row feature in Tab Mix Plus. Before You can see the “Tab Bar” in its default location here in our test browser…not bad, but what if you prefer having it located at the top of the browser? After As soon as you have installed the extension and restarted Firefox, the “Tab Bar” will have automatically moved to the top of the browser. You will most likely notice a slight decrease in tab height as well (which occurred during our tests). To move the “Tab Bar” back and forth between the top and default locations just select/deselect “Tab Bar on top” in the “Toolbars Context Menu”. You can quickly reduce the size of the upper UI after hiding some of the other toolbars, and go even further if you like using extensions that will hide the “Title Bar”. This is definitely a good UI-matching extension for anyone using a Chrome-based theme in Firefox. Conclusion If you are unhappy with the default location of Firefox’s “Tab Bar”, then this extension will certainly provide an alternative option for you. Links Download the Tabs On Top extension (Mozilla Add-ons)

    Read the article

  • Full-text indexing? You must read this

    - by Kyle Hatlestad
    For those of you who may have missed it, Peter Flies, Principal Technical Support Engineer for WebCenter Content, gave an excellent webcast on database searching and indexing in WebCenter Content.  It's available for replay along with a download of the slide deck.  Look for the one titled 'WebCenter Content: Database Searching and Indexing'. One of the items he led with...and concluded with...was a recommendation on optimizing your search collection if you are using full-text searching with the Oracle database.  This can greatly improve your search performance.  And this would apply to both Oracle Text Search and DATABASE.FULLTEXT search methods.  Peter describes how a collection can become fragmented over time as content is added, updated, and deleted.  Just like you should defragment your hard drive from time to time to get your files placed on the disk in the most optimal way, you should do the same for the search collection. And optimizing the collection is just a simple procedure call that can be scheduled to run automatically:

        begin
          ctx_ddl.optimize_index('FT_IDCTEXT1', 'FULL', parallel_degree => '1');
        end;

    When I checked my own test instance, I found my collection had a row fragmentation of about 80%. After running the optimization procedure, it went down to 0%. The knowledge base article On Index Fragmentation and Optimization When Using OracleTextSearch or DATABASE.FULLTEXT [ID 1087777.1] goes into detail on how to check your current index fragmentation, how to run the procedure, and then how to schedule the procedure to run automatically.  While the article mentions scheduling the job weekly, Peter says he now recommends this be run daily, especially on more active systems. And just as a reminder, be sure to involve your DBA with your WebCenter Content implementation as you go to production and over time.  We recently had a customer complain of slow application performance, and it was discovered that the database was starving for memory.  So it's always helpful to keep a watchful eye on your database.
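    If you would rather kick off that optimization from a .NET utility instead of a scheduled database job, a minimal ODP.NET sketch might look like the following. This is only an illustration and not part of the article: the connection string is a placeholder, the index name is the one used above, and the account you connect with would still need the privileges to call CTX_DDL.

        using Oracle.ManagedDataAccess.Client;

        class OptimizeFullTextIndex
        {
            static void Main()
            {
                // Placeholder connection string; point it at an account that is
                // allowed to call CTX_DDL.OPTIMIZE_INDEX for this index.
                const string connectionString = "User Id=ocs;Password=secret;Data Source=orcl";

                using (var connection = new OracleConnection(connectionString))
                using (var command = connection.CreateCommand())
                {
                    connection.Open();

                    // Same call as the PL/SQL block above, wrapped in an anonymous block
                    // with the index name supplied as a bind variable.
                    command.CommandText =
                        "begin ctx_ddl.optimize_index(:idx, 'FULL', parallel_degree => '1'); end;";
                    command.Parameters.Add(new OracleParameter("idx", "FT_IDCTEXT1"));
                    command.ExecuteNonQuery();
                }
            }
        }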

    Read the article

  • Sudo and startup script

    - by Pitto
    Hello my friends. I have a new Asus 1215N and I need to type some commands to enable multitouch. No problem: I've made a script. Since this netbook also needs manual activation of the wifi driver, the complete script is:

        #!/bin/bash
        #
        # list of synaptics device properties http://www.x.org/archive/X11R7.5/doc/man/man4/synaptics.4.html#sect4
        # list current synaptics device properties: xinput list-props '"SynPS/2 Synaptics TouchPad"'
        # sleep 5 #added delay...
        xinput set-int-prop "SynPS/2 Synaptics TouchPad" "Device Enabled" 8 1
        xinput --set-prop --type=int --format=32 "SynPS/2 Synaptics TouchPad" "Synaptics Two-Finger Pressure" 4
        xinput --set-prop --type=int --format=32 "SynPS/2 Synaptics TouchPad" "Synaptics Two-Finger Width" 9 # Below width 1 finger touch, above width simulate 2 finger touch. - value=pad-pixels
        xinput --set-prop --type=int --format=8 "SynPS/2 Synaptics TouchPad" "Synaptics Edge Scrolling" 1 1 0 # vertical, horizontal, corner - values: 0=disable 1=enable
        xinput --set-prop --type=int --format=32 "SynPS/2 Synaptics TouchPad" "Synaptics Jumpy Cursor Threshold" 250 # stabilize 2 finger actions - value=pad-pixels
        #xinput --set-prop --type=int --format=8 "SynPS/2 Synaptics TouchPad" "Synaptics Tap Action" 0 0 0 0 1 2 3 # pad corners rt rb lt lb tap fingers 1 2 3 (can't simulate more than 2 tap fingers AFAIK) - values: 0=disable 1=left 2=middle 3=right etc. (in FF 8=back 9=forward)
        xinput --set-prop --type=int --format=8 "SynPS/2 Synaptics TouchPad" "Synaptics Two-Finger Scrolling" 1 0 # vertical scrolling, horizontal scrolling - values: 0=disable 1=enable
        #xinput --set-prop --type=int --format=8 "SynPS/2 Synaptics TouchPad" "Synaptics Circular Scrolling" 1
        #xinput --set-prop --type=int --format=8 "SynPS/2 Synaptics TouchPad" "Synaptics Circular Scrolling Trigger" 3
        sudo modprobe lib80211
        sudo insmod /home/pitto/Drivers/broadcom/wl.ko
        exit

    I saved the script, put it in my home directory, ran chmod +x scriptname, and then added it to the startup applications. Then I ran sudo visudo and added this row:

        myusername ALL=(ALL) NOPASSWD: /home/scriptname

    I rebooted and... multitouch works but wifi does not. When I manually launch the script it asks for the sudo password, so I thought it was because of the modprobe and insmod commands and I added those commands to sudo visudo as well. Nothing. What am I doing wrong?

    Read the article

  • Using Microsoft benefits to kickstart your own development

    - by douglasscott
    Working for a big company I enjoy all the Microsoft tools I can consume. I also have the infrastructure to support my development and team communication. I recently helped form a small consulting team that requires the same type of resources. That is when the realization of the true cost of Microsoft's professional development tools really hit me. Okay, I'll just bite the bullet and get what I'm used to working with to do high quality development projects.  After just a few minutes of looking at street prices and doing some quick math I began to have a realization...doing this right isn't cheap! Luckily there is help.  If you are willing to get your ducks in a row and do a little documentation, Microsoft will give you some developer manna. I went to the BizSpark site and completed the application, which describes your company profile and the services you offer.  The approval process took about a week.  Voila, a Visual Studio Ultimate with MSDN subscription! As a start-up, Office 365 can be a great solution for all your team communications.  I also enrolled in the Microsoft Cloud Essentials program as part of a business track.  Once you meet the Cloud Essentials requirements you will receive 250 Office 365 licenses! This includes Office and hosted Exchange, Lync, and SharePoint. Take advantage of what Microsoft has to offer for your start-up.  It just may surprise you and save you a lot of your start-up budget.

    Read the article

  • Book Review - Programming Windows Azure by Sriram Krishnan

    - by BuckWoody
    As part of my professional development, I’ve created a list of books to read throughout the year, starting in June of 2011. This is a review of the first one, called Programming Windows Azure by Sriram Krishnan. You can find my entire list of books I’m reading for my career here: http://blogs.msdn.com/b/buckwoody/archive/2011/06/07/head-in-the-clouds-eyes-on-the-books.aspx  Why I Chose This Book: As part of my learning style, I try to read multiple books about a single subject. I’ve found that at least 3 books are necessary to get the right amount of information to me. This is a “technical” work, meaning that it deals with technology and not business, writing or other facets of my career. I’ll have a mix of all of those as I read along. I chose this work in addition to others I’ve read since it covers everything from an introduction to more advanced topics in a single book. It also has some practical examples of actually working with the product, particularly on storage. Although it’s dated, many examples still translate. I also saw that it had pretty good reviews. What I learned: I learned a great deal about storage, and picked up many useful code snippets. I do think that there could have been more of a focus on the application fabric - but of course that wasn’t as mature a feature when this book was written. I learned from some great architecture examples, and in one section I learned more about encryption. In that example, however, I would rather have seen the examples go the other way - the book focused on moving data from on-premise to Azure storage in an encrypted fashion. Using the Application Fabric I would rather see sensitive data left in a hybrid fashion on premise, with the Azure application connecting to it. Even so, the examples were very useful. If you’re looking for a good “starter” Azure book, this is a good choice. I also recommend the last chapter as a quick read for a DBA, or Database Administrator. It’s not very long, but useful. Note that the limits described are incorrect - which is one of the dangers of reading a book about any cloud offering. The services offered are updated so quickly that the information is in constant danger of being “stale”. Even so, I found this a useful book, which I believe will help me work with Azure better. Raw Notes: I take notes as I read, calling that process “reading with a pencil”. I find that when I do that I pay attention better, and record some things that I need to know later. I’ll take these notes, categorize them into a OneNote notebook that I synchronize in my Live.com account, and that way I can search them from anywhere. I can even read them on the web, since Live.com has a OneNote program built in. Note that these are the raw notes, so they might not make a lot of sense out of context - I include them here so you can watch my thought process. Programming Windows Azure by Sriram Krishnan: Learning about how to select applications suitable for Distributed Technology. Application Fabric gets the least attention; probably because it was newer at the time. Very clear (Chapter One) Good foundation Background and history, but not too much I normally arrange my descriptions differently, starting with the use-cases and moving to physicality, but this difference helps me. Interesting that I am reading this using Safari Books Online, which uses many of these concepts. 
Taught me some new aspects of a Hypervisor – very low-level information about the Azure Fabric (not to be confused with the Application Fabric feature) (Chapter Two) Good detail of what is included in the SDK. Even more is available now. CS = Cloud Service (Chapter 3) Place Storage info in the configuration file, since it can be streamed in-line with a running app. Ditto for logging, and keep separated configs for staging and testing. Easy-switch in and switch out.  (Chapter 4) There are two Runtime API’s, one of external and one for internal. Realizing how powerful this paradigm really is. Some places seem light, and to drop off but perhaps that’s best. Managing API is not charged, which is nice. I don’t often think about the price, until it comes to an actual deployment (Chapter 5) Csmanage is something I want to dig into deeper. API requires package moves to Blob storage first, so it needs a URL. Csmanage equivalent can be written in Unix scripting using openssl. Upgrades are possible, and you use the upgradeDomainCount attribute in the Service-Definition.csdef file  Always use a low-privileged account to test on the dev fabric, since Windows Azure runs in partial trust. Full trust is available, but can be dangerous and must be well-thought out. (Chapter 6) Learned how to run full CMD commands in a web window – not that you would ever do that, but it was an interesting view into those links. This leads to a discussion on hosting other runtimes (such as Java or PHP) in Windows Azure. I got an expanded view on this process, although this is where the book shows its age a little. Books can be a problem for Cloud Computing for this reason – things just change too quickly. Windows Azure storage is not eventually consistent – it is instantly consistent with multi-phase commit. Plumbing for this is internal, not required to code that. (Chapter 7) REST API makes the service interoperable, hybrid, and consistent across code architectures. Nicely done. Use affinity groups to keep data and code together. Side note: e-book readers need a common “notes” feature. There’s a decent quick description of REST in this chapter. Learned about CloudDrive code – PowerShell sample that mounts Blob storage as a local provider. Works against Dev fabric by default, can be switched to Account. Good treatment in the storage chapters on the differences between using Dev storage and Azure storage. These can be mitigated. No, blobs are not of any size or number. Not a good statement (Chapter 8) Blob storage is probably Azure’s closest play to Infrastructure as a Service (Iaas). Blob change operations must be authenticated, even when public. Chapters on storage are pretty in-depth. Queue Messages are base-64 encoded (Chapter 9) The visibility timeout ensures processing of message in a disconnected system. Order is not guaranteed for a message, so if you need that set an increasing number in the queue mechanism. While Queues are accessible via REST, they are not public and are secured by default. Interesting – the header for a queue request includes an estimated count. This can be useful to create more worker roles in a dynamic system. Each Entity (row) in the Azure Table service is atomic – all or nothing. (Chapter 10) An entity can have up to 255 Properties  Use “ID” for the class to indicate the key value, or use the [DataServiceKey] Attribute.  LINQ makes working with the Azure Table Service much easier, although Interop is certainly possible. Good description on the process of selecting the Partition and Row Key.  
When checking for continuation tokens for pagination, include logic that falls out of the check in case you are at the last page.  On deleting a storage object, it is instantly unavailable, however a background process is dispatched to perform the physical deletion. So if you want to re-create a storage object with the same name, add retry logic into the code. Interesting approach to deleting an index entity without having to read it first – create a local entity with the same keys and apply it to the Azure system regardless of change-state.  Although the “Indexes” description is a little vague, it’s interesting to see a Folding and Stemming discussion a-la the Porter Stemming Algorithm. (Chapter 11)  Presents a better discussion of indexes (at least inverted indexes) later in the chapter. Great treatment for DBA’s in Chapter 11. We need to work on getting secondary indexes in Table storage. There is a limited form of transactions called “Entity Group Transactions” that, although they have conditions, makes a transactional system more possible. Concurrency also becomes an issue, but is handled well if you’re using Data Services in .NET. It watches the Etag and allows you to take action appropriately. I do not recommend using Azure as a location for secure backups. In fact, I would rather have seen the examples in (Chapter 12) go the other way, showing how data could be brought back to a local store as a DR or HA strategy. Good information on cryptography and so on even so. Chapter seems out of place, and should be combined with the Blob chapter.  (Chapter 13) on SQL Azure is dated, although the base concepts are OK.  Nice example of simple ADO.NET access to a SQL Azure (or any SQL Server Really) database.  
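    One of the raw notes above (Chapter 9) mentions that queue messages are base-64 encoded. As a tiny, framework-free illustration of what that means for the payload you put on and take off the wire, here is a sketch that uses only the BCL and contains nothing Azure-specific; the message text is made up.

        using System;
        using System.Text;

        class QueueMessageEncodingSketch
        {
            static void Main()
            {
                // The text you enqueue...
                const string payload = "process-order:12345";

                // ...travels as base-64 on the wire...
                string encoded = Convert.ToBase64String(Encoding.UTF8.GetBytes(payload));

                // ...and is decoded back to the original text when dequeued.
                string decoded = Encoding.UTF8.GetString(Convert.FromBase64String(encoded));

                Console.WriteLine(encoded);
                Console.WriteLine(decoded);  // prints "process-order:12345"
            }
        }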

    Read the article

  • A Look at the GridView's New Sorting Styles in ASP.NET 4.0

    Like every Web control in the ASP.NET toolbox, the GridView includes a variety of style-related properties, including CssClass, Font, ForeColor, BackColor, Width, Height, and so on. The GridView also includes style properties that apply to certain classes of rows in the grid, such as RowStyle, AlternatingRowStyle, HeaderStyle, and PagerStyle. Each of these meta-style properties offers the standard style properties (CssClass, Font, etc.) as subproperties. In ASP.NET 4.0, Microsoft added four new style properties to the GridView control: SortedAscendingHeaderStyle, SortedAscendingCellStyle, SortedDescendingHeaderStyle, and SortedDescendingCellStyle. These four properties are meta-style properties like RowStyle and HeaderStyle, but apply to a column of cells rather than a row. These properties only apply when the GridView is sorted - if the grid's data is sorted in ascending order then the SortedAscendingHeaderStyle and SortedAscendingCellStyle properties define the styles for the column the data is sorted by. The SortedDescendingHeaderStyle and SortedDescendingCellStyle properties apply to the sorted column when the results are sorted in descending order. These four new properties make it easier to customize the appearance of the column by which the data is sorted. Using these properties along with a touch of Cascading Style Sheets (CSS), it is possible to add up and down arrows to the sorted column's header to indicate whether the data is sorted in ascending or descending order. Likewise, these properties can be used to shade the sorted column or make its text bold. This article shows how to use these four new properties to style the sorted column. Read on to learn more! Read More >
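    To make the description above concrete, here is a minimal code-behind sketch (not taken from the article) that assigns CSS classes to the four new properties. The helper name and the CSS class names are assumptions; the style properties themselves are the ones the article describes, and the same values could just as easily be set declaratively in the markup.

        using System.Web.UI.WebControls;

        public static class GridSortingStyles
        {
            // Call this from Page_Load for a GridView that has AllowSorting="true".
            // The CSS class names are placeholders; define them in your stylesheet.
            public static void Apply(GridView grid)
            {
                // Styles used when the grid is sorted in ascending order...
                grid.SortedAscendingHeaderStyle.CssClass = "sortAscHeader";
                grid.SortedAscendingCellStyle.CssClass = "sortAscCell";

                // ...and when it is sorted in descending order.
                grid.SortedDescendingHeaderStyle.CssClass = "sortDescHeader";
                grid.SortedDescendingCellStyle.CssClass = "sortDescCell";
            }
        }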

    Read the article

  • Watermark TextBox for Windows Phone

    - by Daniel Moth
    In my Translator by Moth app, in the textbox where the user enters a translation I show a "prompt" for the user that goes away when they tap on it to enter text (and returns if the textbox remains/becomes empty). See screenshot on the right (or download the free app to experience it). Back in June 2006 I had shown how to achieve this for Windows Vista (TextBox prompt), and a month later implemented a pure managed version for both desktop and Windows Mobile: TextBox with Cue Banner. So when I encountered the same need for my WP7 app, the path of least resistance for me was to convert my existing code to work for the phone. Usage: Instead of TextBox, in your xaml use TextBoxWithPrompt. Set the TextPrompt property to the text that you want the user to be prompted with. Use the MyText property to get/set the actual entered text (never use the Text property). Optionally, via properties change the default centered alignment and italic font, for the prompt text. It is that simple! You can grab my class here: TextBoxWithPrompt.cs Note, that there are many alternative (probably better) xaml-based solutions, so search around for those. Like I said, since I had solved this before, it was easier for my scenario to re-use my implementation – this does not represent best practice :-) Comments about this post welcome at the original blog.
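    For readers who just want the general idea without downloading the class, here is a rough, simplified C# sketch of a cue-banner style TextBox. This is not Daniel's implementation (his file is linked above); the property names mirror the ones he describes (TextPrompt, MyText), a real control would expose them as dependency properties for XAML binding, and the prompt styling (centered, italic) is omitted.

        using System.Windows;
        using System.Windows.Controls;

        public class TextBoxWithPromptSketch : TextBox
        {
            private bool _hasUserText;

            // Prompt shown while the box holds no user input.
            public string TextPrompt { get; set; }

            // Returns only real user input, never the prompt text.
            public string MyText
            {
                get { return _hasUserText ? Text : string.Empty; }
                set
                {
                    _hasUserText = !string.IsNullOrEmpty(value);
                    Text = _hasUserText ? value : (TextPrompt ?? string.Empty);
                }
            }

            public TextBoxWithPromptSketch()
            {
                // Show the prompt initially if the box is empty.
                Loaded += (s, e) => { if (!_hasUserText) Text = TextPrompt ?? string.Empty; };

                // Clear the prompt when the user taps into the box...
                GotFocus += (s, e) => { if (!_hasUserText) Text = string.Empty; };

                // ...and bring it back if they leave the box empty.
                LostFocus += (s, e) =>
                {
                    _hasUserText = Text.Length > 0;
                    if (!_hasUserText) Text = TextPrompt ?? string.Empty;
                };
            }
        }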

    Read the article

  • How to negotiate with software vendors who do not follow HL7 standards

    - by Peter Turner
    Take, for instance, the "". I'd hope that anyone who has spent any time dealing with HL7 messages knows that "" signifies that something should be deleted. "" is not an empty string, and it's not filler, etc... But occasionally, one may meet a vendor who persists in sending "" instead of just sending nothing at all. Since I work for a small business and have an extremely flexible HL7 interface, I can ignore ""'s in received messages. But these things are adding up. Some vendors like to send custom-formatted fields with pseudo-components that they leave others to interpret themselves. Some vendors send all their information in note segments and assume you're going to only show users the information they send in a monospace font. Some vendors even have the audacity to send Carriage Return Line Feeds at the end of each line of a file interface. Some vendors absolutely refuse to send decimal numbers and in so doing refuse to send any numbers. So, with all this crippling humanity against the simple plastic software man, how does one bend without breaking*? Or better yet, how does one fight back and still make money? *my answer is usually to create an interface for the interface and keep the HL7 processing pure, but I don't think this is the best solution
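    As a small illustration of the distinction being drawn here, the C# sketch below shows one way to interpret an incoming HL7 field value. The type and method names are made up for this example; the point is simply that the two-character value "" (explicitly delete the existing value) is not the same thing as an empty field (leave the existing value alone).

        public enum FieldAction
        {
            LeaveUnchanged,  // field was empty: don't touch the stored value
            DeleteValue,     // field was the literal "" : explicitly clear the stored value
            SetValue         // field carried real data
        }

        public static class Hl7FieldInterpreter
        {
            public static FieldAction Interpret(string rawField, out string newValue)
            {
                newValue = null;

                if (string.IsNullOrEmpty(rawField))
                    return FieldAction.LeaveUnchanged;

                if (rawField == "\"\"")
                    return FieldAction.DeleteValue;

                newValue = rawField;
                return FieldAction.SetValue;
            }
        }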

    Read the article

  • Storing Entity Framework Entities in a Separate Assembly

    - by Anthony Trudeau
    The Entity Framework has been valuable to me since it came out, because it provided a convenient and powerful way to model against my data source in a consistent way.  The first versions had some deficiencies that for me mostly fell in the category of the tight coupling between the model and its resulting object classes (entities). Version 4 of the Entity Framework pretty much solves this with the support of T4 templates that allow you to implement your entities as self-tracking entities, plain old CLR objects (POCO), et al.  Doing this involves either specifying a new code generation template or implementing them yourselves.  Visual Studio 2010 ships with a self-tracking entities template and a POCO template is available from the Extension Manager.  (Extension Manager is very nice but it's very easy to waste a bunch of time exploring add-ins.  You've been warned.) In a current project I wanted to use POCO; however, I didn't want my entities in the same assembly as the context classes.  It would be nice if this was automatic, but since it isn't here are the simple steps to move them.  These steps detail moving the entity classes and not the context.  The context can be moved in the same way, but I don't see a compelling reason to physically separate the context from my model. Turn off code generation for the template.  To do this set the Custom Tool property for the entity template file to an empty string (the entity template file will be named something like MyModel.tt). Expand the tree for the entity template file and delete all of its items.  These are the items that were automatically generated when you added the template. Create a project for your entities (if you haven't already). Add an existing item and browse to your entity template file, but add it as a link (do not add it directly).  Adding it as a link will allow the model and the template to stay in sync, but the code generation will occur in the new assembly.
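    To picture the end result of these steps, here is a minimal sketch of what the two assemblies might contain once the POCO template generates into the entity project. All of the names are hypothetical and the classes are hand-written stand-ins for what the T4 templates would actually emit; the point is only that the entity class has no Entity Framework dependency, while the context (left in the model assembly) references it.

        // MyApp.Entities assembly: a plain POCO entity, no reference to System.Data.Entity.
        namespace MyApp.Entities
        {
            public class Customer
            {
                public int Id { get; set; }
                public string Name { get; set; }
            }
        }

        // MyApp.Model assembly: references MyApp.Entities and System.Data.Entity (EF 4).
        namespace MyApp.Model
        {
            using System.Data.Objects;
            using MyApp.Entities;

            public class MyModelContext : ObjectContext
            {
                public MyModelContext(string connectionString)
                    : base(connectionString, "MyModelContainer")
                {
                    Customers = CreateObjectSet<Customer>();
                }

                public ObjectSet<Customer> Customers { get; private set; }
            }
        }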

    Read the article

  • Logparser and Powershell

    - by Michel Klomp
    Logparser in PowerShell. One of the few examples of how to use Logparser in PowerShell is from the Microsoft.com Operations blog. This script is a good base for creating more advanced Logparser scripts:

        $myQuery = new-object -com MSUtil.LogQuery
        $szQuery = "Select top 10 * from r:\ex07011210.log";
        $recordSet = $myQuery.Execute($szQuery)
        for(; !$recordSet.atEnd(); $recordSet.moveNext())
        {
            $record = $recordSet.getRecord();
            write-host ($record.GetValue(0) + "," + $record.GetValue(1));
        }
        $recordSet.Close();

    Logparser input formats. The previous example uses the default Logparser object; you can extend this with the Logparser input formats. With these formats you can get information from the event log, different types of log files, Active Directory, the registry, and XML files. Here are the different ProgIDs you can use:

        Input Format   ProgId
        ADS            MSUtil.LogQuery.ADSInputFormat
        BIN            MSUtil.LogQuery.IISBINInputFormat
        CSV            MSUtil.LogQuery.CSVInputFormat
        ETW            MSUtil.LogQuery.ETWInputFormat
        EVT            MSUtil.LogQuery.EventLogInputFormat
        FS             MSUtil.LogQuery.FileSystemInputFormat
        HTTPERR        MSUtil.LogQuery.HttpErrorInputFormat
        IIS            MSUtil.LogQuery.IISIISInputFormat
        IISODBC        MSUtil.LogQuery.IISODBCInputFormat
        IISW3C         MSUtil.LogQuery.IISW3CInputFormat
        NCSA           MSUtil.LogQuery.IISNCSAInputFormat
        NETMON         MSUtil.LogQuery.NetMonInputFormat
        REG            MSUtil.LogQuery.RegistryInputFormat
        TEXTLINE       MSUtil.LogQuery.TextLineInputFormat
        TEXTWORD       MSUtil.LogQuery.TextWordInputFormat
        TSV            MSUtil.LogQuery.TSVInputFormat
        URLSCAN        MSUtil.LogQuery.URLScanLogInputFormat
        W3C            MSUtil.LogQuery.W3CInputFormat
        XML            MSUtil.LogQuery.XMLInputFormat

    Using Logparser to parse IIS logs. If you use the IISW3CInputFormat you can use the field names instead of the numeric index to get the information from an IIS log file; it also skips the comment rows in the log file.

        $ObjLogparser = new-object -com MSUtil.LogQuery
        $objInputFormat = new-object -com MSUtil.LogQuery.IISW3CInputFormat
        $Query = "Select top 10 * from c:\temp\hb\ex071002.log";
        $recordSet = $ObjLogparser.Execute($Query, $objInputFormat)
        for(; !$recordSet.atEnd(); $recordSet.moveNext())
        {
            $record = $recordSet.getRecord();
            write-host ($record.GetValue("s-ip") + "," + $record.GetValue("cs-uri-query"));
        }
        $recordSet.Close();
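    The same COM objects can be driven from C# if you happen to be working in .NET rather than PowerShell. The sketch below is an illustration only: the log path is a placeholder, Log Parser 2.2 must be installed so the MSUtil ProgIDs are registered, and it simply mirrors the IISW3C example above using late binding via dynamic.

        using System;

        class LogParserFromCSharp
        {
            static void Main()
            {
                // ProgIDs from the table above; requires Log Parser 2.2 to be installed.
                var queryType = Type.GetTypeFromProgID("MSUtil.LogQuery");
                var formatType = Type.GetTypeFromProgID("MSUtil.LogQuery.IISW3CInputFormat");

                dynamic logQuery = Activator.CreateInstance(queryType);
                dynamic inputFormat = Activator.CreateInstance(formatType);

                // Placeholder log file path.
                dynamic recordSet = logQuery.Execute(
                    "Select top 10 * from c:\\temp\\hb\\ex071002.log", inputFormat);

                while (!recordSet.atEnd())
                {
                    dynamic record = recordSet.getRecord();
                    Console.WriteLine("{0},{1}",
                        record.GetValue("s-ip"), record.GetValue("cs-uri-query"));
                    recordSet.moveNext();
                }

                recordSet.Close();
            }
        }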

    Read the article

  • DIVIDE vs division operator in #dax

    - by Marco Russo (SQLBI)
    Alberto Ferrari wrote an interesting article about DIVIDE performance in DAX. This new function was introduced in SQL Server Analysis Services 2012 SP1, so it is also available in Excel 2013 (which still doesn’t have other features/fixes introduced by the following Cumulative Updates…). The idea is that instead of writing:

        IF ( Sales[Quantity] <> 0, Sales[Amount] / Sales[Quantity], BLANK () )

    you can write:

        DIVIDE ( Sales[Amount], Sales[Quantity] )

    There is a third optional argument in DIVIDE that defines the result in case the denominator (second argument) is zero; by default its value is BLANK, so I omitted the third argument in my example. Using DIVIDE is very important, especially when you use a measure in MDX (for example in an Excel PivotTable), because it raises the chance that the non empty evaluation for the result is evaluated in bulk mode instead of cell-by-cell. However, from a DAX point of view, you might find it’s better to use the standard division operator, removing the IF statement. I suggest you read Alberto’s article, because you will find that an expression applying a filter using FILTER is faster than using CALCULATE, which is against any rule of thumb you might have read until now! Again, this is not always true, and depends on many conditions – trying to simplify, we might say that for a simple calculation the query plan generated by FILTER could be more efficient – but, as usual, it depends, and 90% of the time using FILTER instead of CALCULATE produces slower performance. Do not take anything for granted, and always check the query plan when performance is your first concern!

    Read the article

  • A Simple Approach For Presenting With Code Samples

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2013/07/31/a-simple-approach-for-presenting-with-code-samples.aspxI’ve been getting ready for a presentation and have been struggling a bit with the best way to show and execute code samples. I don’t present often (hardly ever), but when I do I like the presentation to have a lot of succinct and executable code snippets to help illustrate the points that I’m making. Depending on what the presentation is about, I might just want to build an entire sample application that I would run during the presentation. In other cases, however, building a full-blown application might not really be the best way to present the code. The presentation I’m working on now is for an open source utility library for dealing with dates and times. I could have probably cooked up a sample app for accepting date and time input and then contrived ways in which it could put the library through its paces, but I had trouble coming up with one app that would illustrate all of the various features of the library that I wanted to highlight. I finally decided that what I really needed was an approach that met the following criteria: Simple: I didn’t want the user interface or overall architecture of a sample application to serve as a distraction from the demonstration of the syntax of the library that the presentation is about. I want to be able to present small bits of code that are focused on accomplishing a single task. Several of these examples will look similar, and that’s OK. I want each sample to “stand on its own” and not rely much on external classes or methods (other than the library that is being presented, of course). “Debuggable” (not really a word, I know): I want to be able to easily run the sample with the debugger attached in Visual Studio should I want to step through any bits of code and show what certain values might be at run time. As far as I know this rules out something like LinqPad, though using LinqPad to present code samples like this is actually a very interesting idea that I might explore another time. Flexible and Selectable: I’m going to have lots of code samples to show, and I want to be able to just package them all up into a single project or module and have an easy way to just run the sample that I want on-demand. Since I’m presenting on a .NET framework library, one of the simplest ways in which I could execute some code samples would be to just create a Console application and use Console.WriteLine to output the pertinent info at run time. This gives me a “no frills” harness from which to run my code samples, and I just hit ‘F5’ to run it with the debugger. This satisfies numbers 1 and 2 from my list of criteria above, but item 3 is a little harder. By default, just running a console application is going to execute the ‘main’ method, and then terminate the program after all code is executed. If I want to have several different code samples and run them one at a time, it would be cumbersome to keep swapping the code I want in and out of the ‘main’ method of the console application. What I really want is an easy way to keep the console app running throughout the whole presentation and just have it run the samples I want when I want. I could setup a simple Windows Forms or WPF desktop application with buttons for the different samples, but then I’m getting away from my first criteria of keeping things as simple as possible. 
    Infinite Loops To The Rescue I found a way to have a simple console application satisfy all three of my requirements above, and it involves using an infinite loop and some Console.ReadLine calls that will give the user an opportunity to break out and exit the program. (All programs that need to run until they are closed explicitly (or crash!) likely use similar constructs behind the scenes. Create a new Windows Forms project, look in the ‘Program.cs’ that gets generated, and then check out the docs for the Application.Run method that it calls.) Here’s how the main method might look:

        static void Main(string[] args)
        {
            do
            {
                Console.Write("Enter command or 'exit' to quit: > ");
                var command = Console.ReadLine();
                if ((command ?? string.Empty).Equals("exit", StringComparison.OrdinalIgnoreCase))
                {
                    Console.WriteLine("Quitting.");
                    break;
                }

            } while (true);
        }

    The idea here is the app prompts me for the command I want to run, or I can type in ‘exit’ to break out of the loop and let the application close. The only trick now is to create a set of commands that map to each of the code samples that I’m going to want to run. Each sample is already encapsulated in a single public method in a separate class, so I could just write a big switch statement or create a hashtable/dictionary that maps command text to an Action that will invoke the proper method, but why re-invent the wheel? CLAP For Your Own Presentation I’ve blogged about the CLAP library before, and it turns out that it’s a great fit for satisfying criteria #3 from my list above. CLAP lets you decorate methods in a class with an attribute and then easily invoke those methods from within a console application. CLAP was designed to take the arguments passed into the console app from the command line and parse them to determine which method to run and what arguments to pass to that method, but there’s no reason you can’t re-purpose it to accept command input from within the infinite loop defined above and invoke the corresponding method. Here’s how you might define a couple of different methods to contain two different code samples that you want to run during your presentation:

        public static class CodeSamples
        {
            [Verb(Aliases="one")]
            public static void SampleOne()
            {
                Console.WriteLine("This is sample 1");
            }

            [Verb(Aliases="two")]
            public static void SampleTwo()
            {
                Console.WriteLine("This is sample 2");
            }
        }

    A couple of things to note about the sample above:
    - I’m using static methods. You don’t actually need to use static methods with CLAP, but the syntax ends up being a bit simpler and static methods happen to lend themselves well to the “one self-contained method per code sample” approach that I want to use.
    - The methods are decorated with a ‘Verb’ attribute. This tells CLAP that they are eligible targets for commands. The “Aliases” argument lets me give them short and easy-to-remember aliases that can be used to invoke them. By default, CLAP just uses the full method name as the command name, but with aliases you can simplify the usage a bit.
    - I’m not using any parameters. CLAP’s main feature is its ability to parse out arguments from a command line invocation of a console application and automatically pass them in as parameters to the target methods. My code samples don’t need parameters, and honestly having them would complicate giving the presentation, so this is a good thing. 
    You could use this same approach to invoke methods with parameters, but you’d have a couple of things to figure out. When you invoke a .NET application from the command line, Windows will parse the arguments and pass them in as a string array (called ‘args’ in the boilerplate console project Program.cs). The parsing that gets done here is smart enough to deal with things like treating strings in double quotes as one argument, and you’d have to re-create that within your infinite loop if you wanted to use parameters. I plan on either submitting a pull request to CLAP to add this capability or maybe just making a small utility class/extension method to do it and posting that here in the future. So I now have a simple class with static methods to contain my code samples, and an infinite loop in my ‘main’ method that can accept text commands. Wiring this all up together is pretty easy:

        static void Main(string[] args)
        {
            do
            {
                try
                {
                    Console.Write("Enter command or 'exit' to quit: > ");
                    var command = Console.ReadLine();
                    if ((command ?? string.Empty).Equals("exit", StringComparison.OrdinalIgnoreCase))
                    {
                        Console.WriteLine("Quitting.");
                        break;
                    }

                    Parser.Run<CodeSamples>(new[] { command });
                    Console.WriteLine("---------------------------------------------------------");
                }
                catch (Exception ex)
                {
                    Console.Error.WriteLine("Error: " + ex.Message);
                }

            } while (true);
        }

    Note that I’m now passing the ‘CodeSamples’ class into the CLAP ‘Parser.Run’ as a type argument. This tells CLAP to inspect that class for methods that might be able to handle the commands passed in. I’m also throwing in a little “----“ style line separator and some basic error handling (because I happen to know that some of the samples are going to throw exceptions for demonstration purposes) and I’m good to go. Now during my presentation I can just have the console application running the whole time with the debugger attached and just type in the alias of the code sample method that I want to run when I want to run it.
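    As a follow-on to the parameter-parsing caveat above, here is a rough sketch of the kind of small utility the author mentions: splitting the typed command into arguments while treating double-quoted strings as single tokens. It is not part of CLAP, the class name is made up, and it ignores edge cases such as escaped quotes.

        using System.Collections.Generic;
        using System.Text;

        public static class CommandLineSplitter
        {
            // Splits 'one "some quoted value" three' into three tokens.
            public static string[] Split(string input)
            {
                var tokens = new List<string>();
                var current = new StringBuilder();
                var inQuotes = false;

                foreach (var c in input ?? string.Empty)
                {
                    if (c == '"')
                    {
                        inQuotes = !inQuotes;   // toggle quoted mode, drop the quote itself
                    }
                    else if (char.IsWhiteSpace(c) && !inQuotes)
                    {
                        if (current.Length > 0)
                        {
                            tokens.Add(current.ToString());
                            current.Clear();
                        }
                    }
                    else
                    {
                        current.Append(c);
                    }
                }

                if (current.Length > 0)
                {
                    tokens.Add(current.ToString());
                }

                return tokens.ToArray();
            }
        }

    Inside the loop you would then call Parser.Run<CodeSamples>(CommandLineSplitter.Split(command)) instead of wrapping the raw command in a single-element array.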

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 20 (sys.dm_tran_locks)

    - by Tamarick Hill
    The sys.dm_tran_locks DMV is used to return active lock resources on your server. Locking is a mechanism used by SQL Server to protect the integrity of data when you have multiple users that may potentially access the same data at the same time. Let’s run a query against this DMV so we can analyze the results.

        SELECT * FROM sys.dm_tran_locks

    As you can see, a lot of lock information is returned by this DMV. I will not go into detail about each of the columns returned, but I will touch on the ones that I feel are the most important. The first column in the output is the resource_type column, which tells you the type of lock a particular row represents. It could be a PAGE lock, RID, OBJECT, DATABASE, or several other lock types. The resource_database_id represents the id of the database for a particular lock resource. The resource_lock_partition column represents the ID of a lock partition. When you have a table that is partitioned, locks can be escalated to the partition level before going to a table-level lock. The request_mode column gives us information about the type of lock that is being requested. From the screenshots above we see RangeS-S locks, which represent shared range locks, and IS locks, which represent intent shared locks. The request_status column displays whether the lock has been granted or whether the lock is waiting to be acquired. The request_session_id shows the session_id that is requesting the lock. This DMV is the best place to go when you need to identify the exact locks that are being held or pending for individual requests. You might need this information when you are troubleshooting severe blocking or deadlocking problems on your server. For more information on this DMV, please see the Books Online link below: http://msdn.microsoft.com/en-us/library/ms190345.aspx Follow me on Twitter @PrimeTimeDBA
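    If you want to capture the same information from application code rather than from SSMS, a minimal C# sketch is below. The connection string is a placeholder; the columns selected are the ones discussed above.

        using System;
        using System.Data.SqlClient;

        class TranLocksSnapshot
        {
            static void Main()
            {
                // Placeholder connection string; point it at the server you are investigating.
                const string connectionString = "Server=.;Database=master;Integrated Security=true";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "SELECT resource_type, resource_database_id, request_mode, " +
                    "request_status, request_session_id FROM sys.dm_tran_locks", connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("{0} | db {1} | {2} | {3} | session {4}",
                                reader["resource_type"], reader["resource_database_id"],
                                reader["request_mode"], reader["request_status"],
                                reader["request_session_id"]);
                        }
                    }
                }
            }
        }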

    Read the article

  • Weblogic domain scale up using EM Grid Control 11gR1

    - by dmitry.nefedkin(at)oracle.com
    As you know a weblogic domain consists of set of servers running independently or in a cluster mode, sharing the distributed resources. And in most environments weblogic  cluster consists of multiple managed servers running simultaneously and working together to provide increased scalability and reliability.  These servers can run on the same machine, or be located on different machines.  It's a common task to increase a cluster's capacity by adding new machines to the cluster to host the new server instances.  You can do it by manually installing weblogic binaries to the new host and use pack/unpack commands to add a managed server to this new host.  But with Enterprise Manager Grid Control 11gR1 (EMGC) there is  another way - Fusion Middleware Domain Scale Up  procedure. I'm going to show you how it works.Here is a picture of  my medrec_oradb weblogic domain, what is registered in EMGC. It contains an admin server and a cluster MedRecCluster with  the single managed server MS1. Both admin and managed servers are on the same host oel46-vmware, it's a virtual machine with OEL 4.6 that runs inside our Oracle VM infrastructure.  And here are the application deployments, note that couple of applications are deployed to the cluster.First of all I have to prepare a new machine that will host new managed sever of my cluster. I created new VM with OEL 5.4 using the corresponding Oracle VM template available in Oracle E-Delivery site for Oracle Linux and Oracle VM and named it wls1032. Next step is to install Oracle EM Grid Control 11gR1 Agent to this new host.  You can download it from the OTN page and install it manually,  or you can use Agent Installation Deployment procedure available in EMGC  (Deployments->Agent Installation->Install Agent). Anyway, when you agent is up and running on the new machine, you will see it in EMGC Console in the Targets->Hosts subtab.Now we are ready to scale up our weblogic domain. Click the Deployments tab in Oracle Enterprise Manager Grid Control, and then click Deployment Procedure. Select a Fusion Middleware Domain Scale Up procedure from the list, and click Schedule Deployment. The first page of the FMW Domain Scale Up Wizard is displayed and you can proceed with the deployment process.Select the domain from list, enter the working directory on the admin server host, and also fill the weblogic credentials for the administration server console and the OS credentials for the  admin server host.  Click Next button.  The next step allows you to configure you domain, to add a new manager server to the cluster you should select the cluster in the tree and click Add Server button. Select the newly added server in a tree, choose the target host and  enter the configuration details of your managed server. You can also add new machine and node manager details.  Please note that you cannot change the values in  Domain Location and Fusion Middleware Home fields, so these locations on the target host will be the same as for the admin server host.   Working directory on the target host should have enough free space to store FMW home binaries and domain configuration files.  In my experience the working directories should have at least 3 Gb of free space.  The last thing you should fill is the OS credentials for the target host. The next steps allows you to schedule the execution of the procedure, it is started immediately in my example. The last step is just a review the configuration for the domain scale up. Click Submit to launch the process. 
    You can track the status of the procedure execution by selecting Deployments->Deployment Procedures->Procedure Completion Status in the EMGC Console. As you can see in the picture below, the procedure consists of many steps, and I'm going to share my experience about the issues that I had at some of the steps. Please keep in mind that you can always continue the execution from the last successfully completed step by clicking the Retry button. The Check OUI Prerequisites step may fail if the target host does not pass the prerequisite checks for WebLogic Server installation, such as the amount of RAM, Linux packages installed, etc. The Create FMW Clone Archive step may fail if you do not have enough free space in the working directory on the administration server host. The Transfer cloning archive to targets step may fail if the EMGC agents on the admin server host or on the target host are not secured. You should secure the agent by issuing the ./emctl secure agent command from the $AGENT_HOME/bin directory and entering the agent registration password. Both the Transfer cloning archive to targets and Apply Clone at target hosts steps may fail if you do not have enough free space in the working directory on the target host. The most complicated issue I had was on the Run Inventory Collection step. The step failed, and I noticed that the agent on the target server had also failed with the following error in the $AGENT_HOME/sysman/log/emagent.trc log file:

        2010-12-28 11:50:34,310 Thread-2838952848 ERROR upload: Failed to upload file A0000008.xml: Fatal Error.Response received: 500|ORA-20603: The timezone of the multiagent target (/Farm_Localhost_MedRec_medrec_oradb/medrec_oradb,weblogic_domain)is not consistent with the timezone (America/Los_Angeles) reported by other agents.
        2010-12-28 11:50:34,310 Thread-2838952848 ERROR upload: 1 Failure(s) in a row or XML error for A0000008.xml, retcode = -6, we give up
        2010-12-28 11:50:35,552 Thread-2838952848 WARN  upload: FxferSend: received fatal error in header from repository: https://oel46-vmware:1159/em/uploadFATAL_ERROR::500|ORA-20603: The timezone of the multiagent target (/Farm_Localhost_MedRec_medrec_oradb/medrec_oradb,weblogic_domain)is not consistent with the timezone (America/Los_Angeles) reported by other agents.
        2010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: number of fatal error exceeds the limit 3
        2010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: agent will shutdown now
        2010-12-28 11:50:35,552 Thread-2838952848 ERROR : Signalled to Exit with status 55. Too many fatal upload failures
        2010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: 1 Failure(s) in a row or XML error for A0000008.xml, retcode = -6, we give up
        2010-12-28 11:50:35,552 Thread-3044607680 ERROR main: EMAgent abnormal terminating

    I checked the timezone of my domain target inside the EMGC repository:

        select timezone_region
        from mgmt_targets
        where target_type = 'weblogic_domain'
          and display_name = 'medrec_oradb';

        TIMEZONE_REGION
        America/Los_Angeles

    Then I checked the timezone of my agents and indeed, they differed:

        select target_name, timezone_region
        from mgmt_targets
        where type_display_name = 'Agent';

        TARGET_NAME                 TIMEZONE_REGION
        oel46-vmware:3872           America/Los_Angeles
        wls1032.imc.fors.ru:3872    America/New_York

    So I had to change the timezone on the wls1032 host and propagate this change to the agent and to the EMGC repository. 
    Here were the steps:
    - issued the system-config-date command on wls1032.imc.fors.ru and set the timezone to "America/Los_Angeles"
    - propagated the change to the agent by executing the ./emctl resetTZ agent command from the $AGENT_HOME/bin directory
    - connected to the EMGC repository as sysman and executed the following PL/SQL block:

        begin
          mgmt_target.set_agent_tzrgn('wls1032.imc.fors.ru:3872','America/Los_Angeles');
          commit;
        end;

    After that I had to clear the pending uploads on wls1032.imc.fors.ru:

        rm -r $AGENT_HOME/sysman/emd/state/*
        rm -r $AGENT_HOME/sysman/emd/collection/*
        rm -r $AGENT_HOME/sysman/emd/upload/*
        rm $AGENT_HOME/sysman/emd/lastupld.xml
        rm $AGENT_HOME/sysman/emd/agntstmp.txt
        $AGENT_HOME/bin/emctl start agent
        $AGENT_HOME/bin/emctl clearstate agent

    The last part of this solution was to resync the agent in the EMGC console by clicking the Agent Resynchronization button (please leave the "Unblock agent on successful completion of agent resynchronization" checkbox checked in the next screen). After that I issued the ./emctl upload command from $AGENT_HOME/bin on the wls1032 host, and my previous error disappeared, but I caught another one: EMD upload error: Failed to upload file A0000004.xml: HTTP error.Response received: ERROR-400|Data will be rejected for upload from agent 'https://wls1032.imc.fors.ru:3872/emd/main/', max size limit for direct load exceeded [7544731/5242880] So the uploaded XML file size was 7 MB, and the limit on the OMS was 5 MB. To increase the max file size limit to 20 MB I had to connect to the OMS host and execute the following commands from the $OMS_HOME/bin directory:

        ./emctl set property -name em.loader.maxDirectLoadFileSz -value 20971520 -module emoms
        ./emctl stop oms
        ./emctl start oms

    After that I issued the ./emctl upload command from $AGENT_HOME/bin on wls1032 one more time and it completed successfully. The agent uploaded the configuration information to the EMGC repository and I was able to see the results of my WebLogic domain scale-up in the EMGC Console. So now the WebLogic cluster contains 2 managed servers located on different hosts. This powerful feature of Enterprise Manager Grid Control is a part of the WebLogic Server Management Pack Enterprise Edition.

    Read the article

  • Software Engineering Practices – Different Projects should have different maturity levels

    - by Dylan Smith
    I’ve had a lot of discussions at the office lately about the drastically different sets of software engineering practices used on our various projects, if what we are doing is appropriate, and what factors should you be considering when determining what practices are most appropriate in a given context. I wanted to write up my thoughts in a little more detail on this subject, so here we go: If you compare any two software projects (specifically comparing their codebases) you’ll often see very different levels of maturity in the software engineering practices employed. By software engineering practices, I’m specifically referring to the quality of the code and the amount of technical debt present in the project. Things such as Test Driven Development, Domain Driven Design, Behavior Driven Development, proper adherence to the SOLID principles, etc. are all practices that you would expect at the mature end of the spectrum. At the other end of the spectrum would be the quick-and-dirty solutions that are done using something like an Access Database, Excel Spreadsheet, or maybe some quick “drag-and-drop coding”. For this blog post I’m going to refer to this as the Software Engineering Maturity Spectrum (SEMS). I believe there is a time and a place for projects at every part of that SEMS. The risks and costs associated with under-engineering solutions have been written about a million times over so I won’t bother going into them again here, but there are also (unnecessary) costs with over-engineering a solution. Sometimes putting multiple layers, and IoC containers, and abstracting out the persistence, etc is complete overkill if a one-time use Access database could solve the problem perfectly well. A lot of software developers I talk to seem to automatically jump to the very right-hand side of this SEMS in everything they do. A common rationalization I hear is that it may seem like a small trivial application today, but these things always grow and stick around for many years, then you’re stuck maintaining a big ball of mud. I think this is a cop-out. Sure you can’t always anticipate how an application will be used or grow over its lifetime (can you ever??), but that doesn’t mean you can’t manage it and evolve the underlying software architecture as necessary (even if that means having to toss the code out and re-write it at some point…maybe even multiple times). My thoughts are that we should be making a conscious decision around the start of each project approximately where on the SEMS we want the project to exist. I believe this decision should be based on 3 factors: 1. Importance - How important to the business is this application? What is the impact if the application were to suddenly stop working? 2. Complexity - How complex is the application functionality? 3. Life-Expectancy - How long is this application expected to be in use? Is this a one-time use application, does it fill a short-term need, or is it more strategic and is expected to be in-use for many years to come? Of course this isn’t an exact science. You can’t say that Project X should be at the 73% mark on the SEMS and expect that to be helpful. My point is not that you need to precisely figure out what point on the SEMS the project should be at then translate that into some prescriptive set of practices and techniques you should be using. 
Rather my point is that we need to be aware that there is a spectrum, and that not everything is going to be (or should be) at the edges of that spectrum, indeed a large number of projects should probably fall somewhere within the middle; and different projects should adopt a different level of software engineering practices and maturity levels based on the needs of that project. To give an example of this way of thinking from my day job: Every couple of years my company plans and hosts a large event where ~400 of our customers all fly in to one location for a multi-day event with various activities. We have some staff whose job it is to organize the logistics of this event, which includes tracking which flights everybody is booked on, arranging for transportation to/from airports, arranging for hotel rooms, name tags, etc The last time we arranged this event all these various pieces of data were tracked in separate spreadsheets and reconciliation and cross-referencing of all the data was literally done by hand using printed copies of the spreadsheets and several people sitting around a table going down each list row by row. Obviously there is some room for improvement in how we are using software to manage the event’s logistics. The next time this event occurs we plan to provide the event planning staff with a more intelligent tool (either an Excel spreadsheet or probably an Access database) that can track all the information in one location and make sure that the various pieces of data are properly linked together (so for example if a person cancels you only need to delete them from one place, and not a dozen separate lists). This solution would fall at or near the very left end of the SEMS meaning that we will just quickly create something with very little attention paid to using mature software engineering practices. If we examine this project against the 3 criteria I listed above for determining it’s place within the SEMS we can see why: Importance – If this application were to stop working the business doesn’t grind to a halt, revenue doesn’t stop, and in fact our customers wouldn’t even notice since it isn’t a customer facing application. The impact would simply be more work for our event planning staff as they revert back to the previous way of doing things (assuming we don’t have any data loss). Complexity – The use cases for this project are pretty straightforward. It simply needs to manage several lists of data, and link them together appropriately. Precisely the task that access (and/or Excel) can do with minimal custom development required. Life-Expectancy – For this specific project we’re only planning to create something to be used for the one event (we only hold these events every 2 years). If it works well this may change (see below). Let’s assume we hack something out quickly and it works great when we plan the next event. We may decide that we want to make some tweaks to the tool and adopt it for planning all future events of this nature. In that case we should examine where the current application is on the SEMS, and make a conscious decision whether something needs to be done to move it further to the right based on the new objectives and goals for this application. This may mean scrapping the access database and re-writing it as an actual web or windows application. In this case, the life-expectancy changed, but let’s assume the importance and complexity didn’t change all that much. 
    We can still probably get away with not adopting a lot of the so-called “best practices”. For example, we can probably still use some of the RAD tooling available and might have an Autonomous View style design that connects directly to the database and binds to typed datasets (we might even choose to simply leave it as an Access database and continue using it; this is a decision that needs to be made on a case-by-case basis).

    At Anvil Digital we have aspirations to become a primarily product-based company. So let’s say we use this tool to plan a handful of events internally, and everybody loves it. Maybe a couple of years down the road we decide we want to package the tool up and sell it as a product to some of our customers. In this case the project objectives/goals change quite drastically. Now the tool becomes a source of revenue, and the impact of it suddenly stopping working is significantly less acceptable. Also, as we hold focus groups and gather feedback from customers and potential customers, there’s a pretty good chance the feature-set and complexity will have to grow considerably from when we were using it only internally for planning a small handful of events for one company.

    In this fictional scenario I would expect the target on the SEMS to jump to the far right. Depending on how we implemented the previous release we may be able to refactor and evolve the existing codebase to introduce a more layered architecture, a robust set of automated tests, a proper ORM and IoC container, etc. More likely in this example the jump along the SEMS would be so large we’d probably end up scrapping the current code and re-writing. Although, if it was a slow phased roll-out to only a handful of customers, where we collected feedback, made some tweaks, and then rolled out to a couple more customers, we may be able to slowly refactor and evolve the code over time rather than tossing it out and starting from scratch.

    The key point I’m trying to get across is not that you should be throwing out your code and starting from scratch all the time. Rather, you should be aware of when and how the context and objectives around a project change, and periodically re-assess where the project currently falls on the SEMS and whether that needs to be adjusted based on changing needs.

    Note: There is also the idea of “spectrum decay”. Since our industry is rapidly evolving, what we currently accept as mature software engineering practices (the right end of the SEMS) probably won’t be the same 3 years from now. If you have a project that you were to assess at somewhere around the 80% mark on the SEMS today, but don’t touch the code for 3 years and come back and re-assess its position, it will almost certainly have changed since the right end of the SEMS will have moved farther out (maybe the project is now only around 60% due to decay).

    Developer Skills

    Another important aspect to this whole discussion is the skill sets of your architects and lead developers. When talking about the progression of a developer’s skills from junior->intermediate->senior->… they generally start by only being able to write code that belongs on the left side of the SEMS, and as they gain more knowledge and skill they become capable of working at a higher and higher level along the SEMS.
    We all realize that the learning never stops, but eventually you’ll get to the point where you can comfortably develop at the right end of the SEMS (the exact practices and techniques that translates to are constantly changing, but that’s not the point here). A critical skill that I’d love to see more evidence of in our industry is the most senior guys not only being able to work at the right end of the SEMS, but more importantly being able to consciously work at any point along the SEMS as project needs dictate. An even more valuable skill would be the ability to make the conscious decision to move a project’s code further right on the SEMS (based on changing needs) and do so in an incremental manner without having to start from scratch.

    An exercise that I’m planning to go through with all of our projects here at Anvil in the near future is to map out where I believe each project currently falls within this SEMS, where I believe the project *should* be on the SEMS based on the business needs, and for those that don’t match up (i.e. most of them) come up with a plan to improve the situation.

    Read the article

  • WCI Analytics Installation / Configuration Support Webinar

    - by brian.harrison
    Based on the success of the OAM / WCI integration webinar, the second in our series of Technical Support "brown bag" webinars will be delivered on Tuesday, March 30 at 8AM Pacific Daylight Time. Please review the details below. If you would like to attend the webinar, please take a moment to send an email to the address provided for registration and you will be enrolled in the meeting. What are the best practices for installing and configuring Analytics for the WebCenter Interaction (formerly "ALUI") Portal Application? What are some of the most common failures that occur in this implementation and what can be done to correct these common issues? What are the most common reasons for the tables to be "empty" when I try to produce utilization reports? These are just some of the main areas that will be covered in this one-hour webinar, which will demonstrate the WCI Analytics installation and configuration in action. Our demonstration will focus on areas where Technical Support sees the largest numbers of customer questions become support incidents, in an effort to help avoid the need to create an incident to get the implementation working properly in the customer environment. We will demonstrate the most recent version of WCI Analytics (10.3.0.1) for this presentation, but naturally specific issues known to specific versions will be covered as well. Please join us for what we know will be a valuable and relevant learning session. If you would like to attend this session please send an email to [email protected] indicating your interest, and we will respond to you with a meeting invitation including all of the required access information.

    Read the article

  • Why does display:table-cell not center content without display:table?

    - by Samuel
    I'm looking for the most efficient (or elegant) way to vertically and horizontally center content of variable height. I've come up with this: http://cssdeck.com/t/2veysdkg/16, which uses CSS tables to vertically center the main content. My demands for writing this particular piece of code were:

    - Must be able to center variable and fixed width content vertically and horizontally
    - Centered content must be inside the normal document flow (so no overlapping)
    - Sticky footer and normal header of 100% width
    - As few hacks, ugly code or non-semantic HTML as possible
    - I didn't care about support for IE6, IE7 (I'll use a different stylesheet for them)

    The weird thing is that the demands above are only met when the header and footer are set to table-row, and the body tag to display:table. Which is weird because, as I understand it, CSS will generate anonymous table elements when parent table elements are missing. So table-cell should work without all the surrounding elements, but yet I've not been able to make it work. If it were up to me I would prefer not to mess with the display mode for the body tag, and leave the header and footer on display:block. But I've not been able to make it work. Does anyone understand why this doesn't work?

    Read the article

  • Twin Cities Code Camp 8 Retrospective

    - by Lee Brandt
    I just got back (a few hours ago) from Minneapolis, where I was speaking at the Twin Cities Code Camp 8. I’d never been to a Twin Cities Code Camp, and I have always heard such great things, so I submitted and got accepted to speak. The conference (what I got to see) was great. My talk was pretty light on attendance, but there are many reasons for that. First, I spoke opposite Donn Felker (speaking about developing for Android) and Keith Dahlby (speaking about Dynamic .NET). So of course my talk was going to be empty. How could I compete with that? Plus, my talk was about software process improvement, specifically about how our process has evolved. Maybe not the smartest idea to submit a talk about software process at a developers’ conference. The people who DID attend, however, seemed to really enjoy the talk. There was good interaction and good, thoughtful questions. So the attendees seemed engaged. I actually did get a chance to go to one session. I went and saw Javier Lozano talk about Open source tools for ASP.NET MVC. I am hip-deep in MVC stuff right now and getting up to speed on MVC 2 as well. I learned about MVC Turbine, Javier’s open source project. I will definitely be adding it to my MVC arsenal. Thanks Javier! I did forget my AC adapter for my laptop and got a little lost in Minneapolis on my way to get one from MicroCenter Saturday morning, but other than that, it was a great trip. It’s a long drive, but seeing all the guys and getting two Nut & Honey rolls from Roly Poly in Eden Prairie for lunch on Saturday made the trip totally worth it. I look forward to seeing what Jason & Chris come up with for next year! Thanks for having me guys!

    Read the article

  • Why is my Tiled map distorted when rendered with LibGDX?

    - by Sean
    I have a Tiled map that looks like this in the editor: But when I load it using an AssetManager (full static source available on GitHub) it appears completely askew. I believe the relevant portion of the code is below. This is the entire method; the others are either empty or might as well be.

    private OrthographicCamera camera;
    private AssetManager assetManager;
    private BitmapFont font;
    private SpriteBatch batch;
    private TiledMap map;
    private TiledMapRenderer renderer;

    @Override
    public void create() {
        float w = Gdx.graphics.getWidth();
        float h = Gdx.graphics.getHeight();

        camera = new OrthographicCamera();
        assetManager = new AssetManager();
        batch = new SpriteBatch();
        font = new BitmapFont();

        camera.setToOrtho(false, (w / h) * 10, 10);
        camera.update();

        assetManager.setLoader(TiledMap.class, new TmxMapLoader(
                new InternalFileHandleResolver()));
        assetManager.load(AssetInfo.ICE_CAVE.assetPath, TiledMap.class);
        assetManager.finishLoading();
        map = assetManager.get(AssetInfo.ICE_CAVE.assetPath);
        renderer = new IsometricTiledMapRenderer(map, 1f / 64f);
    }
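
    For what it’s worth, if the map was authored as an orthogonal (plain top-down) map in Tiled, the most likely cause of the skewing is the renderer created in the last assignment of create(): an IsometricTiledMapRenderer projects tiles isometrically, so an orthogonal map comes out distorted. Below is a minimal, hedged sketch of the change, reusing the field and constant names from the question (AssetInfo.ICE_CAVE is assumed to exist as shown) and assuming the map is orthogonal:

    // Swap the isometric renderer for the orthogonal one (same 1/64 unit scale).
    // OrthogonalTiledMapRenderer lives in com.badlogic.gdx.maps.tiled.renderers.
    map = assetManager.get(AssetInfo.ICE_CAVE.assetPath);
    renderer = new OrthogonalTiledMapRenderer(map, 1f / 64f);

    @Override
    public void render() {
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT); // clear the screen each frame
        camera.update();
        renderer.setView(camera); // tell the renderer which region the camera sees
        renderer.render();        // draw all visible tile layers
    }

    If the map really is meant to be isometric, the renderer choice is correct and the unit scale and camera viewport would be the things to re-check instead.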

    Read the article

  • June 22-24, 2010 in London City Level 400 SQL Server Performance Monitoring & Tuning Workshop

    - by sqlworkshops
    We are organizing the “3 Day Level 400 SQL Server Performance Monitoring & Tuning Workshop” for the 1st time in London City during June 22-24, 2010. The agenda is located @ www.sqlworkshops.com/workshops & you can register @ www.sqlworkshops.com/ruk. Charges: £1800 (5% discount for those who register before 21st May: £1710).

    In this 3 Day Level 400 hands-on workshop, unlike short SQLBits sessions, we go deeper on the tuning topics. Not sure if this will be a good use of your time & money? Watch our webcasts @ www.sqlworkshops.com/webcasts. We are trying to balance these commercial offerings with our free community contributions. Financially: these workshops are essential for us to stay in business!

    Feedback from the Finland workshop posted by Jukka, Wärtsilä Oyj on February 23, 2010 to the LinkedIn SQL Server User Group Finland (more feedback @ www.sqlworkshops.com/feedbacks): Just want to start this thread and give some feedback on the Workshop that I attended last week at Microsoft. Three days in a row, deep dive into the query optimization and performance monitoring :-) I must say, that the SQL guru Ramesh has all the tricks up in his sleeves. The workshop was very helpful and what's most important: no slide show marathon: samples after samples explained very clearly and with our own class room SQL servers we can try the same stuff while Ramesh typed his own samples. If the workshop will be rearranged, I can most willingly recommend it to anyone who wants to know what's "under the hood" of SQL Server 2008. Once again, thank you Microsoft and Ramesh to make this happen. May the force be with us all :-)

    Hope to see you @ the Workshop. Feel free to pass on this information to your SQL Server colleagues.

    -ramesh-
    www.sqlbits.com/speakers/r_meyyappan/default.aspx

    Read the article

  • Is there any way to stop a window's title bar merging with the panel when maximised?

    - by Richard Turner
    I'm working on a desktop machine with plenty of screen real-estate, so I don't need my windows' title bars to merge with the global menu bar when the windows are maximised. Moreover, I'm working on a dual-screen set-up, so the fact that a window is maximised doesn't mean that it's the only window visible. Before Unity I'd switch to a maximised window by clicking on its title bar, or close the window, even though it isn't focused, by clicking on its close button; I can no longer do this because the title bar is missing and the global menu bar is empty on that screen. This isn't a huge problem - I can click on some of the window's chrome to focus it - but it's unintuitive and it's forcing me to relearn my mousing behaviour. I'd like to turn off the merging of title and global menu bars, but how?

    EDIT: I simply want the title bar of the window NOT to merge with the top panel whenever I maximise a window. The global menu should stay in the top panel as far as I am concerned. Currently it maximises like this; I want it to maximise like this instead. (In that screenshot the unmaximised window has been resized to take up the rest of the space.)

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #003

    - by pinaldave
    Here is the list of curated articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane.

    2006

    This was the first year of my blogging and there were lots of new things I was learning as I went. I was indeed an infant in blogging a few years ago. However, as time passed by I have learned a lot. This year was a year of experiments and new learning.

    2007

    Working as a full time DBA I often encountered various errors, and I started to learn how to avoid those errors and document the same.

    ERROR Msg 5174 Each file size must be greater than or equal to 512 KB
    Whenever I see this error I wonder why someone is trying to create a database which is extremely small. Anyway, it does not matter what I think; I keep on seeing this error often in industries. Anyway, the solution to the error is equally interesting – just create a larger database.

    Dilbert Humor
    This was my very first encounter with database humor and I started to love it. It does not matter how many times we read this cartoon, it does not get old.

    Generate Script with Data from Database – Database Publishing Wizard
    Generating a schema script with data is one of the most frequently performed tasks among SQL Server Data Professionals. There are many ways to do the same. In the above article I demonstrated how we can use the Database Publishing Wizard to accomplish the same. It was new to me at that time but I have not seen much adoption of it in the industry even now. Here is one of my videos where I demonstrate how we can generate data with schema.

    2008

    Delete Backup History – Cleanup Backup History
    Deleting backup history is important too but should be done carefully. If this is not carried out at regular intervals there is a good chance that MSDB will be filled up with all the old history. Every organization is different. Some would like to keep the history for 30 days and some for a year, but there should be some limit. One should regularly archive the database backup history.

    South Asia MVP Open Days 2008
    This was my very first year as a Microsoft MVP. I indeed had a big blast at the event and the fun was incredible. After this event I have attended many different MVP events but the fun and learning this particular event presented was amazing, and just like me many others are not able to forget the same. Here are other links related to the event: South Asia MVP Open Day 2008 – Goa; South Asia MVP Open Day 2008 – Goa – Day 1; South Asia MVP Open Day 2008 – Goa – Day 2; South Asia MVP Open Day 2008 – Goa – Day 3

    2009

    Enable or Disable Constraint
    This is a very simple script but I personally keep on forgetting it, so I had blogged it. Till today, I keep on referencing this again and again as sometimes a very little thing is hard to remember.

    Policy Based Management – Create, Evaluate and Fix Policies
    This article covers the most spectacular feature of SQL 2008 – Policy-Based Management – and how configuring SQL Server with the policy-based management architecture can make a powerful difference. Policy based management is loaded with several advantages. It can help you implement various policies for reliable configuration of the system. It also provides additional administrative assistance to DBAs and helps them effortlessly manage various tasks of SQL Server across the enterprise.
    SQLPASS 2009 – My Very First SQLPASS Experience
    Just brilliant! I had never experienced such a thing in my life. SQL, SQL and SQL – all around SQL! I am listing my own reasons here in order of importance to me:

    - Networking with SQL fellows and experts
    - Putting a face to the name or avatar
    - Learning and improving my SQL skills
    - Understanding the structure of the largest SQL Server Professional Association
    - Attending my favorite training sessions

    Since then I have never missed this event a single time. This event is my favorite event and something keeps me going. Here are additional posts related to SQLPASS 2009: SQL PASS Summit, Seattle 2009 – Day 1; SQL PASS Summit, Seattle 2009 – Day 2; SQL PASS Summit, Seattle 2009 – Day 3; SQL PASS Summit, Seattle 2009 – Day 4

    2010

    Get All the Information of Database using sys.databases
    Even though we believe that we know everything about our databases, we do not know a lot of things about them. This little script enables us to know so many details about databases which we may not be familiar with. Run this on your server today and see how well you know your database.

    Reducing CXPACKET Wait Stats for High Transactional Database
    While engaging in a performance tuning consultation for a client, a situation occurred where they were facing a lot of CXPACKET wait stats. The client asked me if I could help them reduce this huge number of wait stats. I usually receive this kind of request from other clients as well, but the important thing to understand is whether this question has any merits or benefits, or not. I discuss the same in this article – a bit long but insightful for sure.

    Error related to Database in Use
    There are so many database management operations in SQL Server which require exclusive access to the database, and it is not always possible to get it. When any database is online in SQL Server, either applications or system threads often access it. This means the database can’t have exclusive access, and the operations which require this access throw an error. There is a very easy method to overcome this minor issue – a single line script can give you exclusive access to the database.

    Difference between DATETIME and DATETIME2
    Developers have found the root reason of the problem when dealing with date functions – when date time values are converted (implicitly or explicitly) between different data types, some precision is lost, so the results cannot match each other as expected. In this blog post I go over very interesting details and the difference between DATETIME and DATETIME2.

    History of SQL Server Database Encryption
    I recently met Michael Coles and Rodney Landrum, the authors of the one-of-a-kind book Expert SQL Server 2008 Encryption, at SQLPASS in Seattle. During the conversation we ended up discussing how Microsoft is evolving encryption technology. The same discussion led to talking about the history of encryption tools in SQL Server. Michael pointed me to page 18 of his book on encryption. He explicitly gave me permission to re-produce the relevant part of the history from his book.

    2011

    Functions FIRST_VALUE and LAST_VALUE with OVER clause and ORDER BY
    Sometimes an interesting feature and a smart audience make a total difference. For the last two days, I have been writing about the SQL Server 2012 features FIRST_VALUE and LAST_VALUE. I created a puzzle which was very interesting and got many people to attempt to resolve it.
    It was based on the following two articles: Introduction to FIRST_VALUE and LAST_VALUE; Introduction to FIRST_VALUE and LAST_VALUE with OVER clause. I even provided a hint about how one can solve this problem. The best part was that many people solved the problem without using hints! Try your luck!

    A Real Story of Book Getting ‘Out of Stock’ to A 25% Discount Story Available
    This is a great problem and everybody would love to have it. We had it and we loved it. Our book went out of stock within 48 hours of release and stocks were empty. We faced many issues and learned many valuable lessons. Some we were able to avoid in the future and some we are still facing, as those problems have no solutions. However, since that day our books have never gone out of stock. This was an inspiring learning story for us, and I am confident that you will love to read it as well.

    Introduction to LEAD and LAG – Analytic Functions Introduced in SQL Server 2012
    SQL Server 2012 introduces the new analytic functions LEAD() and LAG(). These functions access data from a subsequent row (for LEAD) and a previous row (for LAG) in the same result set without the use of a self-join. It would be very difficult to explain this in words, so I will attempt a small example to explain the functions. I had a fantastic time writing this blog post and I am very confident that when you read it, you will like it.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Is 1GB RAM with integrated graphics sufficient for Unity 3D on 12.04?

    - by Anwar Shah
    I have been using Ubuntu since Hardy Heron (8.04). I used Natty and Oneiric with Unity. But when I recently (more than a month ago now) upgraded my Ubuntu to Precise (12.04), the performance of my laptop became unsatisfactory. It is much less responsive than older releases. For example, Unity in 12.04 is very unresponsive: sometimes it takes 2 seconds for the Dash to show up (which was not the case with Natty, even though people always say that Natty's version of Unity is the buggiest). I am assuming that maybe my 1 GB of RAM has become too little to run Unity on Precise. But I also think that, since Unity is improved in Precise, that may not be the case. So I am not sure. Do you have any ideas? Will upgrading the RAM fix it? How much do I need if an upgrade is required?

    Laptop model: "Lenovo 3000 Y410"
    Graphics: "Intel GMA X3100" on Intel 965GM chipset
    RAM/Memory: "1 GB DDR2" (1 slot empty)
    Swap space: 1.1 GB
    Resolution: 1280x800 widescreen
    Shared RAM for graphics: 256 MB, as the output below suggests
    $ dmesg | grep AGP
    [ 0.825548] agpgart-intel 0000:00:00.0: AGP aperture is 256M @ 0xd0000000

    Read the article

  • Document-oriented vs Column-oriented database fit

    - by user1007922
    I have a data-intensive application that desperately needs a database make-over.

    The general data model: There are records with RIDs, grouped together by group IDs (GIDs). The records have arbitrary data fields (maybe 5-15), with a few of them mandatory and the rest optional, and thus sparse.

    The general use model: There are LOTS and LOTS of writes. Millions to billions of records are stored. Very often they are associated with new GIDs, but sometimes they are associated with existing GIDs. There aren't as many reads, but when they happen, they need to be pretty fast, or at least constant speed regardless of the database size. And when the reads happen, they will need to retrieve all the records/RIDs with a certain GID. I don't have a need to search by the record field values. Primarily, I will need to query by the GID and maybe the RID.

    What database implementation should I use? I did some initial research comparing document-oriented and column-oriented databases, and it seems the document-oriented ones are a good fit, model-wise. I could store all the records together under the same document key using the GID. But I don't really have any use for their ability to search the document contents itself. I like the simplicity and scalability of column-oriented databases like Cassandra, but how should I model my data in this paradigm for optimal performance? Should my key be the GID and should I create a column for each record/RID (there may be thousands or hundreds of thousands of records in a group/GID)? Or should my key be the RID and ensure each row has a column for the GID value? What results in faster writes and reads under this model?
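
    To make the first Cassandra-style layout concrete, here is a minimal in-memory Java sketch of the "partition key = GID, one column per RID" shape. It is only an illustration of the data shape, not a storage engine or any particular driver API, and all class and method names are assumptions for the example. The point is that the read pattern described above (fetch every record for a GID) becomes a single lookup:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative wide-row layout: one "row" per GID, one "column" per RID.
    public class WideRowSketch {
        // GID -> (RID -> sparse field map); names here are hypothetical.
        private final Map<String, Map<String, Map<String, String>>> groups =
                new ConcurrentHashMap<>();

        // Write path: add a record under its group; new GIDs are created on demand.
        public void putRecord(String gid, String rid, Map<String, String> fields) {
            groups.computeIfAbsent(gid, k -> new ConcurrentHashMap<>())
                  .put(rid, fields);
        }

        // Read path: fetch every record in a group with a single lookup by GID.
        public Map<String, Map<String, String>> getGroup(String gid) {
            return groups.getOrDefault(gid, Map.of());
        }
    }

    Keying by GID keeps "give me all RIDs for this group" to a single partition/row, which matches the stated read pattern; keying by RID instead would scatter a group across many rows and turn that read into a scan or secondary-index query.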

    Read the article
