Search Results

Search found 15515 results on 621 pages for 'datetime format'.


  • How does one view the "inline docs" of a .cpp file?

    - by Mala
    I have cpp files peppered with comments such as the following before every function:

        /**
         * @brief Set the normal and expansion handshake timeouts.
         *
         * @param wm             Array of wiimote_t structures.
         * @param wiimotes       Number of objects in the wm array.
         * @param normal_timeout The timeout in milliseconds for a normal read.
         * @param exp_timeout    The timeout in milliseconds to wait for an expansion handshake.
         */

    I assume from the format that there has to be some way of exporting this into a "friendly" format, perhaps HTML, which can then be read in a manner similar to the Java API docs. How would I do this? (I'm on Windows 7, running MS Visual Studio 2010.)
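
    These look like Doxygen comments (the @brief and @param tags are Doxygen markup), so one plausible route, offered here as an assumption since the question is left open in this excerpt, is to run Doxygen over the sources and let it generate browsable HTML:

        rem generate a default configuration file named Doxyfile
        doxygen -g

        rem edit Doxyfile so it can find the sources, e.g.
        rem   INPUT          = src
        rem   RECURSIVE      = YES
        rem   GENERATE_HTML  = YES   (already the default)

        rem generate the HTML documentation into html\index.html
        doxygen

    The src path is a placeholder for wherever the .cpp files live; the doxywizard GUI that ships with the Windows installer performs the same configuration interactively.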

    Read the article

  • Why does a VirtualBox VDI take up double the space of the VM hard disk?

    - by logoff
    I have a Xubuntu 12.10 64-bit VirtualBox VM on a Windows 7 64-bit host. It has one dynamically allocated hard disk in VDI format with a maximum capacity of 20 GB. If I run df -h in the VM, it reports that 5.3 GB are in use on the main partition. I have only two partitions: one ext4 partition and one 512 MB swap partition. I have no snapshots. The VDI file of this VM is 10.7 GB. Is this difference in space normal? Is it caused by the VDI format?

    Read the article

  • Find words from a list within a website

    - by Senoculus
    Summary: I need to find out whether any items in my list occur in a website. I could do it manually with Ctrl+F, but it would take a long time. Description: I have a text file with key words, one per line, in this format:

        word1
        word4
        word12
        word24
        ...

    I need to search text in a table on a website (in Internet Explorer) with this format:

        RandomWord    version 1.3    ...
        word1         version 1.3    ...
        word2         version 2.6    ...
        word5         version 1.1    ...
        randomword    version 9.0    ...
        word12        version 1.0    ...
        ...

    If the above data was what I had, it would be nice to end up with this list:

        word1
        word12
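
    A minimal sketch of how this could be scripted (the file names are hypothetical, and it assumes the page text has been saved to a local file, e.g. via the browser's Save As):

        // CheckWords.cs - rough sketch, not from the original thread
        using System;
        using System.IO;

        class CheckWords
        {
            static void Main()
            {
                // keywords.txt holds one key word per line
                string[] keywords = File.ReadAllLines("keywords.txt");

                // page.txt is the saved text of the web page
                string page = File.ReadAllText("page.txt");

                // print every key word that occurs somewhere in the page
                foreach (string word in keywords)
                {
                    string w = word.Trim();
                    if (w.Length > 0 && page.Contains(w))
                        Console.WriteLine(w);
                }
            }
        }

    For whole-word matching rather than substring matching, the Contains check could be swapped for a Regex with \b word boundaries.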

    Read the article

  • SQL SERVER – Concat Strings in SQL Server using T-SQL – SQL in Sixty Seconds #035 – Video

    - by pinaldave
    Concatenating strings is one of the most common tasks in SQL Server, and every developer comes across it, for example when displaying a person's full name built from a first name and a last name. In this video we will see various methods to concatenate strings. SQL Server 2012 introduced the new CONCAT function, which concatenates strings much more efficiently. When we concatenate values with '+' in SQL Server, we have to make sure the values are in string format: if we attempt to concatenate an integer, we have to convert it to a string or else it will throw an error. With the newly introduced CONCAT function in SQL Server 2012, we do not have to worry about this kind of issue. It concatenates strings and integers without casting or converting them. You can specify various values as parameters to CONCAT and it concatenates them together. Let us see how to concatenate the values in Sixty Seconds. Here is the script which is used in the video:

        -- Method 1: Concatenating two strings
        SELECT 'FirstName' + ' ' + 'LastName' AS FullName

        -- Method 2: Concatenating two numbers
        SELECT CAST(1 AS VARCHAR(10)) + ' ' + CAST(2 AS VARCHAR(10))

        -- Method 3: Concatenating values of table columns
        SELECT FirstName + ' ' + LastName AS FullName
        FROM AdventureWorks2012.Person.Person

        -- Method 4: SQL Server 2012 CONCAT function
        SELECT CONCAT('FirstName', ' ', 'LastName') AS FullName

        -- Method 5: SQL Server 2012 CONCAT function with an integer
        SELECT CONCAT('FirstName', ' ', 1) AS FullName

    Related Tips in SQL in Sixty Seconds:

        SQL SERVER – Concat Function in SQL Server – SQL Concatenation
        String Function – CONCAT() – A Quick Introduction
        2012 Functions – FORMAT() and CONCAT() – An Interesting Usage
        A Quick Trick about SQL Server 2012 CONCAT Function – PRINT
        A Quick Trick about SQL Server 2012 CONCAT function

    What would you like to see in the next SQL in Sixty Seconds video?
    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Comments syntax for Idoc Script

    - by kyle.hatlestad
    Maybe this is widely known and I'm late to the party, but I just ran across the syntax for making comments in Idoc Script. It's something I've been hoping to see for a long time, and it looks like it quietly snuck into the 10gR3 release. For comments in Idoc Script, you simply [[% surround your comments in these symbols. %]] They can be on the same line or span multiple lines.

    If you look in the documentation, it still mentions the old approach of making comments by stuffing the comment text into an actual variable. That's certainly not ideal: the comment takes up memory, and you have to watch double-quotes in your comment. A somewhat better variant of the old method avoids the assignment, so you're not worrying about quotes, but it's still not great. Unfortunately, the new syntax only works in places that use the Idoc format. It can't be used in Idoc files that get indexed (.hcsp & .hcsf) and use the <!--$...--> format. For those, you'll need to continue using the older methods.

    While on the topic, I thought I would highlight a great plug-in to Notepad++ that Arnoud Koot here at Oracle wrote for Idoc Script. It does script highlighting as well as type-ahead/auto-completion for common variables, functions, and services. For some reason, I can never seem to remember whether it's DOC_INFO_LATESTRELEASE or DOC_INFO_LATEST_RELEASE, so this certainly comes in handy. I've updated his plug-in to use this new comments syntax. You can download a copy of the plug-in here, which includes installation instructions.

    Read the article

  • Twitter gem - undefined method `stringify_keys'

    - by Piet
    Have you been getting the following errors when running the Twitter gem lately?

        /usr/local/lib/ruby/gems/1.8/gems/httparty-0.4.3/lib/httparty/response.rb:15:in `send': undefined method `stringify_keys' for # (NoMethodError)
            from /usr/local/lib/ruby/gems/1.8/gems/httparty-0.4.3/lib/httparty/response.rb:15:in `method_missing'
            from /usr/local/lib/ruby/gems/1.8/gems/mash-0.0.3/lib/mash.rb:131:in `deep_update'
            from /usr/local/lib/ruby/gems/1.8/gems/mash-0.0.3/lib/mash.rb:50:in `initialize'
            from /usr/local/lib/ruby/gems/1.8/gems/twitter-0.6.13/lib/twitter/search.rb:101:in `new'
            from /usr/local/lib/ruby/gems/1.8/gems/twitter-0.6.13/lib/twitter/search.rb:101:in `fetch'
            from test.rb:26

    It's because Twitter has been sending back plain-text errors that are treated as a string instead of JSON and can't be properly 'Mashed' by the Twitter gem. Also check http://github.com/jnunemaker/twitter/issues#issue/6. Without diving into the bowels of the Twitter gem or HTTParty, you could 'begin...rescue' this error and try again in 5 minutes. I fixed it by overriding the offending code to return nil and checking for a nil response, as follows:

        module Twitter
          class Search
            def fetch(force=false)
              if @fetch.nil? || force
                query = @query.dup
                query[:q] = query[:q].join(' ')
                query[:format] = 'json' # This line is the hack and whole reason we're monkey-patching at all.
                response = self.class.get('http://search.twitter.com/search', :query => query, :format => :json)
                # Our patch: response should be a Hash. If it isn't, return nil.
                return nil if response.class != Hash
                @fetch = Mash.new(response)
              end
              @fetch
            end
          end
        end

    (Adapted from http://github.com/jnunemaker/twitter/issues#issue/9.) If you have a better solution: speak up!

    Read the article

  • 500 Metro Style WP7 Icons

    - by Bil Simser
    I was inspired by The Noun Project, a project that offers up "Metro-style" icons in SVG format. The project is licensed under a public domain license and, while it's a great project, all of the content is in SVG format. Jon Galloway has a great post (from 2007) talking about the differences between SVG and XAML, so I highly recommend that for some background. I thought it would be helpful to the WPF/Windows Phone 7/Silverlight community to provide the content in alternative formats for use in your applications.

    The Goods

    I've put together a package of the 500 icons (502 actually) in PNG, XAML, and the original SVG format, along with a couple of sample projects so you can see them in action: a WPF desktop app and a Windows Phone 7 app.

    Building It

    To get all the content, I first wrote a quick program to pull down the original SVG files. Luckily they're all in a common path, just named 1.SVG, 2.SVG, and so on. Easy sleazy to grab the contents. Once I had 500 SVG files, I used the latest copy of XamlTune, an open source CodePlex project that has a command line conversion tool to convert the directory of SVG files into XAML (the tool also created a PNG file of each SVG, so that's just icing on the cake).

    Conversions

    The conversion from SVG to XAML isn't 100%. While you can just drop the content into a WPF app, it doesn't work that way for WP7. There are just some small adjustments I made to each format, so you'll have to do the same. Follow the information below or refer to the sample applications. As a sample, here's an icon we want to use. Here's the original SVG file:

        <svg version="1.0" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"
             x="0px" y="0px" width="100px" height="94.616px" viewBox="0 0 100 94.616"
             enable-background="new 0 0 100 94.616" xml:space="preserve">
            <path d="M25.076,15.639c4.324,0.009,7.824-3.488,7.82-7.82C32.9,3.512,29.4,0.012,25.076,0c-4.313,0.012-7.814,3.512-7.821,7.819 C17.262,12.15,20.763,15.648,25.076,15.639L25.076,15.639z"/>
            <path d="M4.593,43.388h6.861l4.137-15.135h1.716L13.22,43.388h24.318l-4.389-15.135h1.817l2.32,7.415 c1.08,3.131,3.852,3.851,6.003,1.162l8.375-10.142c2.651-3.42-2.104-7.021-4.844-4.035l-4.993,5.952 c0.007,0.095-0.96-3.278-0.96-3.278c-1.135-3.978-4.918-7.903-10.595-7.922H19.576c-5.071,0.019-9.043,4.434-9.888,7.214 L4.593,43.388L4.593,43.388z"/>
            <polygon points="56.206,22.753 56.206,7.163 49.192,7.163 49.192,22.753 56.206,22.753 "/>
            <path d="M79.87,15.738c4.332-0.014,7.831-3.516,7.82-7.82c0.011-4.332-3.488-7.833-7.82-7.82c-4.306-0.013-7.806,3.488-7.821,7.82 C72.064,12.222,75.564,15.725,79.87,15.738L79.87,15.738z"/>
            <path d="M89.759,89.556v-43.19h5.751V22.804c0.007-3.079-2.757-5.448-6.71-5.449H70.436c-3.65,0.001-4.539,1.186-5.551,2.168 L49.597,37.889c-3.098,3.848,2.428,8.333,5.55,4.743L69.88,25.226v64.43c-0.019,6.475,9.06,6.686,9.081,0.201v-36.58h1.765v36.379 C80.748,96.109,89.772,96.13,89.759,89.556L89.759,89.556z"/>
            <polygon points="100,54.035 100,45.155 0,45.155 0,54.035 100,54.035 "/>
        </svg>

    Here's the XAML that XamlTune created.
    It can be used in any WPF app without any changes:

        <Canvas Name="Layer_1" Width="100" Height="94.616" ClipToBounds="True"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation">
            <Path Fill="#FF000000">
                <Path.Data>
                    <PathGeometry FillRule="Nonzero" Figures="M25.076,15.639C29.4,15.648 32.9,12.151 32.896,7.819 32.9,3.512 29.4,0.012 25.076,0 20.763,0.012 17.262,3.512 17.255,7.819 17.262,12.15 20.763,15.648 25.076,15.639L25.076,15.639z" />
                </Path.Data>
            </Path>
            <Path Fill="#FF000000">
                <Path.Data>
                    <PathGeometry FillRule="Nonzero" Figures="M4.593,43.388L11.454,43.388 15.591,28.253 17.307,28.253 13.22,43.388 37.538,43.388 33.149,28.253 34.966,28.253 37.286,35.668C38.366,38.799,41.138,39.519,43.289,36.83L51.664,26.688C54.315,23.268,49.56,19.667,46.82,22.653L41.827,28.605C41.834,28.7 40.867,25.327 40.867,25.327 39.732,21.349 35.949,17.424 30.272,17.405L19.576,17.405C14.505,17.424,10.533,21.839,9.688,24.619L4.593,43.388 4.593,43.388z" />
                </Path.Data>
            </Path>
            <Path Fill="#FF000000">
                <Path.Data>
                    <PathGeometry FillRule="Nonzero" Figures="M56.206,22.753L56.206,7.163 49.192,7.163 49.192,22.753 56.206,22.753z" />
                </Path.Data>
            </Path>
            <Path Fill="#FF000000">
                <Path.Data>
                    <PathGeometry FillRule="Nonzero" Figures="M79.87,15.738C84.202,15.724 87.701,12.222 87.69,7.918 87.701,3.586 84.202,0.0849999999999991 79.87,0.097999999999999 75.564,0.084999999999999 72.064,3.586 72.049,7.918 72.064,12.222 75.564,15.725 79.87,15.738L79.87,15.738z" />
                </Path.Data>
            </Path>
            <Path Fill="#FF000000">
                <Path.Data>
                    <PathGeometry FillRule="Nonzero" Figures="M89.759,89.556L89.759,46.366 95.51,46.366 95.51,22.804C95.517,19.725,92.753,17.356,88.8,17.355L70.436,17.355C66.786,17.356,65.897,18.541,64.885,19.523L49.597,37.889C46.499,41.737,52.025,46.222,55.147,42.632L69.88,25.226 69.88,89.656C69.861,96.131,78.94,96.342,78.961,89.857L78.961,53.277 80.726,53.277 80.726,89.656C80.748,96.109,89.772,96.13,89.759,89.556L89.759,89.556z" />
                </Path.Data>
            </Path>
            <Path Fill="#FF000000">
                <Path.Data>
                    <PathGeometry FillRule="Nonzero" Figures="M100,54.035L100,45.155 0,45.155 0,54.035 100,54.035z" />
                </Path.Data>
            </Path>
        </Canvas>

    The XAML works as-is in a WPF application, but there are some changes I made to get it to work in a WP7 app.
    Here's the modified XAML in a WP7 application:

        <Canvas Grid.Row="0" Grid.Column="0" Name="Icon_1" Width="100" Height="94.616">
            <Path Fill="#FF000000" Data="M25.076,15.639C29.4,15.648 32.9,12.151 32.896,7.819 32.9,3.512 29.4,0.012 25.076,0 20.763,0.012 17.262,3.512 17.255,7.819 17.262,12.15 20.763,15.648 25.076,15.639L25.076,15.639z" />
            <Path Fill="#FF000000" Data="M4.593,43.388L11.454,43.388 15.591,28.253 17.307,28.253 13.22,43.388 37.538,43.388 33.149,28.253 34.966,28.253 37.286,35.668C38.366,38.799,41.138,39.519,43.289,36.83L51.664,26.688C54.315,23.268,49.56,19.667,46.82,22.653L41.827,28.605C41.834,28.7 40.867,25.327 40.867,25.327 39.732,21.349 35.949,17.424 30.272,17.405L19.576,17.405C14.505,17.424,10.533,21.839,9.688,24.619L4.593,43.388 4.593,43.388z" />
            <Path Fill="#FF000000" Data="M56.206,22.753L56.206,7.163 49.192,7.163 49.192,22.753 56.206,22.753z" />
            <Path Fill="#FF000000" Data="M79.87,15.738C84.202,15.724 87.701,12.222 87.69,7.918 87.701,3.586 84.202,0.0849999999999991 79.87,0.097999999999999 75.564,0.084999999999999 72.064,3.586 72.049,7.918 72.064,12.222 75.564,15.725 79.87,15.738L79.87,15.738z" />
            <Path Fill="#FF000000" Data="M89.759,89.556L89.759,46.366 95.51,46.366 95.51,22.804C95.517,19.725,92.753,17.356,88.8,17.355L70.436,17.355C66.786,17.356,65.897,18.541,64.885,19.523L49.597,37.889C46.499,41.737,52.025,46.222,55.147,42.632L69.88,25.226 69.88,89.656C69.861,96.131,78.94,96.342,78.961,89.857L78.961,53.277 80.726,53.277 80.726,89.656C80.748,96.109,89.772,96.13,89.759,89.556L89.759,89.556z" />
            <Path Fill="#FF000000" Data="M100,54.035L100,45.155 0,45.155 0,54.035 100,54.035z" />
        </Canvas>

    All I did was take the data portion and put it directly into a Data attribute on the Path. Note that while it does show up in the app (on the emulator or device), it wouldn't show up in Visual Studio for me. Maybe some XAML guru out there can tell me why. You can just as easily use the PNG files in WP7, but if you want the crispness of vector graphics, go for the XAML version. Of course, with XamlTune being open source, you could always modify the output of that program to cater it to your app. If you do make a change that's worthy, please consider submitting a patch to the project so everyone can benefit. Hope this helps and happy programming!

    Resources and Links

        Sample Project and Icons
        XamlTune - an open source project to convert SVG to XAML
        The Noun Project - source of the original files
        Jon Galloway's post on SVG and XAML
        StackOverflow question on converting SVG to XAML

    Read the article

  • Visual Studio 2010 Extension Manager (and the new VS 2010 PowerCommands Extension)

    - by ScottGu
    This is the twenty-third in a series of blog posts I'm doing on the VS 2010 and .NET 4 release. Today's blog post covers some of the extensibility improvements made in VS 2010 – as well as a cool new "PowerCommands for Visual Studio 2010" extension that Microsoft just released (and which can be downloaded and used for free). [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    Extensibility in VS 2010

    VS 2010 provides a much richer extensibility model than previous releases. Anyone can build extensions that add, customize, and light up the Visual Studio 2010 IDE, code editors, project system, and associated designers. VS 2010 extensions can be created using the new MEF (Managed Extensibility Framework), which is built into .NET 4. You can learn more about how to create VS 2010 extensions from this blog post from the Visual Studio Team Blog.

    VS 2010 Extension Manager

    Developers building extensions can distribute them on their own (via their own web sites or by selling them). Visual Studio 2010 also now includes a built-in "Extension Manager" within the IDE that makes it much easier for developers to find, download, and enable extensions online. You can launch the Extension Manager by selecting the Tools->Extension Manager menu option. This loads an Extension Manager dialog which accesses an online gallery at Microsoft, and then populates a list of available extensions that you can optionally download and enable within your copy of Visual Studio. There are already hundreds of cool extensions populated within the online gallery. You can browse them by category (use the tree-view on the top-left to filter them). Clicking "download" on any of the extensions will download, install, and enable it.

    PowerCommands for Visual Studio 2010

    This weekend Microsoft released the free PowerCommands for Visual Studio 2010 extension to the online gallery. You can learn more about it here, and download and install it via the Extension Manager above (search for PowerCommands to find it). The PowerCommands download adds dozens of useful commands to Visual Studio 2010, many of them on the Solution Explorer context menus. Below is a list of all the commands included with this weekend's PowerCommands for Visual Studio 2010 release:

    Enable/Disable PowerCommands in Options dialog: This feature allows you to select which commands to enable in the Visual Studio IDE. Point to the Tools menu, then click Options. Expand the PowerCommands options, then click Commands. Check the commands you would like to enable. Note: all PowerCommands are initially enabled by default.

    Format document on save / Remove and Sort Usings on save: The Format document on save option formats the tabs, spaces, and so on of the document being saved. It is equivalent to pointing to the Edit menu, clicking Advanced, and then clicking Format Document. The Remove and sort usings option removes unused using statements and sorts the remaining using statements in the document being saved. Note: the Remove and sort usings option is only available for C# documents. Both options are initially off by default.

    Clear All Panes: This command clears all output panes. It can be executed from the button on the toolbar of the Output window.

    Copy Path: This command copies the full path of the currently selected item to the clipboard.
    It can be executed by right-clicking one of these nodes in the Solution Explorer: the solution node, a project node, any project item node, or any folder.

    Email CodeSnippet: To email the lines of text you select in the code editor, right-click anywhere in the editor and then click Email CodeSnippet.

    Insert Guid Attribute: This command adds a Guid attribute to a selected class. From the code editor, right-click anywhere within the class definition, then click Insert Guid Attribute.

    Show All Files: This command shows the hidden files in all projects displayed in the Solution Explorer when the solution node is selected. It enhances the Show All Files button, which normally shows only the hidden files in the selected project node.

    Undo Close: This command reopens a closed document, returning the cursor to its last position. To reopen the most recently closed document, point to the Edit menu, then click Undo Close. Alternately, you can use the Ctrl+Shift+Z shortcut. To reopen any other recently closed document, point to the View menu, click Other Windows, and then click Undo Close Window. The Undo Close window appears, typically next to the Output window. Double-click any document in the list to reopen it.

    Collapse Projects: This command collapses a project or projects in the Solution Explorer, starting from the root selected node. Collapsing a project can increase the readability of the solution. This command can be executed from three different places: solution, solution folder, and project nodes respectively.

    Copy Class: This command copies a selected class's entire content to the clipboard. It is normally followed by a Paste Class command, which renames the class to avoid a compilation error. It can be executed from a single project item or a project item with dependent sub-items.

    Paste Class: This command pastes a class's entire content from the clipboard, renaming the class to avoid a compilation error. It is normally preceded by a Copy Class command. It can be executed from a project or folder node.

    Copy References: This command copies a reference or set of references to the clipboard. It can be executed from the references node, a single reference node, or a set of reference nodes.

    Paste References: This command pastes a reference or set of references from the clipboard. It can be executed from different places depending on the type of project. For C# projects it can be executed from the references node; for Visual Basic and Website projects it can be executed from the project node.

    Copy As Project Reference: This command copies a project as a project reference to the clipboard. It can be executed from a project node.

    Edit Project File: This command opens the MSBuild project file for a selected project inside Visual Studio. It combines the existing Unload Project and Edit Project commands.

    Open Containing Folder: This command opens a Windows Explorer window pointing to the physical path of a selected item. It can be executed from a project item node.

    Open Command Prompt: This command opens a Visual Studio command prompt pointing to the physical path of a selected item. It can be executed from four different places: solution, project, folder, and project item nodes respectively.

    Unload Projects: This command unloads all projects in a solution. This can be useful in MSBuild scenarios when multiple projects are being edited. It can be executed from the solution node.

    Reload Projects: This command reloads all unloaded projects in a solution.
    It can be executed from the solution node.

    Remove and Sort Usings: This command removes and sorts using statements for all classes in a project. It is useful, for example, in removing or organizing the using statements generated by a wizard. It can be executed from a solution node or a single project node.

    Extract Constant: This command creates a constant definition statement for selected text. Extracting a constant effectively names a literal value, which can improve readability. It can be executed from the code editor by right-clicking selected text.

    Clear Recent File List: This command clears the Visual Studio recent file list. It brings up a Clear File dialog which allows any or all recent files to be selected.

    Clear Recent Project List: This command clears the Visual Studio recent project list. It brings up a Clear File dialog which allows any or all recent projects to be selected.

    Transform Templates: This command executes a custom tool with associated text template items. It can be executed from a DSL project node or a DSL folder node.

    Close All: This command closes all documents. It can be executed from a document tab.

    How to temporarily disable extensions

    Extensions provide a great way to make Visual Studio even more powerful and can help improve your overall productivity. One thing to keep in mind, though, is that extensions run within the Visual Studio process (DevEnv.exe), so a bug within an extension can impact both the stability and performance of Visual Studio. If you ever run into a situation where things seem slower than they should be, or if Visual Studio crashes repeatedly, please temporarily disable any installed extensions and see if that fixes the problem. You can do this for extensions that were installed via the online gallery by re-running the Extension Manager (using the Tools->Extension Manager menu option), selecting the "Installed Extensions" node on the top-left of the dialog, and then clicking "Disable" on any of the extensions within your installed list.

    Hope this helps,

    Scott

    Read the article

  • SQLAuthority News – Free eBook Download – Introducing Microsoft SQL Server 2008 R2

    - by pinaldave
    Microsoft Press has published a FREE eBook on the much-awaited release of SQL Server 2008 R2. The book is written by Ross Mistry and Stacia Misner. Ross is a personal friend of mine and one of the most active book writers in the SQL Server domain. When I see his name on any book, I am sure that it will be a high-quality, easy-to-read book. The details about the book are here: Introducing Microsoft SQL Server 2008 R2, by Ross Mistry and Stacia Misner. The book contains 10 chapters and 216 pages:

        PART I: Database Administration
        CHAPTER 1: SQL Server 2008 R2 Editions and Enhancements
        CHAPTER 2: Multi-Server Administration
        CHAPTER 3: Data-Tier Applications
        CHAPTER 4: High Availability and Virtualization Enhancements
        CHAPTER 5: Consolidation and Monitoring

        PART II: Business Intelligence Development
        CHAPTER 6: Scalable Data Warehousing
        CHAPTER 7: Master Data Services
        CHAPTER 8: Complex Event Processing with StreamInsight
        CHAPTER 9: Reporting Services Enhancements
        CHAPTER 10: Self-Service Analysis with PowerPivot

    More detail about the book is listed here. You can download the eBook in XPS format here and in PDF format here.
    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Google Books Downloader Downloads Google Books to PDF and JPG

    - by Jason Fitzpatrick
    If you're looking for a way to download and format-shift books and magazines found in Google Books, Google Books Downloader can help. Google Books Downloader takes what you can see in Google Books and downloads it either as a PDF file or as a series of JPGs. The key to using Google Books Downloader successfully is to understand that it can only download what you can see sitting at your computer. It isn't tapping into some secret back-end resource at Google to siphon books down; it is simply converting the pages you see that are "stuck" in Google Books into a format you can use elsewhere. As such, you can download the entire book if it is marked "Full Preview", part of the book if it is marked "Preview" (useful if you're trying to save pages for a research project and don't need the whole book), and none of the book if it is only "Snippet View". The process works on anything you can find in Google Books, including magazines. Google Books Downloader is free, Windows only. Google Books Downloader [via Addictive Tips]

    Read the article

  • Confusion for mime files: magic, magic.mgc, magic.mime

    - by Florence Foo
    I'm using Ubuntu. I'm trying to use the ruby gem 'shared-mime-info' for an application I'm writing. I understand that magic.mgc is a compiled version of a magic file, which holds magic-number definitions for the different file types. BUT I don't understand why /usr/share/mime/magic is in a binary format instead of just a normal text file with each parameter separated by white space, like everywhere else I find it referenced on the internet. The /usr/share/mime/magic file has the word 'MIME-Magic' at the beginning and prioritizes the rest of the entries, like this:

        [100:application/vnd.scribus]
        >1=^@^KSCRIBUSUTF8
        [90:application/vnd.stardivision.writer]
        >2089=^@

    So it doesn't look like magic.mgc at all. shared-mime-info seems to want a magic file in the binary, non-compiled format as above, and I wanted to add a definition for DOCX. But how does one update or generate this file without using a hex editor? There is a reference to the magic file at: http://standards.freedesktop.org/shared-mime-info-spec/shared-mime-info-spec-latest.html It mentions this file is updated with update-mime-database, but what if I just want to add a new entry to it? A hex editor? Anyway, I ended up using hexer to make a new magic file in ~/.local/share/mime/ with only the entry I wanted to add and the MIME-Magic header. It seems to work (assuming I will only ever deal with docx for now).

        00000000: 4d 49 4d 45 2d 4d 61 67 69 63 00 0a 5b 36 30 3a  MIME-Magic..[60:
        00000010: 61 70 70 6c 69 63 61 74 69 6f 6e 2f 76 6e 64 2e  application/vnd.
        00000020: 6f 70 65 6e 78 6d 6c 66 6f 72 6d 61 74 73 2d 6f  openxmlformats-o
        00000030: 66 66 69 63 65 64 6f 63 75 6d 65 6e 74 2e 77 6f  fficedocument.wo
        00000040: 72 64 70 72 6f 63 65 73 73 69 6e 67 6d 6c 2e 64  rdprocessingml.d
        00000050: 6f 63 75 6d 65 6e 74 5d 0a 3e 30 3d 00 08 50 4b  ocument].>0=..PK
        00000060: 03 04 14 00 06 00 0a -- -- -- -- -- -- -- -- --  .......---------

    Read the article

  • How do I align my partition table properly?

    - by Jorge Castro
    I am in the process of building my first RAID5 array. I've used mdadm to create the following setup:

        root@bondigas:~# mdadm --detail /dev/md1
        /dev/md1:
                Version : 00.90
          Creation Time : Wed Oct 20 20:00:41 2010
             Raid Level : raid5
             Array Size : 5860543488 (5589.05 GiB 6001.20 GB)
          Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB)
           Raid Devices : 4
          Total Devices : 4
        Preferred Minor : 1
            Persistence : Superblock is persistent
            Update Time : Wed Oct 20 20:13:48 2010
                  State : clean, degraded, recovering
         Active Devices : 3
        Working Devices : 4
         Failed Devices : 0
          Spare Devices : 1
                 Layout : left-symmetric
             Chunk Size : 64K
         Rebuild Status : 1% complete
                   UUID : f6dc829e:aa29b476:edd1ef19:85032322 (local to host bondigas)
                 Events : 0.12

            Number   Major   Minor   RaidDevice State
               0       8       16        0      active sync   /dev/sdb
               1       8       32        1      active sync   /dev/sdc
               2       8       48        2      active sync   /dev/sdd
               4       8       64        3      spare rebuilding   /dev/sde

    While that was going, I decided to format the beast with the following command:

        root@bondigas:~# mkfs.ext4 /dev/md1p1
        mke2fs 1.41.11 (14-Mar-2010)
        /dev/md1p1 alignment is offset by 63488 bytes.
        This may result in very poor performance, (re)-partitioning suggested.
        Filesystem label=
        OS type: Linux
        Block size=4096 (log=2)
        Fragment size=4096 (log=2)
        Stride=16 blocks, Stripe width=48 blocks
        97853440 inodes, 391394047 blocks
        19569702 blocks (5.00%) reserved for the super user
        First data block=0
        Maximum filesystem blocks=0
        11945 block groups
        32768 blocks per group, 32768 fragments per group
        8192 inodes per group
        Superblock backups stored on blocks:
                32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
                2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616,
                78675968, 102400000, 214990848
        Writing inode tables: ^C 27/11945
        root@bondigas:~# ^C

    I am unsure what to do about "/dev/md1p1 alignment is offset by 63488 bytes." and how to partition the disks to match, so that I can format the array properly.

    Read the article

  • Issuing Current Time Increments in StreamInsight (A Practical Example)

    The issuing of a Current Time Increment, Cti, in StreamInsight is very definitely one of the most important concepts to learn if you want your streams to be responsive. A full discussion of how to issue Ctis is beyond the scope of this article, but a very good explanation, in addition to Books Online, can be found in these three articles by a member of the StreamInsight team at Microsoft, Ciprian Gerea:

        Time in StreamInsight Series
        http://blogs.msdn.com/b/streaminsight/archive/2010/07/23/time-in-streaminsight-i.aspx
        http://blogs.msdn.com/b/streaminsight/archive/2010/07/30/time-in-streaminsight-ii.aspx
        http://blogs.msdn.com/b/streaminsight/archive/2010/08/03/time-in-streaminsight-iii.aspx

    A lot of the problems I see with unresponsive or stuck streams on the MSDN forums have to do with how Ctis are enqueued or, in a lot of cases, not enqueued. If you enqueue events and never enqueue a Cti, then StreamInsight will be perfectly happy. You, on the other hand, will never see data on the output, as you have not told StreamInsight to flush the stream. This article deals with a specific implementation problem I had recently whilst working on a StreamInsight project. I look at some possible options and discuss why they would not work before showing the way I solved the problem.

    The stream of data I was dealing with on this project was very bursty; that is to say, when events were flowing they came through very quickly and in large numbers (1000 events/sec), but when the stream calmed down it could be a few seconds between each event. When enqueuing events into the StreamInsight engine, it is best practice to do so with a StartTime that is given to you by the system producing the event. StreamInsight processes events, and it doesn't matter whether those events are being pushed into the engine by a source system or being read from something like a flat file in a directory somewhere. You can apply the same logic and temporal algebra to both situations. Reading from a file is an excellent example of where the time of the event on the source itself is very important: we could be reading that file a long time after it was written. Being able to read the StartTime from the events allows us to define windows that will hold the correct sets of events. I was able to do this with my stream, but this is where my problems started. Below is a very simple script to create a SQL Server table and populate it with sample data that will show exactly the problem I had:

        CREATE TABLE [dbo].[t]
        (
            [c1] [int] PRIMARY KEY,
            [c2] [datetime] NULL
        )

        INSERT t VALUES (1,'20100810'),(2,'20100810'),(3,'20100810')

    Column c2 defines the StartTime of the event on the source, and as you can see, the value in all 3 rows of data is the same. If we read Ciprian's articles, we know that we can define how Ctis get injected into the stream in 3 different places:

        1. The stream definition
        2. The input factory
        3. The input adapter

    I personally have always been a fan of enqueuing Ctis through the factory.
    Below is code typical of what I would use to do this. On the class itself I do some inheriting:

        public class SimpleInputFactory :
            ITypedInputAdapterFactory<SimpleInputConfig>,
            ITypedDeclareAdvanceTimeProperties<SimpleInputConfig>

    And then I implement the following function:

        public AdapterAdvanceTimeSettings DeclareAdvanceTimeProperties<TPayload>(SimpleInputConfig configInfo, EventShape eventShape)
        {
            return new AdapterAdvanceTimeSettings(
                new AdvanceTimeGenerationSettings(configInfo.CtiFrequency, TimeSpan.FromTicks(-1)),
                AdvanceTimePolicy.Adjust);
        }

    The configInfo.CtiFrequency property is a value I pass through to define after how many events I want a Cti to be injected, which in turn will flush the stream of data; I usually pass a value of 1 for this setting. The second parameter determines the Cti timestamp in terms of a delay relative to the events: -1 ticks in the past results in 1 tick in the future, i.e., ahead of the event. The problem with this method, though, is that if consecutive events have the same StartTime, then only one of those events will be enqueued. In this example I use the following to define how I assign the StartTime of my events:

        currEvent.StartTime = (DateTimeOffset)dt.c2;

    If I go ahead and run my StreamInsight process with this configuration, I can see on the output adapter that two events have been removed. To see this in a little more depth, I can use the StreamInsight debugger and see what happens internally. What is happening here is that the first event arrives and a Cti is injected with a time of 1 tick after the StartTime of that event (also the EndTime of the event). The second event arrives with a StartTime before the Cti, and even though we specified AdvanceTimePolicy.Adjust on the factory, we know that a point event can never be adjusted like this, so the event is dropped. The same happens for the third event as well (the second and third events get trumped by the Cti). For a more detailed discussion of why this happens, look here: http://www.sqlis.com/sqlis/post/AdvanceTimePolicy-and-Point-Event-Streams-In-StreamInsight.aspx

    We end up with a single event being pushed into the output adapter, and our result now makes sense. The next way I tried to solve this problem was by changing the value of the second parameter to TimeSpan.Zero. Here is how my factory code now looks:

        public AdapterAdvanceTimeSettings DeclareAdvanceTimeProperties<TPayload>(SimpleInputConfig configInfo, EventShape eventShape)
        {
            return new AdapterAdvanceTimeSettings(
                new AdvanceTimeGenerationSettings(configInfo.CtiFrequency, TimeSpan.Zero),
                AdvanceTimePolicy.Adjust);
        }

    What I am doing here is declaring a policy that says: inject a Cti together with every event and stamp it with a StartTime equal to the StartTime of the event itself (TimeSpan.Zero). This method has plus points as well as a downside. The upside is that no events will be lost by having the same StartTime as previous events. The downside is that because the Cti is declared with the StartTime of the event itself, it does not actually flush that particular event, because in the StreamInsight algebra a Cti commits only those events that occurred strictly before it. To flush the events, we need a Cti to be enqueued with a greater StartTime than the events themselves. When I ran this configuration, all that got through was the Cti and none of the events. The debugger output shows the stamps on the Cti and the events themselves.
    Because the Cti issued has the same timestamp (StartTime) as the events, none of the events get flushed. I was nearly there, but not quite. Because my stream was bursty, it was possible that the next event would not come along for a few seconds, and this was far too long for an event to be enqueued and not flushed to the output adapter. I needed another solution. Two possible solutions crossed my mind, although only one of them made sense once I explored it some more:

        1. Where multiple events have the same StartTime, add 1 tick to the first event,
           two to the second, three to the third, etc., thereby giving them unique
           StartTime values.
        2. Add a timer to manually inject Ctis.

    The problem with the first implementation is that I would be giving the events a new StartTime, which would cause the following problems. If I want to define windows over the stream, then some events may not be captured in the right windows, and therefore any calculations I did on those windows would be wrong. And what would happen if we had 10,000 events with the same StartTime? I would enqueue them with StartTime + n ticks. Along comes a genuine event with a StartTime of the very first event + 1 tick. It is now too far in the past as far as my stream is concerned, and it would be dropped. Not what I would want to do at all. I decided then to look at the timer-based solution. I created a timer on my input adapter that elapsed every 200 ms:

        private Timer tmr;

        public SimpleInputAdapter(SimpleInputConfig configInfo)
        {
            ctx = new SimpleTimeExtractDataContext(configInfo.ConnectionString);
            this.configInfo = configInfo;
            tmr = new Timer(200);
            tmr.Elapsed += new ElapsedEventHandler(t_Elapsed);
            tmr.Enabled = true;
        }

        void t_Elapsed(object sender, ElapsedEventArgs e)
        {
            ts = DateTime.Now - dtCtiIssued;
            if (ts.TotalMilliseconds >= 200 && TimerIssuedCti == false)
            {
                EnqueueCtiEvent(System.DateTime.Now.AddTicks(-100));
                TimerIssuedCti = true;
            }
        }

    In the t_Elapsed event handler, I find the difference in time between now and when the last event was processed (dtCtiIssued). I then check whether that difference is greater than or equal to 200 ms and whether the last issuing of a Cti was done by the timer or by a genuine event (TimerIssuedCti). If I didn't do this check, then I would enqueue a Cti every time the timer elapsed, which is not something I wanted. If the difference between the two times is greater than or equal to 200 ms and the last Cti enqueued came from a real event, then I issue a Cti through the timer to flush the event queue; otherwise I do nothing. When I enqueue the events into my stream in my ProduceEvents method, I also set the values of dtCtiIssued and TimerIssuedCti:

        currEvent = CreateInsertEvent();
        currEvent.StartTime = (DateTimeOffset)dt.c2;
        TimerIssuedCti = false;
        dtCtiIssued = currEvent.StartTime;

    When I run this configuration, the first Cti gets enqueued as before, but then another is enqueued by the timer, and because this has a later timestamp it flushes the enqueued events through the engine.

    Conclusion

    Hopefully this has shown how the enqueuing of Ctis can have a dramatic effect on the responsiveness of your output in StreamInsight. Understanding the temporal nature of the product is, for me, one of the most important things you can learn. I have attached my solution for the demos. It is all in one project, and testing each variation is a simple matter of commenting and un-commenting the parts of the code we have been dealing with here.

    Read the article

  • REST: How to store and reuse REST call queries

    - by Jason Holland
    I'm learning C# by programming a real monstrosity of an application for personal use. Part of my application uses several SPARQL queries, like so:

        const string ArtistByRdfsLabel = @"
            PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
            PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

            SELECT DISTINCT ?artist
            WHERE {{
                {{
                    ?artist rdf:type <http://dbpedia.org/ontology/MusicalArtist> .
                    ?artist rdfs:label ?rdfsLabel .
                }}
                UNION
                {{
                    ?artist rdf:type <http://dbpedia.org/ontology/Band> .
                    ?artist rdfs:label ?rdfsLabel .
                }}
                FILTER ( str(?rdfsLabel) = '{0}' )
            }}";

        string Query = String.Format(ArtistByRdfsLabel, Artist);

    I don't like the idea of keeping all these queries in the same class that uses them, so I thought I would just move them into their own dedicated class to remove clutter from my RestClient class. I'm used to working with SQL Server and just wrapping every query in a stored procedure, but since this is not SQL Server, I'm scratching my head over what would be best for these SPARQL queries. Are there any better approaches to storing these queries using any special C# language features (or general, non-C#-specific approaches) that I may not already know about?

    EDIT: Really, these SPARQL queries aren't anything special. They are just blobs of text that I later want to grab, insert some parameters into via String.Format, and send in a REST call. I suppose you could think of them the same as any SQL query kept in the application layer; I just never practiced keeping SQL queries in the application layer, so I'm wondering if there are any "standard" practices for this type of thing.
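
    One conventional approach, offered here as a sketch rather than an answer from the original thread, is exactly the move the poster describes: a dedicated static class that owns the templates and exposes binding methods, so the REST client never touches raw query text (the class name and the shortened query below are made up):

        // SparqlQueries.cs - illustrative only
        using System;

        public static class SparqlQueries
        {
            // Doubled braces survive String.Format as literal { and }.
            const string ArtistByRdfsLabelTemplate = @"
                PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
                SELECT DISTINCT ?artist
                WHERE {{
                    ?artist rdfs:label ?rdfsLabel .
                    FILTER ( str(?rdfsLabel) = '{0}' )
                }}";

            public static string ArtistByRdfsLabel(string artist)
            {
                // String.Format does no escaping, so a production version should
                // sanitize the value before splicing it into the query text.
                return String.Format(ArtistByRdfsLabelTemplate, artist);
            }
        }

    Callers then write string query = SparqlQueries.ArtistByRdfsLabel(artist); and the templates can later migrate into embedded resources or a .resx file without touching the call sites.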

    Read the article

  • April 2013 Release of the Ajax Control Toolkit

    - by Stephen.Walther
    I'm excited to announce the April 2013 release of the Ajax Control Toolkit. For this release, we focused on improving two controls: the AjaxFileUpload and the MaskedEdit controls. You can download the latest release from CodePlex at http://AjaxControlToolkit.CodePlex.com or, better yet, install it with the NuGet command Install-Package AjaxControlToolkit within Visual Studio 2010/2012. There are three builds of the Ajax Control Toolkit: .NET 3.5, .NET 4.0, and .NET 4.5.

    A Better AjaxFileUpload Control

    We completely rewrote the AjaxFileUpload control for this release. We had two primary goals. First, we wanted to support uploading really large files. In particular, we wanted to support uploading multi-gigabyte files such as video files or application files. Second, we wanted to support showing upload progress on as many browsers as possible. The previous version of the AjaxFileUpload could show upload progress when used with Google Chrome or Mozilla Firefox, but not when used with Apple Safari or Microsoft Internet Explorer. The new version of the AjaxFileUpload control shows upload progress when used with any browser.

    Using the AjaxFileUpload Control

    Let me walk through using the AjaxFileUpload in the most basic scenario; the following sections explain some of its more advanced features. Here's how you can declare the AjaxFileUpload control in a page:

        <ajaxToolkit:ToolkitScriptManager runat="server" />

        <ajaxToolkit:AjaxFileUpload
            ID="AjaxFileUpload1"
            AllowedFileTypes="mp4"
            OnUploadComplete="AjaxFileUpload1_UploadComplete"
            runat="server" />

    The exact appearance of the AjaxFileUpload control depends on the features that a browser supports; Google Chrome, for example, supports drag-and-drop upload. Notice that the page above includes two Ajax Control Toolkit controls: the AjaxFileUpload and the ToolkitScriptManager control. You always need to include the ToolkitScriptManager in any page which uses Ajax Control Toolkit controls. The AjaxFileUpload control declared in the page above includes an event handler for its UploadComplete event. This event handler is declared in the code-behind page like this:

        protected void AjaxFileUpload1_UploadComplete(object sender, AjaxControlToolkit.AjaxFileUploadEventArgs e)
        {
            // Save uploaded file to App_Data folder
            AjaxFileUpload1.SaveAs(MapPath("~/App_Data/" + e.FileName));
        }

    This method saves the uploaded file to your website's App_Data folder. I'm assuming that you have an App_Data folder in your project; if you don't, then you need to create one or you will get an error. There is one more thing that you must do in order to get the AjaxFileUpload control to work. The AjaxFileUpload control relies on an HTTP handler named AjaxFileUploadHandler.axd.
    You need to declare this handler in your application's root web.config file like this:

        <configuration>
            <system.web>
                <compilation debug="true" targetFramework="4.5" />
                <httpRuntime targetFramework="4.5" maxRequestLength="42949672" />
                <httpHandlers>
                    <add verb="*" path="AjaxFileUploadHandler.axd"
                         type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit"/>
                </httpHandlers>
            </system.web>
            <system.webServer>
                <validation validateIntegratedModeConfiguration="false"/>
                <handlers>
                    <add name="AjaxFileUploadHandler" verb="*" path="AjaxFileUploadHandler.axd"
                         type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit"/>
                </handlers>
                <security>
                    <requestFiltering>
                        <requestLimits maxAllowedContentLength="4294967295"/>
                    </requestFiltering>
                </security>
            </system.webServer>
        </configuration>

    Notice that the web.config file above also contains configuration settings for maxRequestLength and maxAllowedContentLength. You need to assign large values to these settings, as I did in the web.config file above, in order to accept large file uploads.

    Supporting Chunked File Uploads

    Because one of our primary goals with this release was support for large file uploads, we added support for client-side chunking. When you upload a file using a browser which fully supports the HTML5 File API, such as Google Chrome or Mozilla Firefox, the file is uploaded in multiple chunks. You can see chunking in action by opening the F12 developer tools in your browser and observing the Network tab: a crazy number of distinct post requests is made (about 360 distinct requests for a 1 gigabyte file). Each post request looks like this:

        http://localhost:24338/AjaxFileUploadHandler.axd?contextKey={DA8BEDC8-B952-4d5d-8CC2-59FE922E2923}&fileId=B7CCE31C-6AB1-BB28-2940-49E0C9B81C64&fileName=Sita_Sings_the_Blues_480p_2150kbps.mp4&chunked=true&firstChunk=false

    Each request posts another chunk of the file being uploaded. Notice that the request URL includes a chunked=true parameter which indicates that the browser is breaking the file being uploaded into multiple chunks.

    Showing Upload Progress on All Browsers

    The previous version of the AjaxFileUpload control could display upload progress only with browsers which fully support the HTML5 File API. The new version can display upload progress with all browsers. If a browser does not fully support the HTML5 File API, then the browser polls the server every few seconds with an Ajax request to determine the percentage of the file that has been uploaded. This technique works with any browser which supports making Ajax requests. There is one catch: be warned that this new feature only works with the .NET 4.0 and .NET 4.5 versions of the AjaxControlToolkit. To show upload progress, we are taking advantage of the new ASP.NET HttpRequest.GetBufferedInputStream() and HttpRequest.GetBufferlessInputStream() methods, which are not supported by .NET 3.5. For example, here is what a polling request looks like in the Network tab when you use the AjaxFileUpload with Microsoft Internet Explorer:

        GET /WebForm1.aspx?contextKey={DA8BEDC8-B952-4d5d-8CC2-59FE922E2923}&poll=1&guid=9206FF94-76F9-B197-D1BC-EA9AD282806B HTTP/1.1

    Notice that each request includes a poll=1 parameter. This parameter indicates that this is a polling request to get the size of the file buffered on the server.
    The response body of a polling request reports how much of the file has been buffered on the server so far, for example about 20% part-way through an upload.

    Buffering to a Temporary File

    When you upload a file using the AjaxFileUpload control, the file upload is buffered to a temporary file located at Path.GetTempPath(). When you call the SaveAs() method, as we did in the sample page above, the temporary file is copied to a new file and then deleted. If you don't call the SaveAs() method, then you must ensure that the temporary file gets deleted yourself. For example, if you want to save the file to a database, then you will never call the SaveAs() method and you are responsible for deleting the file. The easiest way to delete the temporary file is to call the AjaxFileUploadEventArgs.DeleteTemporaryData() method in the UploadComplete handler:

        protected void AjaxFileUpload1_UploadComplete(object sender, AjaxControlToolkit.AjaxFileUploadEventArgs e)
        {
            // Save uploaded file to a database table, then clean up
            e.DeleteTemporaryData();
        }

    You can also call the static AjaxFileUpload.CleanAllTemporaryData() method to delete all temporary data, not only the temporary data related to the current file upload. For example, you might want to call this method on application start to ensure that all temporary data is removed whenever your application restarts.

    A Better MaskedEdit Extender

    This release of the Ajax Control Toolkit contains bug fixes for the top-voted issues related to the MaskedEdit control. We closed over 25 MaskedEdit issues. Here is a complete list of the issues addressed with this release:

        17302  MaskedEditExtender MaskType=Date, Mask=99/99/99 undefined JS error
        11758  MaskedEdit causes error in JScript when working with 2-digit years
        18810  MaskedEditExtender/validator date validation issue
        23236  MaskEditValidator does not work with date input using format dd/mm/yyyy
        23042  Webkit-based browsers (Safari, Chrome) and MaskedEditExtender
        26685  MaskedEditExtender (ClearMaskOnLostFocus=false) adds a zero character in the focused target textbox
        16109  MaskedEditExtender: negative amount, followed by decimal, sets value to positive
        11522  MaskEditExtender of AjaxToolKit-1.0.10618.0 does not work properly for Hungarian culture
        25988  MaskedEditExtender – CultureName (HU-hu) > DateSeparator
        23221  MaskedEditExtender date separator problem
        15233  Day and month swap in dynamic user control
        15492  MaskedEditExtender with ClearMaskOnLostFocus and with MaskedEditValidator with ClientValidationFunction
        9389   MaskedEditValidator – when on no entry
        11392  MaskedEdit number format messed up
        11819  MaskedEditExtender erases all values beyond first comma separator
        13423  MaskedEdit(Extender/Validator) combo problem
        16111  MaskedEditValidator cannot validate date with DayMonthYear in UserDateFormat of MaskedEditExtender
        10901  MaskedEdit: the months and date fields swap values when you hit submit if UserDateFormat is set
        15190  MaskedEditValidator can't make use of MaskedEditExtender's UserDateFormat property
        13898  MaskedEdit extender with custom date type mask gives javascript error
        14692  MaskedEdit error in "yy/MM/dd" format
        16186  MaskedEditExtender does not handle century properly in a date mask
        26456  MaskedEditBehavior.ConvFmtTime : function(input,loadFirst) fails if this._CultureAMPMPlaceholder == ""
        21474  Error on MaskedEditExtender working with number format
        23023  MaskedEditExtender's ClearMaskOnLostFocus property causes problems for MaskedEditValidator when set to false
        13656  MaskedEditValidator Min/Max date value issue

    Conclusion

    This latest release of the Ajax Control Toolkit required many hours of work by a team of talented developers. I want to thank the members of the Superexpert team for the long hours which they put into this release.

    Read the article

  • Parsing scripts that use curly braces

    - by Keikoku
    To get an idea of what I'm doing: I am writing a Python parser that will parse DirectX .x text files. The problem I have deals with how the files are formatted. Although I'm writing it in Python, I'm looking for general algorithms for dealing with this sort of parsing. .x files define data using templates. The format of a template is:

        template_name {
            [some_data]
        }

    My goal is to parse the file line by line and, whenever I come across a template, deal with it accordingly. My initial approach was to check whether a line contains an opening or closing brace. If it's an open brace, then I check what the template name is. Now the catch here is that the open brace doesn't have to occur on the same line as the template name. It could just as well be:

        template_name
        {
            [some_data]
        }

    So if I were to use my "open brace exists" criterion, it won't work for any files that use the latter format. A lot of languages also use curly braces (though I'm not sure when people would be parsing the scripts themselves), so I was wondering if anyone knows how to reliably get the template name (in some other languages it could just as well be a function name, though there aren't any keywords to look for).
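
    For illustration only (in C#, though the asker works in Python), here is a minimal sketch of the usual fix: scan characters rather than lines, remember the last identifier seen, and bind the next '{' to that identifier no matter how many newlines separate them. The sample input is made up and ignores the rest of the .x grammar:

        // TemplateScanner.cs - illustrative sketch of brace-tolerant scanning
        using System;
        using System.Collections.Generic;
        using System.Text;

        class TemplateScanner
        {
            static void Main()
            {
                string source = "Frame {\n  FrameTransformMatrix\n  {\n  }\n}";
                var names = new Stack<string>();
                var word = new StringBuilder();
                string lastWord = "";

                foreach (char c in source)
                {
                    if (char.IsLetterOrDigit(c) || c == '_')
                    {
                        word.Append(c);           // grow the current identifier
                    }
                    else
                    {
                        if (word.Length > 0)      // an identifier just ended
                        {
                            lastWord = word.ToString();
                            word.Clear();
                        }
                        if (c == '{')             // the brace may arrive lines later,
                        {                         // but it still binds to lastWord
                            names.Push(lastWord);
                            Console.WriteLine("open:  " + names.Peek());
                        }
                        else if (c == '}')
                        {
                            Console.WriteLine("close: " + names.Pop());
                        }
                    }
                }
            }
        }

    Running it prints open/close pairs for Frame and FrameTransformMatrix, whichever line the braces land on.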

    Read the article

  • Page debugging got easier in UCM 11g

    - by kyle.hatlestad
    UCM is famous for the extra parameters you can add to the URL to do different things. You can add &IsJava=1 to get all of the local data and result set information that comes back from the idc_service. You can add &IsSoap=1 and get back a SOAP message with that information, or &IsJson=1 to have it sent in JSON format. There are also ones that change the display, like &coreContentOnly=1, which hides the footer and navigation on the page. In 10g, you could add &ScriptDebugTrace=1 and it would display, at the bottom of the page, the list of resources that were called through includes or eval functions. It would list them in nested order, so you could see the order in which they were called and which components overrode each other.

    But in 11g, that parameter flag no longer works. Instead, you get a much more powerful one called &IsPageDebug=1. When you add that to a page, you get a small gray tab at the bottom right-hand part of the browser window. When you click it, it expands and lets you choose several pieces of information to display. You can select 'idocscript trace' and display the nested includes you used to get with ScriptDebugTrace. You can select 'initial binder' and see the local data and result sets coming back from the service, just as you would with IsJava, except that this display formats the results in easy-to-read tables (instead of raw HDA format). Then you can get the final binder, which contains all of the local data and result sets after executing all of the includes for the display of the page (and not just from the service call). And then there is a 'javascript log' for reporting on the javascript functions and times being executed on the page. Together, these new data displays make page debugging much easier in 11g.

    *Note: This post also applies to Universal Records Management (URM).
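
    As a quick illustration (the server host here is hypothetical; the flag is simply appended to whatever page URL you are debugging):

        http://ucmserver/cs/idcplg?IdcService=GET_DOC_PAGE&Action=GetTemplatePage&Page=HOME_PAGE&IsPageDebug=1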

    Read the article

  • Performance Optimization – It Is Faster When You Can Measure It

    - by Alois Kraus
    Performance optimization in bigger systems is hard because the measured numbers can vary greatly depending on the measurement method of your choice. To measure the execution timing of specific methods in your application you usually use one of the following:

    Time Measurement Method – Potential Pitfalls

    Stopwatch – The most accurate method on recent processors. Internally it uses the RDTSC instruction. Since the counter is processor specific, you can get greatly different values when your thread is scheduled to another core or the core goes into a power saving mode. But things do change, luckily. Intel's Designer's vol. 3b, section 16.11.1: "16.11.1 Invariant TSC The time stamp counter in newer processors may support an enhancement, referred to as invariant TSC. Processor's support for invariant TSC is indicated by CPUID.80000007H:EDX[8]. The invariant TSC will run at a constant rate in all ACPI P-, C-, and T-states. This is the architectural behavior moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource."

    DateTime.Now – Good, but it has only a resolution of about 16 ms, which may not be enough if you want more accuracy.

    Reporting Method – Potential Pitfalls

    Console.WriteLine – OK if not called too often.

    Debug.Print – Are you really measuring performance with debug builds? Shame on you.

    Trace.WriteLine – Better, but you need to plug in a good output listener like a trace file. Be aware that the first time you call this method it will read your app.config and deserialize your system.diagnostics section, which also takes time.

    In general it is a good idea to use some tracing library which measures the timing for you, so that you only need to decorate some methods with tracing and can later verify whether something has changed for the better or worse. In my previous article I compared measuring performance to quantum mechanics. This analogy works surprisingly well. When you measure a quantum system there is a lower limit to how accurately you can measure something. The Heisenberg uncertainty relation tells us that you cannot measure both the momentum and the position of a particle at the same time with arbitrary accuracy. For programmers the two variables are execution time and memory allocations. If you try to measure the timings of all methods in your application, you will need to store them somewhere. The fastest storage space besides the CPU cache is memory. But if your timing values consume all available memory, there is no memory left for the actual application to run. On the other hand, if you try to record all memory allocations of your application, you will also need to store the data somewhere, which costs you both memory and execution time. These constraints are always there, and regardless of how good the marketing of tool vendors for performance and memory profilers is: any measurement will disturb the system in a non-predictable way. Commercial tool vendors will tell you they calculate this overhead and subtract it from the measured values to give you the most accurate numbers, but in reality that is not entirely true. After falling into the trap of trusting the profiler timings several times, I have got into the habit of:

    1. Measure with a profiler to get an idea where potential bottlenecks are.
    2. Measure again with tracing only the specific methods, to check whether a method is really worth optimizing.
    3. Optimize it.
    4. Measure again. Be surprised that your optimization has made things worse.
    5. Think harder.
    6. Implement something that really works.
    7. Measure again.
    8. Finished! Or look for the next bottleneck.

    Recently I have looked into issues with serialization performance. For serialization, DataContractSerializer was used, and I was not sure whether XML is really the most optimal wire format. After looking around I found protobuf-net, which uses Google's Protocol Buffer format, a compact binary serialization format. What is good for Google should be good for us. A small sample app to check out performance was a matter of minutes:

    using ProtoBuf;
    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Reflection;
    using System.Runtime.Serialization;

    [DataContract, Serializable]
    class Data
    {
        [DataMember(Order = 1)] public int IntValue { get; set; }
        [DataMember(Order = 2)] public string StringValue { get; set; }
        [DataMember(Order = 3)] public bool IsActivated { get; set; }
        [DataMember(Order = 4)] public BindingFlags Flags { get; set; }
    }

    class Program
    {
        static MemoryStream _Stream = new MemoryStream();
        static MemoryStream Stream
        {
            get
            {
                _Stream.Position = 0;
                _Stream.SetLength(0);
                return _Stream;
            }
        }

        static void Main(string[] args)
        {
            DataContractSerializer ser = new DataContractSerializer(typeof(Data));
            Data data = new Data
            {
                IntValue = 100,
                IsActivated = true,
                StringValue = "Hi this is a small string value to check if serialization does work as expected"
            };
            var sw = Stopwatch.StartNew();
            int Runs = 1000 * 1000;
            for (int i = 0; i < Runs; i++)
            {
                //ser.WriteObject(Stream, data);
                Serializer.Serialize<Data>(Stream, data);
            }
            sw.Stop();
            Console.WriteLine("Did take {0:N0}ms for {1:N0} objects", sw.Elapsed.TotalMilliseconds, Runs);
            Console.ReadLine();
        }
    }

    The results are indeed promising:

    Serializer – Time in ms – N objects
    protobuf-net – 807 – 1000000
    DataContract – 4402 – 1000000

    Nearly a factor of 5 faster and a much more compact wire format. Let's use it! After switching over to protobuf-net, the transferred wire data dropped by a factor of two (good), but the performance worsened by nearly a factor of two. How is that possible? We have measured it! Protobuf-net is much faster! As it turns out, protobuf-net is faster, but it has a cost: the first time a type is de/serialized it uses some very smart code-gen, which does not come for free. Let's try to measure this by setting the Runs value of our performance test app not to one million but to 1:

    Serializer – Time in ms – N objects
    protobuf-net – 85 – 1
    DataContract – 24 – 1

    The code-gen overhead is significant and can take up to 200 ms for more complex types. The break-even point where the code-gen cost is amortized by the faster serialization performance is (assuming small objects) somewhere between 20,000 and 40,000 serialized objects. As it turned out, my specific scenario involved about 100 types and 1,000 serializations in total. That explains why the good old DataContractSerializer is not so easy to take out of business. The approach I finally ended up with was to reduce the number of types and to serialize primitive types via BinaryWriter directly, which turned out to be a pretty good alternative. It sounded good until I measured again and found that my optimizations so far did not help much. After looking more deeply at the profiling data, I found that one of the 1,000 calls took 50% of the time. So how do I find out which call it was?
    Normal profilers fall short in this discipline. A relatively unknown (and undeservedly so) profiler is SpeedTrace, which, unlike normal profilers, creates traces of your application by instrumenting your IL code at runtime. This way you can look at the full call stack of the one slow serializer call to find out whether this stack was something special. Unfortunately the call stack showed nothing special. But luckily I have my own tracing as well, and I could see that the slow serializer call happened during the serialization of a bool value. When, after much analysis, you encounter something unreasonable that you cannot explain, chances are good that your thread was suspended by the garbage collector. Whether there is a problem with excessive GCs remains to be investigated, but so far the serialization performance seems to be mostly OK. When you profile a complex system with many interconnected processes, you can never be sure that the timings you just measured are accurate at all. Some process might be hitting the disk, slowing things down for all other processes for some seconds as well. There is a big difference between warm and cold startup. If you restart all processes you can basically forget the first run, because the OS disk cache, JIT, and GCs make the measured timings very volatile. When you are in need of a random number generator, you should measure cold startup times of a sufficiently complex system. After the first run you can try again and get different and much lower numbers. Now try again at least two more times to get a feeling for how stable the numbers are. Oh, and try to do the same thing the next day. It might be that the bottleneck you found yesterday is gone today. Thanks to the GC and other random stuff, it can become pretty hard to find anything worth optimizing once no big bottlenecks except bloatloads of code are left. When I have found a spot worth optimizing, I make the code changes and measure again to check whether something has changed. If it has got slower, and I am certain that my change should have made it faster, I can blame the GC again. The thing is that if you optimize and allocate fewer objects, the GC times will shift to some other location. If you are unlucky, this will make your faster code slower, because you now see GCs at times where there were none before. This is where it gets really tricky. A safe escape hatch is to create a repro of the slow code in an isolated application so you can change things quickly in a reliable manner. Then the normal profilers also start working again. As Vance Morrison points out, it is much more complex to profile a system against the wall clock than to optimize for CPU time. The reason is that for wall clock time analysis you need to understand how your system works and which threads (if you have not one but perhaps 20) are causing a visible delay to the end user, and which threads can wait a long time without affecting the user experience at all. Next time: commercial profiler shootout.
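    As an aside, the break-even arithmetic above is easy to reproduce yourself. A minimal sketch, assuming the same Data class and protobuf-net Serializer API as the sample earlier, that separates the first, code-gen-paying call from the warmed-up calls:

    // Time the very first Serialize<T> call (which pays for code-gen)
    // separately from subsequent warm calls.
    using System;
    using System.Diagnostics;
    using System.IO;
    using ProtoBuf;

    class BreakEvenCheck
    {
        static void Main()
        {
            var stream = new MemoryStream();
            var data = new Data { IntValue = 100, IsActivated = true, StringValue = "warm-up test" };

            var sw = Stopwatch.StartNew();
            Serializer.Serialize<Data>(stream, data);   // first call pays the code-gen cost
            sw.Stop();
            Console.WriteLine("First call:  {0:N3} ms", sw.Elapsed.TotalMilliseconds);

            const int Runs = 10000;
            sw.Restart();
            for (int i = 0; i < Runs; i++)
            {
                stream.SetLength(0);                    // reuse the stream, as in the article
                Serializer.Serialize<Data>(stream, data);
            }
            sw.Stop();
            Console.WriteLine("Warm calls: {0:N6} ms each", sw.Elapsed.TotalMilliseconds / Runs);
            // Break-even vs. another serializer = extra first-call cost divided by
            // the per-call time saved once warm.
        }
    }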

    Read the article

  • guvcview recording video and audio out of synchronisation in Ubuntu 10.10

    - by SIJAR
    I finally got Guvcview, a great piece of software for Logitech webcams, and it does all the stuff one wants out of it. But I'm not satisfied with the video recording: video and audio are out of synchronisation, and the video seems to be in slow motion. Please help me tweak it so that I can get a good video recording with the webcam. Below is the log of Guvcview: ------------------------------------------------------------------------------- guvcview 1.4.1 video_device: /dev/video0 vid_sleep: 0 cap_meth: 1 resolution: 640 x 480 windowsize: 1024 x 715 vert pane: 578 spin behavior: 0 mode: mjpg fps: 1/25 Display Fps: 0 bpp: 0 hwaccel: 1 avi_format: 4 sound: 1 sound Device: 4 sound samp rate: 0 sound Channels: 0 Sound delay: 0 nanosec Sound Format: 85 Pan Step: 2 degrees Tilt Step: 2 degrees Video Filter Flags: 0 image inc: 0 profile(default):/home/sijar/default.gpfl starting portaudio... bt_audio_service_open: connect() failed: Connection refused (111) bt_audio_service_open: connect() failed: Connection refused (111) bt_audio_service_open: connect() failed: Connection refused (111) bt_audio_service_open: connect() failed: Connection refused (111) Cannot connect to server socket err = No such file or directory Cannot connect to server socket jack server is not running or cannot be started language catalog= dir:/usr/share/locale type:UTF-8 lang:en_US.utf8 cat:guvcview.mo mjpg: setting format to 1196444237 capture method = 1 video device: /dev/video0 libv4lconvert: warning more framesizes then I can handle! libv4lconvert: warning more framesizes then I can handle! /dev/video0 - device 1 libv4lconvert: warning more framesizes then I can handle! libv4lconvert: warning more framesizes then I can handle! Init. UVC Camera (046d:0825) (location: usb-0000:00:1d.7-5) { pixelformat = 'YUYV', description = 'YUV 4:2:2 (YUYV)' } { discrete: width = 640, height = 480 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 160, height = 120 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 176, height = 144 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 320, height = 176 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 320, height = 240 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 352, height = 288 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 432, height = 240 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 544, height = 288 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 640, height = 360 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, ... repeats a couple of times ...
vid:046d pid:0825 driver:uvcvideo Adding control for Pan (relative) UVCIOC_CTRL_ADD - Error: Operation not permitted checking format: 1196444237 VIDIOC_G_COMP:: Invalid argument compression control not supported fps is set to 1/25 drawing controls control[0]: 0x980900 Brightness, 0:255:1, default 128 control[0]: 0x980901 Contrast, 0:255:1, default 32 control[0]: 0x980902 Saturation, 0:255:1, default 32 control[0]: 0x98090c White Balance Temperature, Auto, 0:1:1, default 1 control[0]: 0x980913 Gain, 0:255:1, default 0 control[0]: 0x980918 Power Line Frequency, 0:2:1, default 2 control[0]: 0x98091a White Balance Temperature, 0:10000:10, default 4000 control[0]: 0x98091b Sharpness, 0:255:1, default 24 control[0]: 0x98091c Backlight Compensation, 0:1:1, default 1 control[0]: 0x9a0901 Exposure, Auto, 0:3:1, default 3 control[0]: 0x9a0902 Exposure (Absolute), 1:10000:1, default 166 control[0]: 0x9a0903 Exposure, Auto Priority, 0:1:1, default 0 resolutions of format(2) = 19 frame rates of 1º resolution=6 Def. Res: 0 numb. fps:6 --------------------------------------- device #0 Name = Intel 82801DB-ICH4: Intel 82801DB-ICH4 (hw:0,0) Host API = ALSA Max inputs = 2, Max outputs = 2 Def. low input latency = 0.012 Def. low output latency = 0.012 Def. high input latency = 0.046 Def. high output latency = 0.046 Def. sample rate = 44100.00 --------------------------------------- device #1 Name = Intel 82801DB-ICH4: Intel 82801DB-ICH4 - MIC ADC (hw:0,1) Host API = ALSA Max inputs = 2, Max outputs = 0 Def. low input latency = 0.011 Def. low output latency = -1.000 Def. high input latency = 0.043 Def. high output latency = -1.000 Def. sample rate = 48000.00 --------------------------------------- device #2 Name = Intel 82801DB-ICH4: Intel 82801DB-ICH4 - MIC2 ADC (hw:0,2) Host API = ALSA Max inputs = 2, Max outputs = 0 Def. low input latency = 0.011 Def. low output latency = -1.000 Def. high input latency = 0.043 Def. high output latency = -1.000 Def. sample rate = 48000.00 --------------------------------------- device #3 Name = Intel 82801DB-ICH4: Intel 82801DB-ICH4 - ADC2 (hw:0,3) Host API = ALSA Max inputs = 2, Max outputs = 0 Def. low input latency = 0.011 Def. low output latency = -1.000 Def. high input latency = 0.043 Def. high output latency = -1.000 Def. sample rate = 48000.00 --------------------------------------- device #4 Name = Intel 82801DB-ICH4: Intel 82801DB-ICH4 - IEC958 (hw:0,4) Host API = ALSA Max inputs = 0, Max outputs = 2 Def. low input latency = -1.000 Def. low output latency = 0.011 Def. high input latency = -1.000 Def. high output latency = 0.043 Def. sample rate = 48000.00 --------------------------------------- device #5 Name = USB Device 0x46d:0x825: USB Audio (hw:1,0) Host API = ALSA Max inputs = 1, Max outputs = 0 Def. low input latency = 0.011 Def. low output latency = -1.000 Def. high input latency = 0.043 Def. high output latency = -1.000 Def. sample rate = 48000.00 --------------------------------------- device #6 Name = front Host API = ALSA Max inputs = 0, Max outputs = 2 Def. low input latency = -1.000 Def. low output latency = 0.012 Def. high input latency = -1.000 Def. high output latency = 0.046 Def. sample rate = 44100.00 --------------------------------------- device #7 Name = iec958 Host API = ALSA Max inputs = 0, Max outputs = 2 Def. low input latency = -1.000 Def. low output latency = 0.011 Def. high input latency = -1.000 Def. high output latency = 0.043 Def. 
sample rate = 48000.00 --------------------------------------- device #8 Name = spdif Host API = ALSA Max inputs = 0, Max outputs = 2 Def. low input latency = -1.000 Def. low output latency = 0.011 Def. high input latency = -1.000 Def. high output latency = 0.043 Def. sample rate = 48000.00 --------------------------------------- device #9 Name = pulse Host API = ALSA Max inputs = 32, Max outputs = 32 Def. low input latency = 0.012 Def. low output latency = 0.012 Def. high input latency = 0.046 Def. high output latency = 0.046 Def. sample rate = 44100.00 --------------------------------------- device #10 Name = dmix Host API = ALSA Max inputs = 0, Max outputs = 2 Def. low input latency = -1.000 Def. low output latency = 0.043 Def. high input latency = -1.000 Def. high output latency = 0.043 Def. sample rate = 48000.00 --------------------------------------- device #11 [ Default Input, Default Output ] Name = default Host API = ALSA Max inputs = 32, Max outputs = 32 Def. low input latency = 0.012 Def. low output latency = 0.012 Def. high input latency = 0.046 Def. high output latency = 0.046 Def. sample rate = 44100.00 ---------------------------------------------- SampleRate:0 Channels:0 Video driver: x11 A window manager is available VIDIOC_S_EXT_CTRLS for multiple controls failed (error -1) using VIDIOC_S_CTRL for user class controls control(0x0098091a) "White Balance Temperature" failed to set (error -1) VIDIOC_S_EXT_CTRLS for multiple controls failed (error -1) using VIDIOC_S_EXT_CTRLS on single controls for class: 0x009a0000 control(0x009a0902) "Exposure (Absolute)" failed to set (error -1) VIDIOC_S_EXT_CTRLS for multiple controls failed (error -1) using VIDIOC_S_CTRL for user class controls control(0x0098091a) "White Balance Temperature" failed to set (error -1) VIDIOC_S_EXT_CTRLS for multiple controls failed (error -1) using VIDIOC_S_EXT_CTRLS on single controls for class: 0x009a0000 control(0x009a0902) "Exposure (Absolute)" failed to set (error -1) Cap Video toggled: 1 (/home/sijar/Videos/Webcam) 25371756K bytes free on a total of 39908968K (used: 36 %) treshold=51200K using audio codec: 0x0055 Audio frame size is 1152 samples for selected codec IO thread started...OK [libx264 @ 0x8cbd8b0]using cpu capabilities: MMX2 SSE2 Cache64 [libx264 @ 0x8cbd8b0]profile Baseline, level 3.0 [libx264 @ 0x8cbd8b0]non-strictly-monotonic PTS shift sound by -9 ms shift sound by -9 ms shift sound by -9 ms AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data ... repeats a couple of times ... AUDIO: droping audio data (/home/sijar/Videos/Webcam) 25371748K bytes free on a total of 39908968K (used: 36 %) treshold=51200K AUDIO: droping audio data AUDIO: droping audio data ... repeats a couple of times ... Cap Video toggled: 0 Shuting Down IO Thread AUDIO: droping audio data stop= 4426644744000 start=4416533023000 VIDEO: 146 frames in 10111.000000 ms = 14.439719 fps Stoping audio stream Closing audio stream... 
close avi Last message repeated 145 times [libx264 @ 0x8cbd8b0]frame I:2 Avg QP:14.10 size: 24492 [libx264 @ 0x8cbd8b0]frame P:103 Avg QP:16.06 size: 20715 [libx264 @ 0x8cbd8b0]mb I I16..4: 48.4% 0.0% 51.6% [libx264 @ 0x8cbd8b0]mb P I16..4: 57.5% 0.0% 0.0% P16..4: 40.2% 0.0% 0.0% 0.0% 0.0% skip: 2.3% [libx264 @ 0x8cbd8b0]final ratefactor: 62.05 [libx264 @ 0x8cbd8b0]coded y,uvDC,uvAC intra: 79.7% 92.2% 68.4% inter: 62.4% 87.5% 48.0% [libx264 @ 0x8cbd8b0]i16 v,h,dc,p: 23% 17% 41% 19% [libx264 @ 0x8cbd8b0]i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 30% 24% 26% 2% 5% 3% 3% 3% 4% [libx264 @ 0x8cbd8b0]i8c dc,h,v,p: 53% 20% 23% 4% [libx264 @ 0x8cbd8b0]ref P L0: 63.0% 37.0% [libx264 @ 0x8cbd8b0]kb/s:-0.00 total frames encoded: 0 total audio frames encoded: 0 IO thread finished...OK IO Thread finished enabling controls Cap Video toggled: 1 (/home/sijar/Videos/Webcam) 25379744K bytes free on a total of 39908968K (used: 36 %) treshold=51200K using audio codec: 0x0055 Audio frame size is 1152 samples for selected codec IO thread started...OK [libx264 @ 0x8cfba20]using cpu capabilities: MMX2 SSE2 Cache64 [libx264 @ 0x8cfba20]profile Baseline, level 3.0 [libx264 @ 0x8cfba20]non-strictly-monotonic PTS shift sound by -236 ms shift sound by -236 ms shift sound by -236 ms (/home/sijar/Videos/Webcam) 25377044K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25373408K bytes free on a total of 39908968K (used: 36 %) treshold=51200K AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data ... repeats a couple of times ... (/home/sijar/Videos/Webcam) 25370696K bytes free on a total of 39908968K (used: 36 %) treshold=51200K AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data ... repeats a couple of times ... (/home/sijar/Videos/Webcam) 25367680K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25364052K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25360312K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25356628K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25352908K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25349316K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25345552K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25341828K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25338092K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25334412K bytes free on a total of 39908968K (used: 36 %) treshold=51200K Cap Video toggled: 0 Shuting Down IO Thread stop= 4708817235000 start=4578624714000 VIDEO: 1604 frames in 130192.000000 ms = 12.320265 fps Stoping audio stream Closing audio stream... 
close avi Last message repeated 1603 times [libx264 @ 0x8cfba20]frame I:16 Avg QP:14.78 size: 42627 [libx264 @ 0x8cfba20]frame P:1547 Avg QP:16.44 size: 28599 [libx264 @ 0x8cfba20]mb I I16..4: 21.6% 0.0% 78.4% [libx264 @ 0x8cfba20]mb P I16..4: 28.1% 0.0% 0.0% P16..4: 70.5% 0.0% 0.0% 0.0% 0.0% skip: 1.4% [libx264 @ 0x8cfba20]final ratefactor: 88.17 [libx264 @ 0x8cfba20]coded y,uvDC,uvAC intra: 74.4% 95.8% 83.2% inter: 75.2% 94.6% 69.2% [libx264 @ 0x8cfba20]i16 v,h,dc,p: 27% 17% 40% 16% [libx264 @ 0x8cfba20]i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 25% 25% 21% 3% 6% 4% 5% 4% 7% [libx264 @ 0x8cfba20]i8c dc,h,v,p: 61% 18% 18% 4% [libx264 @ 0x8cfba20]ref P L0: 64.0% 36.0% [libx264 @ 0x8cfba20]kb/s:-0.00 total frames encoded: 0 total audio frames encoded: 0 IO thread finished...OK IO Thread finished enabling controls Shuting Down Thread Thread terminated... cleaning Thread allocations: 100% SDL Quit Video Thread finished write /home/sijar/.guvcviewrc OK free audio mutex closed v4l2 strutures free controls free controls - vidState cleaned allocations - 100% Closing portaudio ...OK Closing GTK... OK

    Read the article

  • Caching NHibernate Named Queries

    - by TStewartDev
    I recently started a new job and one of my first tasks was to implement a "popular products" design. The parameters were that it be done with NHibernate and be cached for 24 hours at a time, because the query will be pretty taxing and the results do not need to be constantly up to date. This ended up being tougher than it sounds. The database schema meant a minimum of four joins with filtering and ordering criteria. I decided to use a stored procedure rather than letting NHibernate create the SQL for me. Here is a summary of what I learned (even if I didn't ultimately use all of it):

    - You can't, at the time of this writing, use Fluent NHibernate to configure SQL named queries or imports
    - You can return persistent entities from a stored procedure, and there are a couple of ways to do that
    - You can populate POCOs using the results of a stored procedure, but it isn't quite as obvious
    - You can reuse your named query result mapping other places (avoid duplication)
    - Caching your query results is not at all obvious
    - Testing to see if your cache is working is a pain

    NHibernate does a lot of things right. Having unified, up-to-date, comprehensive, and easy-to-find documentation is not one of them. By the way, if you're new to this, I'll use the terms "named query" and "stored procedure" (from NHibernate's perspective) fairly interchangeably. Technically, a named query can execute any SQL, not just a stored procedure, and a stored procedure doesn't have to be executed from a named query, but for reusability, it seems to me like the best practice. If you're here, chances are good you're looking for answers to a similar problem. You don't want to read about the path, you just want the result. So, here's how to get this thing going.

    The Stored Procedure

    NHibernate has some guidelines when using stored procedures. For Microsoft SQL Server, you have to return a result set. The scalar value that the stored procedure returns is ignored, as are any result sets after the first. Other than that, it's nothing special.

    CREATE PROCEDURE GetPopularProducts
        @StartDate DATETIME,
        @MaxResults INT
    AS
    BEGIN
        SELECT [ProductId], [ProductName], [ImageUrl]
        FROM SomeTableWithJoinsEtc
    END

    The Result Class – PopularProduct

    You have two options to transport your query results to your view (or wherever their final destination is): you can populate an existing mapped entity class in your model, or you can create a new entity class. If you go with the existing model, the advantage is that the query will act as a loader and you'll get full proxied access to the domain model. However, this can be a disadvantage if you require access to related entities that aren't loaded by your results. For example, my PopularProduct has image references. Unless I tie them into the query (thus making it even more complicated and expensive to run), they'll have to be loaded on access, requiring more trips to the database. Since we're trying to avoid trips to the database by using a second-level cache, we should use the second option, which is to create a separate entity for results. This approach is (I believe) in the spirit of the Command-Query Separation principle, and it allows us to flatten our data and optimize our report-generation process from data source to view.

    public class PopularProduct
    {
        public virtual int ProductId { get; set; }
        public virtual string ProductName { get; set; }
        public virtual string ImageUrl { get; set; }
    }

    The NHibernate Mappings (hbm)

    Next up, we need to let NHibernate know about the query and where the results will go.
    Below is the markup for the PopularProduct class. Notice that I'm using the <resultset> element and that it has a name attribute. The name allows us to drop this into our query map and any others, giving us reusability. Also notice the <import> element, which lets NHibernate know about our entity class.

    <?xml version="1.0" encoding="utf-8" ?>
    <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
      <import class="PopularProduct, Infrastructure.NHibernate, Version=1.0.0.0"/>
      <resultset name="PopularProductResultSet">
        <return-scalar column="ProductId" type="System.Int32"/>
        <return-scalar column="ProductName" type="System.String"/>
        <return-scalar column="ImageUrl" type="System.String"/>
      </resultset>
    </hibernate-mapping>

    And now the PopularProductsMap:

    <?xml version="1.0" encoding="utf-8" ?>
    <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
      <sql-query name="GetPopularProducts" resultset-ref="PopularProductResultSet" cacheable="true" cache-mode="normal">
        <query-param name="StartDate" type="System.DateTime" />
        <query-param name="MaxResults" type="System.Int32" />
        exec GetPopularProducts @StartDate = :StartDate, @MaxResults = :MaxResults
      </sql-query>
    </hibernate-mapping>

    The two most important things to notice here are the resultset-ref attribute, which links in our resultset mapping, and the cacheable attribute.

    The Query Class – PopularProductsQuery

    So far, this has been fairly obvious if you're familiar with NHibernate. This next part, maybe not so much. You can implement your query however you want to; for me, I wanted a self-encapsulated Query class, so here's what it looks like:

    public class PopularProductsQuery : IPopularProductsQuery
    {
        private static readonly IResultTransformer ResultTransformer;
        private readonly ISessionBuilder _sessionBuilder;

        static PopularProductsQuery()
        {
            ResultTransformer = Transformers.AliasToBean<PopularProduct>();
        }

        public PopularProductsQuery(ISessionBuilder sessionBuilder)
        {
            _sessionBuilder = sessionBuilder;
        }

        public IList<PopularProduct> GetPopularProducts(DateTime startDate, int maxResults)
        {
            var session = _sessionBuilder.GetSession();
            var popularProducts = session
                .GetNamedQuery("GetPopularProducts")
                .SetCacheable(true)
                .SetCacheRegion("PopularProductsCacheRegion")
                .SetCacheMode(CacheMode.Normal)
                .SetReadOnly(true)
                .SetResultTransformer(ResultTransformer)
                .SetParameter("StartDate", startDate.Date)
                .SetParameter("MaxResults", maxResults)
                .List<PopularProduct>();

            return popularProducts;
        }
    }

    Okay, so let's look at each line of the query execution. The first, GetNamedQuery, matches up with our NHibernate mapping for the sql-query. Next, we set it as cacheable (this is probably redundant since our mapping also specified it, but it can't hurt, right?). Then we set the cache region, which we'll get to in the next section. Set the cache mode (optional, I believe), and my cache is read-only, so I set that as well. The result transformer is very important. This tells NHibernate how to transform your query results into a non-persistent entity. You can see I've defined ResultTransformer in the static constructor using the AliasToBean transformer. The name is obviously leftover from Java/Hibernate. Finally, set your parameters and then call a result method, which will execute the query. Because this is set to cached, you execute this statement every time you run the query and NHibernate will know, based on your parameters, whether to use its cached version or a fresh version.
    The Configuration – hibernate.cfg.xml and Web.config

    You need to explicitly enable second-level caching in your hibernate configuration:

    <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
      <session-factory>
        [...]
        <property name="dialect">NHibernate.Dialect.MsSql2005Dialect</property>
        <property name="cache.provider_class">NHibernate.Caches.SysCache.SysCacheProvider,NHibernate.Caches.SysCache</property>
        <property name="cache.use_query_cache">true</property>
        <property name="cache.use_second_level_cache">true</property>
        [...]
      </session-factory>
    </hibernate-configuration>

    Both properties "use_query_cache" and "use_second_level_cache" are necessary. As this is for a web deployment, we're using SysCache, which relies on ASP.NET's caching. Be aware of this if you're not deploying to the web! You'll have to use a different cache provider. We also need to tell our cache provider (in this case, SysCache) about our caching region:

    <syscache>
      <cache region="PopularProductsCacheRegion" expiration="86400" priority="5" />
    </syscache>

    Here I've set the cache to be valid for 24 hours. This XML snippet goes in your Web.config (or in a separate file referenced by Web.config, which helps keep things tidy).

    The Payoff

    That should be it! At this point, your queries should run once against the database for a given set of parameters and then use the cache thereafter until it expires. You can, of course, adjust settings to work in your particular environment.

    Testing

    Testing your application to ensure it is using the cache is a pain, but if you're like me, you want to know that it's actually working. It's a bit involved, though, so I'll create a separate post for it if comments indicate there is interest.
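    For completeness, consuming code might look something like this; a minimal sketch, reusing the article's IPopularProductsQuery and ISessionBuilder abstractions, with the class and method names here invented for illustration:

    using System;
    using System.Collections.Generic;

    // Hypothetical consumer of the PopularProductsQuery defined above; how the
    // ISessionBuilder is obtained depends on your container or setup.
    public class PopularProductsReport
    {
        private readonly IPopularProductsQuery _query;

        public PopularProductsReport(ISessionBuilder sessionBuilder)
        {
            _query = new PopularProductsQuery(sessionBuilder);
        }

        public IList<PopularProduct> LastWeeksTopTen()
        {
            // The first call for a given (StartDate, MaxResults) pair hits the
            // database and warms the cache region; repeat calls with the same
            // parameters are served from the second-level cache until the
            // 86400-second expiration configured in <syscache> elapses.
            return _query.GetPopularProducts(DateTime.Today.AddDays(-7), 10);
        }
    }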

    Read the article

  • Easy BCD Help: Dual boot Win7 and Ubuntu 11.10 -- "Add new Entry" for Ubuntu

    - by Bradley Peterson
    I first had Ubuntu 11.10 installed on a single partition on my 750GB hard drive. I then partitioned the hard drive into 500GB (for Ubuntu) in ext4 format (what it already was from the clean install of Ubuntu) and 250GB for Win7 in NTFS format. Then I installed Win7 onto that 250GB partition. Installation went smoothly and I was successfully booted into Win7 and setting everything up. After I was done doing all the stupid updates from Microsoft, I thought I was done and wanted to go back to Ubuntu. This is where the problem starts. Of course I reboot and it goes directly to Win7. I research and find that Win7 has overwritten the Ubuntu bootloader, etc. etc.; I don't fully understand it. So:

    1. I download EasyBCD 2.1.2.
    2. In EasyBCD, I select "Add New Entry", select "Linux/BSD", change the type to "GRUB 2", and name it "Ubuntu".
    3. Next, I go to "BCD Deployment", select "Install the Windows Vista/7 bootloader to the MBR", and click "Write MBR".
    4. I reboot, select "Ubuntu", and the purple screen comes up, but NOTHING HAPPENS.
    5. If I hit Ctrl+Alt+Del, it goes to the login menu, where it acts normal for about 10-15 seconds, then freezes. It does this every time.

    MY QUESTION: What's wrong here? Why can't I load Ubuntu now? Am I going to have to reinstall Ubuntu with Windows, then set up the bootloader with EasyBCD instead of Ubuntu, THEN Win7? Any and all help is appreciated! -Brad

    Read the article

  • Conversion of TIFF image in Python script - OCR using Tesseract

    - by PYTHON TEAM
    I want to convert a TIFF image file to a text document. My code works as I expected for converting TIFF images with a usual font, but it is not working for a French script font. My TIFF image file contains text, and the font of the text is in French script format. Here is my code:

    import Image
    import subprocess
    import util
    import errors

    tesseract_exe_name = 'tesseract'  # Name of executable to be called at command line
    scratch_image_name = "temp.bmp"  # This file must be .bmp or other Tesseract-compatible format
    scratch_text_name_root = "temp"  # Leave out the .txt extension
    cleanup_scratch_flag = True  # Temporary files cleaned up after OCR operation

    def call_tesseract(input_filename, output_filename):
        """Calls external tesseract.exe on input file (restrictions on types),
        outputting output_filename+'txt'"""
        args = [tesseract_exe_name, input_filename, output_filename]
        # Note: the tesseract CLI also accepts a language flag (e.g. '-l fra'
        # appended to args), which is typically required for non-English text
        # and needs the corresponding language data to be installed.
        proc = subprocess.Popen(args)
        retcode = proc.wait()
        if retcode != 0:
            errors.check_for_errors()

    def image_to_string(im, cleanup=cleanup_scratch_flag):
        """Converts im to file, applies tesseract, and fetches resulting text.
        If cleanup=True, delete scratch files after operation."""
        try:
            util.image_to_scratch(im, scratch_image_name)
            call_tesseract(scratch_image_name, scratch_text_name_root)
            text = util.retrieve_text(scratch_text_name_root)
        finally:
            if cleanup:
                util.perform_cleanup(scratch_image_name, scratch_text_name_root)
        return text

    def image_file_to_string(filename, cleanup=cleanup_scratch_flag, graceful_errors=True):
        """Applies tesseract to filename; or, if the image is incompatible and
        graceful_errors=True, converts it to a compatible format and retries.
        If cleanup=True, delete scratch files after operation."""
        try:
            try:
                call_tesseract(filename, scratch_text_name_root)
                text = util.retrieve_text(scratch_text_name_root)
            except errors.Tesser_General_Exception:
                if graceful_errors:
                    im = Image.open(filename)
                    text = image_to_string(im, cleanup)
                else:
                    raise
        finally:
            if cleanup:
                util.perform_cleanup(scratch_image_name, scratch_text_name_root)
        return text

    if __name__ == '__main__':
        im = Image.open("/home/oomsys/phototest.tif")
        text = image_to_string(im)
        print text
        try:
            text = image_file_to_string('fnord.tif', graceful_errors=False)
        except errors.Tesser_General_Exception, value:
            print "fnord.tif is incompatible filetype. Try graceful_errors=True"
            print value
        text = image_file_to_string('fnord.tif', graceful_errors=True)
        print "fnord.tif contents:", text
        text = image_file_to_string('fonts_test.png', graceful_errors=True)
        print text

    Read the article

  • How to set Alpha value from pixel shader in SlimDX Direct3d9

    - by Yashwinder
    I am trying to set the alpha value of a color as color.a = 0.5f in my pixel shader, but it keeps throwing an exception. I can set color.r, color.g, and color.b, but it is not allowing me to set color.a, throwing the exception D3DERR_INVALIDCALL: Invalid call (-2005530516). I have just created a Direct3D9 device and assigned my pixel shader to it. My pixel shader code is below:

    sampler2D ourImage : register(s0);

    float4 main(float2 locationInSource : TEXCOORD) : COLOR
    {
        float4 color = tex2D(ourImage, locationInSource.xy);
        color.a = 0.2;
        return color;
    }

    I am creating my pixel shader as:

    byte[] byteCode = GiveFxFile(transitionEffect.PixelShaderFileName);
    var shaderBytecode = ShaderBytecode.Compile(byteCode, "main", "ps_2_0", ShaderFlags.None);
    var pixelShader = new PixelShader(device, shaderBytecode); // the compiled bytecode from above
    _device.PixelShader = pixelShader;

    I have initialized my device as:

    var _presentParams = new PresentParameters
    {
        Windowed = _isWindowedMode,
        BackBufferWidth = (int)SystemParameters.PrimaryScreenWidth,
        BackBufferHeight = (int)SystemParameters.PrimaryScreenHeight,
        // Enable Z-Buffer
        // This is not really needed in this sample but real applications generally use it
        EnableAutoDepthStencil = true,
        AutoDepthStencilFormat = Format.D16,
        // How to swap backbuffer in front and how many per screen refresh
        BackBufferCount = 1,
        SwapEffect = SwapEffect.Copy,
        BackBufferFormat = _direct3D.Adapters[0].CurrentDisplayMode.Format,
        PresentationInterval = PresentInterval.Immediate,
        DeviceWindowHandle = _windowHandle
    };
    _device = new Device(_direct3D, 0, DeviceType.Hardware, _windowHandle, deviceFlags | CreateFlags.Multithreaded, _presentParams);
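    One thing worth checking in this situation (a hedged aside, not necessarily the cause of the INVALIDCALL itself): in Direct3D9, alpha written by a pixel shader only has a visible effect if alpha blending is enabled on the device. A minimal SlimDX sketch, assuming the _device from the question:

    // Enable alpha blending so the shader's color.a actually affects the
    // framebuffer. RenderState and Blend are SlimDX.Direct3D9 types.
    _device.SetRenderState(RenderState.AlphaBlendEnable, true);
    _device.SetRenderState(RenderState.SourceBlend, Blend.SourceAlpha);
    _device.SetRenderState(RenderState.DestinationBlend, Blend.InverseSourceAlpha);

    // Set these once after device creation (or before the draw calls that need
    // transparency), then draw as usual with the pixel shader assigned.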

    Read the article

  • GPLv2 - Multiple AI chess engines to bypass GPL

    - by Dogbert
    I have gone through a number of GPL-related questions, the most recent being this one: http://stackoverflow.com/questions/3248823/legal-question-about-the-gpl-license-net-dlls/3249001#3249001 I'm trying to see how this would work, so bear with me. I have a simple GUI for a game of chess. It essentially sends/receives commands to/from an external chess engine (e.g. Tong, Fruit, etc.). The application/GUI is similar in nature to XBoard ( http://www.gnu.org/software/xboard/ ), but was independently designed. After going through a number of threads on this topic, it seems that the FSF considers dynamically linking against a GPLv2 library to create a derivative work, and that by doing so the GPLv2 extends to my proprietary code and I must release the source to my entire project. Other legal precedents indicate the opposite: that dynamic linking doesn't cause the "viral" effect of the GPL to propagate to my proprietary code. Since there is no official consensus that can give a hard-and-fast answer to the dynamic linking question, would this be an acceptable alternative:

    1. I build my chess GUI so that it sends/receives the chess engine AI logic as text commands through an external interface library that I write.
    2. The interface library I wrote is itself released under the GPL.
    3. The interface library is only used to communicate via a generic text pipe to external command-line chess engines.
    4. The chess engine itself would be built as a command-line utility rather than as a library of any sort, and would just send strings in the Universal Chess Interface or Chess Engine Communication Protocol ( http://en.wikipedia.org/wiki/Chess_Engine_Communication_Protocol ) format.

    The one "gotcha" is that the interface library should not be specific to one single GPL'ed chess engine, otherwise the entire GUI would be "entirely dependent" on it. So I just make my interface library able to connect to any command-line chess engine that uses a specific format, rather than just one unique engine. I could then include pre-built command-line versions of any of the chess engines I'm using. Would that sort of approach allow me to do the following:

    - NOT release the source for my UI
    - Release the source of the interface library I built (if necessary)
    - Use one or more chess engines and bundle them as external command-line utilities that ship with a binary version of my UI

    Thank you.

    Read the article

< Previous Page | 191 192 193 194 195 196 197 198 199 200 201 202  | Next Page >