Search Results

Search found 26993 results on 1080 pages for 'multiple insert'.


  • Amazon S3 Tips: Quickly Add/Modify HTTP Headers To All Files Recursively

    - by Gopinath
    Amazon S3 is a dirt-cheap cloud storage service that offers unlimited storage on a pay-as-you-use model. Recently we moved all the images and other static files (scripts & CSS) of Tech Dreams to Amazon S3 to reduce the load on our VPS server. Amazon S3 is cheap, but the monthly bill will shoot up if the blog’s images and static files are not cached properly (more details). By adding the caching HTTP headers Cache-Control or Expires to all the files hosted on Amazon S3, we reduced both our monthly bill and the load time of blog pages. Let’s see how to add custom headers to files stored on the Amazon S3 service.

    Updating HTTP headers of one file at a time
    The web interface of the Amazon S3 Management Console allows adding custom HTTP headers to one file at a time through the “Properties” window (to access properties, right-click on a file and select the Properties menu). So if you have to add headers to hundreds of files, this is not the way to go!

    Updating HTTP headers of multiple files of a folder recursively
    To update the HTTP headers of multiple files in a folder recursively, we can use the CloudBerry Explorer freeware or the Bucket Explorer trialware. CloudBerry is my favourite as it’s freeware and it has an excellent interface for accessing Amazon S3 from the desktop. Adding HTTP headers with the CloudBerry application is straightforward: right-click on the required folders and choose the option “Set HTTP Headers”. Download CloudBerry Explorer
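    The article’s approach is GUI-based (CloudBerry); as a rough programmatic equivalent, here is a minimal sketch using Python and boto3, which the article does not mention. The bucket name and max-age value are placeholders; the script rewrites each object’s metadata in place via a self-copy.

        import boto3

        BUCKET = "example-bucket"           # placeholder bucket name
        CACHE_CONTROL = "max-age=31536000"  # cache for one year

        s3 = boto3.client("s3")
        paginator = s3.get_paginator("list_objects_v2")

        for page in paginator.paginate(Bucket=BUCKET):
            for obj in page.get("Contents", []):
                key = obj["Key"]
                # Preserve the existing Content-Type: a REPLACE copy
                # discards all previous metadata.
                content_type = s3.head_object(Bucket=BUCKET, Key=key)["ContentType"]
                # Copy the object onto itself with the new caching header.
                s3.copy_object(
                    Bucket=BUCKET,
                    Key=key,
                    CopySource={"Bucket": BUCKET, "Key": key},
                    MetadataDirective="REPLACE",
                    CacheControl=CACHE_CONTROL,
                    ContentType=content_type,
                )
                print("updated", key)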

    Read the article

  • CodePlex Daily Summary for Tuesday, March 30, 2010

    CodePlex Daily Summary for Tuesday, March 30, 2010

    New Projects
    - CloudMail: Want to send email from Azure? Cloud Mail is designed to provide a small, effective and reliable solution for sending email from the Azure platfor...
    - CommunityServer Extensions: Here you can find some CommunityServer extensions and bug fixes. The main goal is to provide you with the ability to correct some common problems...
    - ContactSync: ContactSync is a set of .NET libraries, UI controls and applications for managing and synchronizing contact information. It includes managed wrapp...
    - Dng portal: DNG Portal, based on ASP.NET MVC
    - DotNetNuke Referral Tracker: The Referral Tracker module allows you to save URL variables, the referring page, and the previous page into a session variable or cookie. Then, th...
    - Foursquare for Windows Phone 7: Foursquare for Windows Phone 7.
    - GEGetDocConfig: SharePoint utility to list information concerning document libraries in one or more sites. Displays Size, Validity, Folder, Parent, Author, Minor a...
    - Google Maps API for .NET: Fast and lightweight client libraries for the Google Maps API.
    - kbcchina: kbc china
    - Load Test User Mock Toolkits: Purpose: this project is a framework mocking user activities with the VSTS Load Test tool to speed up test script development. It includes a generic framework for simulating user behaviour, which can simplify...
    - Resonance: Resonance is a system to train neural networks; it allows you to automate the training of a neural network, distributing the calculation on multiple machi...
    - SharePoint Company Directory / Active Directory Self Service System: This is a very simple system which was designed for a bank to allow users to update their contact information within SharePoint. Then this info ca...
    - SmartShelf: Manage files and folders on Windows and Windows Live.
    - sysFix: sysFix is a tool for system administrators to easily manage and fix common system errors.
    - xnaWebcam: Webcam usage in XNA GameStudio 3.1

    New Releases
    - All-In-One Code Framework: All-In-One Code Framework 2010-03-29: Improved and newly added examples. For an up-to-date list, please refer to the All-In-One Code Framework Sample Catalog. Samples for Azure Name Des...
    - ARSoft.Tools.Net - C# DNS and SPF Library: 1.3.0: Added support for creating own DNS server implementations. Added full IPv6 support to the DNS client. Some performance optimizations to the DNS client.
    - Artefact Animator: Artefact Animator - Silverlight 3 and WPF .Net 3.5: Artefact Animator version 2.0.4.1. Silverlight 3 ArtefactSL.dll for both Debug and Release. WPF 3.5 Artefact.dll for both Debug and Release.
    - BatterySaver: Version 0.4: Added support for a system tray icon for controlling the application and switching profiles (Issue)
    - BizTalk Server 2006 Orchestration Profiler: Profiler v1.2: This is a point release of the profiler and has been updated to work on 64 bit systems. No other new functionality is available. To use this ensure...
    - CloudMail: CloudMail_0.5_beta: Initial public release. For documentation see http://cloudmail.codeplex.com/documentation.
    - CycleMania Starter Kit EAP - ASP.NET 4 Problem - Design - Solution: Cyclemania 0.08.44: See Source Code tab for recent change history.
    - Dawf: Dual Audio Workflow: Beta 2: Fix little bugs and improve usability by changing the way it finds the good audio.
    - DotNetNuke Referral Tracker: 2.0.1: First release
    - Foursquare for Windows Phone 7: Foursquare 2010.03.29.02
    - GEGetDocConfig: GEGETDOCCONFIG.ZIP: Installation: simply download the zip file and extract the executable into its own directory on the SharePoint front end server. Note: There will b...
    - GKO Libraries: GKO Libraries 0.2 Beta: Added: binary search; unmanaged wrappers, interop and pinvoke functions and structures; Windows service wrapper; video mode helpers and more...
    - Google Maps API for .NET: GoogleMapsForNET v0.9 alpha release: First version, contains Core library featuring: Geocoding API, Elevation API, Static Maps API
    - Google Maps API for .NET: GoogleMapsForNET v0.9.1 alpha release: Fixed dependencies issues; added NUnit binaries and updated Newtonsoft Json library.
    - Google Maps API for .NET: GoogleMapsForNET v0.9.2a alpha release: Recommended update. Code clean-up; did refactoring and major interface changes in Static Maps because it wasn't aligned to the 'simplest and least r...
    - Home Access Plus+: v3.2.0.0: v3.2.0.0 release change log: more AJAX to reduce page refreshes (Deleting, New Folder, Rename moved from browser popups); only 3 browser popups (1...
    - Html to OpenXml: HtmlToOpenXml 1.1: The dll library to include in your project. The dll is signed for GAC support. Compiled with .Net 3.5; dependencies on System.Drawing.dll and Docu...
    - Latent Semantic Analysis: Latest sources: Just the latest sources. Just click the changeset. Please note that in order to compile this code you need to download some additional code. You ...
    - Load Test User Mock Toolkits: Load Test User Mock Toolkits Help Doc: Samples and the framework introduction (includes the framework introduction and typical examples).
    - Load Test User Mock Toolkits: Open.LoadTest.User.Mock.Toolkits 1.0 alpha: This is a pre-release version; performance has not yet been optimized, and the framework is being refactored.
    - Mobile Broadband Logging Monitor: Mobile Broadband Logging Monitor 1.2.4: This edition supports: newer and older editions of Birdstep Technology's EasyConnect, HUAWEI Mobile Partner, MWConn, user defined location for s...
    - Nito.KitchenSink: Version 3: Added Encoding.GetString(Stream, bool) for converting an entire stream into a string. Changed Stream.CopyTo to allow the stream to be closed/abor...
    - Numina Application Framework: Numina.Framework Core 49088: Fixed bug with headers introduced in rev. 48249 with change to HttpUtil class. admin/User_Pending.aspx page users weren't able to be deleted. Do...
    - OAuthLib: OAuthLib (1.6.4.0): Fix for 6390: make it possible to configure the time out value.
    - Quack Quack Says the Duck: Quack Quack Says The Duck 1.1.0.0: This new release pushes some work onto a background thread, clearing issues with multiple screen clicks while the UI was blocking.
    - Rapidshare Episode Downloader: RED v0.8.4: Added Edit feature; moved season & episode int-to-string conversion into a separate function; fixed some more minor issues; added 'Previous' feature; f...
    - RoTwee: RoTwee (8.1.3.0): Update OAuthLib to 1.6.4.0
    - SharePoint Company Directory / Active Directory Self Service System: SharePoint Company Directory with AD Import: This is a very simple system which was designed for a bank to allow users to update their contact information within SharePoint. Then this info ca...
    - Simply Classified: v1.00.12: Comsite Simply Classified v1.00.12 - STABLE - tested against DotNetNuke v4.9.5 and v5.2.x. Bug fixes/enhancements: BUGFIX: resolved issues with 1...
    - sPATCH: sPatch v0.9: Completely recoded with wxWidgets. The following content differs from .NET Patcher: no requirement for the .NET Framework; manual patch was removed to av...
    - SSAS Profiler Trace Scheduler: SSAS Profiler Trace Scheduler: AS Profiler Scheduler is a tool that enables scheduling of SQL AS tracing using predefined Profiler templates. For tracking different issues th...
    - sysFix: sysfix build v5: A stable beta release; please refer to the home page for further details.
    - VOB2MKV: vob2mkv-1.0.4: This is a feature update of the VOB2MKV utility. The command-line parsing in the VOB2MKV application has been greatly improved. You can now get f...
    - xnaWebcam: xnaWebcam 0.1: Version 0.1: show webcam device; Draw.String WebcamTexture.VideoDevice.Name.ToString(). Instructions: 1. Plug in your webca...
    - xnaWebcam: xnaWebcam 0.2: Version 0.2: setResolution; Keys.Escape: this.Exit() << exit the game/application. --- Version 0.1: show webcam device; Draw.Strin...
    - xnaWebcam: xnaWebcam 0.21: Version 0.21: Fix: don't quit game/application after closing mainGameWindow; Fix: text position; Window.X, Window.Y --- Version 0.2...
    - Xploit Game Engine: Xploit_1_1 Release: Added features: multiple mesh instancing.
    - Xploit Game Engine: Xploit_1_1 Source Code: Updates: create multiple instances of the same mesh using XModelMesh and XSkinnedMesh.
    - Yakiimo3D: DX11 DirectCompute Buddhabrot Source and Binary: DX11 DirectCompute Buddhabrot/Nebulabrot source and binary.

    Most Popular Projects
    - Rawr
    - WBFS Manager
    - ASP.NET Ajax Library
    - Microsoft SQL Server Product Samples: Database
    - Silverlight Toolkit
    - AJAX Control Toolkit
    - Windows Presentation Foundation (WPF)
    - LiveUpload to Facebook
    - ASP.NET
    - Microsoft SQL Server Community & Samples

    Most Active Projects
    - Rawr
    - jQuery Library for SharePoint Web Services
    - BlogEngine.NET
    - LINQ to Twitter
    - Managed Extensibility Framework
    - Microsoft Biology Foundation
    - Farseer Physics Engine
    - N2 CMS
    - NB_Store - Free DotNetNuke Ecommerce Catalog Module
    - patterns & practices – Enterprise Library

    Read the article

  • Enhanced REST Support in Oracle Service Bus 11gR1

    - by jeff.x.davies
    In a previous entry on REST and Oracle Service Bus (see http://blogs.oracle.com/jeffdavies/2009/06/restful_services_with_oracle_s_1.html) I encoded the REST query string as part of the relative URL. For example, consider the following URI: http://localhost:7001/SimpleREST/Products/id=1234. Now, technically there is nothing wrong with this approach. However, it is generally more common to encode the search parameters into the query string. Take a look at the following URI that shows this principle: http://localhost:7001/SimpleREST/Products?id=1234. At first blush this appears to be a trivial change. However, this approach is more intuitive, especially if you are passing in multiple parameters. For example: http://localhost:7001/SimpleREST/Products?cat=electronics&subcat=television&mfg=sony. The above URI is obviously used to retrieve a list of televisions made by Sony. In prior versions of OSB (before 11gR1PS3), parsing the query string of a URI was more difficult than in the current release. In 11gR1PS3 it is now much easier to parse query strings, which in turn makes developing REST services in OSB even easier. In this blog entry, we will re-implement the RESTful Products service using query strings for passing parameter information.

    Let's begin with the implementation of the Products REST service. This service is implemented in the Products.proxy file of the project. Let's begin with the overall structure of the service, as shown in the following screenshot. This is a common pattern for REST services in the Oracle Service Bus: you implement a different flow for each of the HTTP verbs that you want your service to support. Let's take a look at how the GET verb is implemented. This is the path that is taken if you were to point your browser to: http://localhost:7001/SimpleREST/Products/id=1234. There is an Assign action in the request pipeline that shows how to extract a query parameter. Here is the expression that is used to extract the id parameter: $inbound/ctx:transport/ctx:request/http:query-parameters/http:parameter[@name="id"]/@value. The Assign action stores the value in an OSB variable named id. Using this type of XPath statement you can query for any variable by name, without regard to its order in the parameter list. The Log statement is there simply to provide some debugging info in the OSB server console. The response pipeline contains a Replace action that constructs the response document for our REST service. Most of the response data is static, but the ID field that is returned is set based upon the query parameter that was passed into the REST proxy.

    Testing the REST service with a browser is very simple. Just point it to the URL I showed you earlier. However, the browser is really only good for testing simple GET services. The OSB Test Console provides a much more robust environment for testing REST services, no matter which HTTP verb is used. Let's see how to use the Test Console to test this GET service. Open the OSB web console (http://localhost:7001/sbconsole) and log in as the administrator. Click on the Test Console icon (the little "bug") next to the Products proxy service in the SimpleREST project. This will bring up the Test Console browser window. Unlike SOAP services, we don't need to do much work in the request document because all of our request information will be encoded into the URI of the service itself. Below the Request Document section of the Test Console is the Transport section. Expand that section and modify the query-parameters and http-method fields as shown in the next screenshot. By default, the query-parameters field will have the tags already defined. You just need to add a tag for each parameter you want to pass into the service. For our purposes with this particular call, you'd set the query-parameters field as follows:

        <tp:query-parameters>
            <tp:parameter name="id" value="1234" />
        </tp:query-parameters>

    Now you are ready to push the Execute button to see the results of the call.

    That covers the process for parsing query parameters using OSB. However, what if you have an OSB proxy service that needs to consume a RESTful service? How do you tell OSB to pass the query parameters to the external service? In the sample code you will see a second proxy service called CallREST. It invokes the Products proxy service in exactly the same way it would invoke any REST service. Our CallREST proxy service is defined as a SOAP service. This helps to demonstrate OSB's ability to mediate between service consumers and service providers, decreasing the level of coupling between them. If you examine the message flow for the CallREST proxy service, you'll see that it uses an Operational branch to isolate the processing logic for each operation that is defined by the SOAP service. We will focus on the getProductDetail branch, which calls the Products REST service using the HTTP GET verb. Expand the getProduct pipeline and the stage node that it contains. There is a single Assign statement that simply extracts the productID from the SOAP request and stores it in a local OSB variable. Nothing surprising here. The real work (and the real learning) occurs in the Route node below the pipeline. The first thing to learn is that you need to use a Route node when calling REST services, not a Service Callout or a Publish action. That's because only the Routing action has access to the $outbound variable, especially when invoking a business service. The Routing action contains 3 Insert actions. The first Insert action shows how to specify the HTTP verb as a GET. The second Insert action simply inserts the query-parameters node into the request. This element does not exist in the request by default, so we need to add it manually. Now that we have the element defined in our outbound request, we can fill it with the parameters that we want to send to the REST service. In the following screenshot you can see how we define the id parameter based on the productID value we extracted earlier from the SOAP request document. That expression will look for the parameter that has the name id and extract its value. That's all there is to it. You now know how to take full advantage of the query parameter parsing capability of the Oracle Service Bus 11gR1PS3. Download the sample source code here: rest2_sbconfig.jar

    Ubuntu and the OSB Test Console
    You will get an error when you try to use the Test Console with the Oracle Service Bus using Ubuntu (and likely a number of other Linux distros too). The error (shown below) will state that the Test Console service is not running. The fix for this problem is quite simple. Open up the WebLogic Server administration console (usually running at http://localhost:7001/console). In the Domain Structure window on the left side of the console, select the Servers entry under the Environment heading. Then select the Admin Server entry in the main window of the console. By default, you should be viewing the Configuration tab and the General sub-tab in the main window. Look for the Listen Address field. By default it is blank, which means it is listening on all interfaces. For some reason Ubuntu doesn't like this. So enter a value like localhost or the specific IP address or DNS name for your server (usually it's just localhost in development environments). Save your changes and restart the server. Your Test Console will now work correctly.

    Read the article

  • Using a "white list" for extracting terms for Text Mining

    - by [email protected]
    In Part 1 of my post on "Generating cluster names from a document clustering model" (part 1, part 2, part 3), I showed how to build a clustering model from text documents using Oracle Data Miner, which automates preparing data for text mining. In this process we specified a custom stoplist and lexer and relied on Oracle Text to identify important terms. However, there is an alternative approach, the white list, which uses a thesaurus object with the Oracle Text CTXRULE index to allow you to specify the important terms.

    INTRODUCTION
    A stoplist is used to exclude, i.e., black list, specific words in your documents from being indexed. For example, words like a, if, and, or, and but normally add no value when text mining. Other words can also be excluded if they do not help to differentiate documents, e.g., the word Oracle is ubiquitous in the Oracle product literature. One problem with stoplists is determining which words to specify. This usually requires inspecting the terms that are extracted, manually identifying which ones you don't want, and then re-indexing the documents to determine if you missed any. Since a corpus of documents could contain thousands of words, this could be a tedious exercise. Moreover, since every word is considered as an individual token, a term excluded in one context may be needed to help identify a term in another context. For example, in our Oracle product literature example, the words "Oracle Data Mining" taken individually are not particularly helpful. The term "Oracle" may be found in nearly all documents, as with the term "Data." The term "Mining" is more unique, but could also refer to the mining industry. If we exclude "Oracle" and "Data" by specifying them in the stoplist, we lose valuable information. But if we include them, they may introduce too much noise. Still, when you have a broad vocabulary or don't have a list of specific terms of interest, you rely on the text engine to identify important terms, often by computing the term frequency - inverse document frequency metric. (This is effectively a weight associated with each term indicating its relative importance in a document within a collection of documents. We'll revisit this later.) The results using this technique are often quite valuable.

    As noted above, an alternative to the subtractive nature of the stoplist is to specify a white list, or a list of terms--perhaps multi-word--that we want to extract and use for data mining. The obvious downside to this approach is the need to specify the set of terms of interest. However, this may not be as daunting a task as it seems. For example, in a given domain (Oracle product literature), there is often a recognized glossary, or a list of keywords and phrases (Oracle product names, industry names, product categories, etc.). Being able to identify multi-word terms, e.g., "Oracle Data Mining" or "Customer Relationship Management", as a single token can greatly increase the quality of the data mining results. The remainder of this post and subsequent posts will focus on how to produce a dataset that contains white list terms, suitable for mining.

    CREATING A WHITE LIST
    We'll leverage the thesaurus capability of Oracle Text. Using a thesaurus, we create a set of rules that are in effect our mapping from single and multi-word terms to the tokens used to represent those terms. For example, "Oracle Data Mining" becomes "ORACLEDATAMINING." First, we'll create and populate a mapping table called my_term_token_map. All text has been converted to upper case and values in the TERM column are intended to be mapped to the token in the TOKEN column.

        TERM                                TOKEN
        DATA MINING                         DATAMINING
        ORACLE DATA MINING                  ORACLEDATAMINING
        11G                                 ORACLE11G
        JAVA                                JAVA
        CRM                                 CRM
        CUSTOMER RELATIONSHIP MANAGEMENT    CRM
        ...

    Next, we'll create a thesaurus object my_thesaurus and a rules table my_thesaurus_rules:

        CTX_THES.CREATE_THESAURUS('my_thesaurus', FALSE);

        CREATE TABLE my_thesaurus_rules (main_term     VARCHAR2(100),
                                         query_string  VARCHAR2(400));

    We next populate the thesaurus object and rules table using the term token map. A cursor is defined over my_term_token_map. As we iterate over the rows, we insert a synonym relationship 'SYN' into the thesaurus. We also insert into the table my_thesaurus_rules the main term, and the corresponding query string, which specifies synonyms for the token in the thesaurus.

        DECLARE
          cursor c2 is
            select token, term
            from my_term_token_map;
        BEGIN
          for r_c2 in c2 loop
            CTX_THES.CREATE_RELATION('my_thesaurus',r_c2.token,'SYN',r_c2.term);
            EXECUTE IMMEDIATE 'insert into my_thesaurus_rules values
                               (:1,''SYN(' || r_c2.token || ', my_thesaurus)'')'
            using r_c2.token;
          end loop;
        END;

    We are effectively inserting the token to return and the corresponding query that will look up synonyms in our thesaurus into the my_thesaurus_rules table, for example:

        'ORACLEDATAMINING'        SYN ('ORACLEDATAMINING', my_thesaurus)

    At this point, we create a CTXRULE index on the my_thesaurus_rules table:

        create index my_thesaurus_rules_idx on
               my_thesaurus_rules(query_string)
               indextype is ctxsys.ctxrule;

    In my next post, this index will be used to extract the tokens that match each of the rules specified. We'll then compute the tf-idf weights for each of the terms and create a nested table suitable for mining.
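    The tf-idf weighting itself is deferred to the next post; purely as an illustration of the metric described above (not of Oracle Text's implementation), here is a minimal Python sketch that weights each token by term frequency times inverse document frequency, reusing tokens from the mapping table as sample data.

        import math
        from collections import Counter

        def tf_idf(docs):
            """docs: list of token lists; returns one {token: weight} dict per doc."""
            n_docs = len(docs)
            # Document frequency: in how many documents each token appears.
            df = Counter(tok for doc in docs for tok in set(doc))
            weights = []
            for doc in docs:
                tf = Counter(doc)
                weights.append({
                    # A token present in every document gets weight 0 (log 1),
                    # echoing the "ubiquitous term" problem described above.
                    tok: (count / len(doc)) * math.log(n_docs / df[tok])
                    for tok, count in tf.items()
                })
            return weights

        docs = [["ORACLEDATAMINING", "CRM", "JAVA"],
                ["CRM", "CRM", "ORACLE11G"],
                ["JAVA", "ORACLEDATAMINING", "DATAMINING"]]
        for w in tf_idf(docs):
            print(w)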

    Read the article

  • Stupid Geek Tricks: Compare Your Browser’s Memory Usage with Google Chrome

    - by The Geek
    Ever tried to figure out exactly how much memory Google Chrome or Internet Explorer is using? Since they each show up a bunch of times in Task Manager, it’s not so easy! Here’s the quick and easy way to compare them. Both Chrome and IE use multiple processes to isolate tabs from each other, to make sure that one tab doesn’t kill the whole browser. Firefox, on the other hand, just uses a single process for everything. Rather than pulling out a calculator and adding them all up, you can just open up Google Chrome and type about:memory into the location bar to see a full list of each browser’s memory usage. On my test system with 6 GB of RAM, I’m running the Development channel version of Chrome, and I’ve got about 40 different tabs open, which is why the memory usage is so high. Firefox has 8 tabs open, and IE is enjoying being opened for the first time in forever. Want to help cut down on memory usage and keep your Chrome browser running fast? Disable all unnecessary extensions, and then make sure you disable any plug-ins that you don’t need either.

    Read the article

  • Make flash ignore transparent wmode — always display opaque background

    - by Tometzky
    How can I make a Flash movie (an advertising banner) ignore <param name="wmode" value="transparent">? There are some CMS systems which insert Flash movies automatically with the transparent wmode option. Flash Player then ignores the banner's background color, makes it transparent, and displays it against the web page background. I can work around it using an additional layer at the bottom with a large rectangle of the desired color, but I think that is inefficient and inelegant. How can I do this better?

    Read the article

  • Getting WCF Services in a Silverlight solution to play nice on deployment

    - by brendonpage
    I have come across two issues with deploying WCF services in a Silverlight solution; admittedly one is more of a hiccup, and only occurs if you take the easy way out and reference your services through Visual Studio.

    The First Issue
    This occurs when you deploy your WCF services to an IIS server. When you browse to the services using your web browser, you are greeted with “This collection already contains an address with scheme http. There can be at most one address per scheme in this collection.”. When you make a call to this service from your Silverlight application, you get the extremely helpful “NotFound” error; this error message can be found in the error property of the event arguments on the complete event handler for that call. As it did with me, this will leave most people scratching their heads, because the very same services work just fine on the ASP.NET Development Web Server and on my local IIS server. Now I’m no server/hosting/IIS expert, so I did a bit of searching when I first encountered this issue. I found out this happens because IIS supports multiple address bindings per protocol (http/https/ftp … etc) per web site, but WCF only supports binding to one address per protocol. This causes a problem when the WCF service is hosted on a site with multiple address bindings, because IIS provides all of the bindings to the host factory when running the service. While this problem occurs mainly on shared hosting solutions, it is not limited to shared hosting; it just seems like all shared hosting providers set up sites on their servers with multiple address bindings. For interest’s sake, I added functionality to the example project attached to this post to dump the addresses given to the WCF service by IIS into a log file. This was the output on the shared hosting solution I use:

        http://mydomain.co.za/Services/TestService.svc
        http://www.mydomain.co.za/Services/TestService.svc
        http://mydomain-co-za.win13.wadns.net/Services/TestService.svc
        http://win13/Services/TestService.svc

    As you can see, all these addresses are for the http protocol, which is where it all goes wrong for WCF.

    Fixes for the First Issue
    There are a few ways to get around this. The first is the easiest: target .NET 4! Yes, that’s right: in .NET 4, WCF services support multiple addresses per protocol. This functionality is enabled by an option which is on by default if you create a new project, and which you will need to turn on if you are upgrading to .NET 4. To do this, set the multipleSiteBindingsEnabled property of the serviceHostingEnvironment tag in the web.config file to true, as shown below:

        <system.serviceModel>
            <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
        </system.serviceModel>

    Beware: this ONLY works in .NET 4, so if you don’t have a server with .NET 4 installed that you can deploy to, you will need to employ one of the other workarounds.

    The second option will work for .NET 3.5 & 4. For this option all you need to do is modify the web.config file and add baseAddressPrefixFilters to the serviceHostingEnvironment tag as shown below:

        <system.serviceModel>
            <serviceHostingEnvironment>
                <baseAddressPrefixFilters>
                    <add prefix="http://www.mydomain.co.za"/>
                </baseAddressPrefixFilters>
            </serviceHostingEnvironment>
        </system.serviceModel>

    These will be used to filter the list of base addresses that IIS provides to the host factory. When specifying these prefix filters, be sure to specify filters which will only allow one result through, otherwise the entire exercise will be pointless. There is however a problem with this workaround: you are only allowed to specify one prefix filter per protocol, which means you can’t add filters for all your environments. This therefore adds to the list of things to do before deploying or switching dev machines.

    The third option is the one I currently employ. It will work for .NET 3, 3.5 & 4, although it is not needed for .NET 4. For this option you create a custom host factory which inherits from the ServiceHostFactory class. In the implementation of the ServiceHostFactory you employ logic to figure out which of the base addresses, given by IIS, to use when creating the service host. The logic you use to do this is completely up to you. I have seen quite a few solutions that simply statically reference an index from the list of base addresses; this works for most situations but falls short in others. For instance, if the order of the base addresses were to change, it might end up returning an address that only resolves on the server’s local network, like the last one in the example I gave at the beginning. Another instance: if a request comes in on a different protocol, like https, you will be creating the service host using an address which is on the incorrect protocol, like http. To reliably find the correct address to use, I use the address that the service was requested on. To accomplish this I use the HttpContext, which requires the service to operate with AspNetCompatibilityRequirements set on. If for some reason running your services with AspNetCompatibilityRequirements on isn’t an option, you can still use this method; you will just have to come up with your own logic for selecting the correct address. First you will need to enable ASP.NET compatibility for your hosting environment, by setting aspNetCompatibilityEnabled to true in the web.config file as shown below:

        <system.serviceModel>
            <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
        </system.serviceModel>

    You will then need to mark any services that are going to use the custom host factory to allow AspNetCompatibilityRequirements, as shown below:

        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
        public class TestService
        {
        }

    Now for the custom host factory. This is where the logic lives that selects the correct address to create the service host with. The one I use is shown below:

        public class CustomHostFactory : ServiceHostFactory
        {
            protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
            {
                //
                // Compose a prefix filter based on the requested uri
                //
                string prefixFilter = HttpContext.Current.Request.Url.Scheme + "://" +
                                      HttpContext.Current.Request.Url.DnsSafeHost;
                if (!HttpContext.Current.Request.Url.IsDefaultPort)
                {
                    prefixFilter += ":" + HttpContext.Current.Request.Url.Port.ToString() + "/";
                }
                //
                // Find a base address that matches the prefix filter
                //
                foreach (Uri baseAddress in baseAddresses)
                {
                    if (baseAddress.OriginalString.StartsWith(prefixFilter))
                    {
                        return new ServiceHost(serviceType, baseAddress);
                    }
                }
                //
                // Throw exception if no matching base address was found
                //
                throw new Exception("Custom Host Factory: No base address matching '" + prefixFilter + "' was found.");
            }
        }

    The most important line in the custom host factory is the one that returns a new service host. This has to return a service host that specifies only one base address per protocol. Since I filter by the address the request came in on, I only need to create the service host with one address, since this address will always be of the correct protocol. Now that you have a custom host factory, you have to tell your services to use it. To do this you view the markup of the service by right-clicking on it in the solution explorer and choosing “View Markup”. Then you add/set the value of the Factory property to the full namespace path of your custom host factory, as shown below. And that’s it: the service will now use the specified custom host factory.

    The Second Issue
    As I mentioned earlier, this issue is more of a hiccup, but I thought it worthy of a mention so I included it. This issue only occurs when you add a service reference to a Silverlight project. Visual Studio will generate a lot of code for you; part of that generated code is the ServiceReferences.ClientConfig file. This file stores the endpoint configuration that is used when accessing your services using the generated proxy classes. Here is what that file looks like:

        <configuration>
            <system.serviceModel>
                <bindings>
                    <customBinding>
                        <binding name="CustomBinding_TestService">
                            <binaryMessageEncoding />
                            <httpTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" />
                        </binding>
                        <binding name="CustomBinding_BrokenService">
                            <binaryMessageEncoding />
                            <httpTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" />
                        </binding>
                    </customBinding>
                </bindings>
                <client>
                    <endpoint address="http://localhost:49347/services/TestService.svc"
                        binding="customBinding" bindingConfiguration="CustomBinding_TestService"
                        contract="TestService.TestService" name="CustomBinding_TestService" />
                    <endpoint address="http://localhost:49347/Services/BrokenService.svc"
                        binding="customBinding" bindingConfiguration="CustomBinding_BrokenService"
                        contract="BrokenService.BrokenService" name="CustomBinding_BrokenService" />
                </client>
            </system.serviceModel>
        </configuration>

    As you will notice, the addresses for the endpoints are set to the addresses of the services you added the service references from, so unless you are adding the service references from your live services, you will have to change these addresses before you deploy. This is little more than an annoyance really, but it adds to the list of things to do before you can deploy, and if left unchecked that list can get out of control.

    Fix for the Second Issue
    The way you would usually access a service added this way is to create an instance of the proxy class like so:

        BrokenServiceClient proxy = new BrokenServiceClient();

    Closer inspection of these generated proxy classes reveals that there are a few overloaded constructors, one of which allows you to specify the endpoint address to use when creating the proxy. From here all you have to do is come up with some logic that will provide you with the relative path to your services. Since my WCF services are usually hosted in the same project as my Silverlight app, I use the class shown below:

        public class ServiceProxyHelper
        {
            /// <summary>
            /// Create a broken service proxy
            /// </summary>
            /// <returns>A broken service proxy</returns>
            public static BrokenServiceClient CreateBrokenServiceProxy()
            {
                Uri address = new Uri(Application.Current.Host.Source, "../Services/BrokenService.svc");
                return new BrokenServiceClient("CustomBinding_BrokenService", address.AbsoluteUri);
            }
        }

    Then I will create an instance of the proxy class using my service helper class like so:

        BrokenServiceClient proxy = ServiceProxyHelper.CreateBrokenServiceProxy();

    The way this works is that Application.Current.Host.Source returns the URL of the ClientBin folder the Silverlight app is hosted in; “../Services/BrokenService.svc” is then used as the relative path to the service from the ClientBin folder, and combined via the Uri object this gives me the URL to my service. The “CustomBinding_BrokenService” is a reference to the endpoint configuration in the ServiceReferences.ClientConfig file. Yes, this means you still need the ServiceReferences.ClientConfig file. All this is doing is using a different endpoint address than the one specified in the ServiceReferences.ClientConfig file; all the other settings from the ServiceReferences.ClientConfig file are still used when creating the proxy. I have uploaded an example project which covers the custom host factory solution from the first issue and everything from the second issue. I included the code to write a list of base addresses to a log file in my implementation of the custom host factory; this is not needed for the custom host factory to function and can safely be removed. Download (WCFServicesDeploymentExample.zip)

    Read the article

  • Safari Mobile Multi-Line <Select> aka GWT Multi-Line ListBox

    - by McTrafik
    Hi guys. Working on a webapp here that must run on the iPad (so, Safari Mobile). I have this code that works fine in just about anything except the iPad:

        <select class="gwt-ListBox" size="12" multiple="multiple">
            <option value="Bleeding Eyelashes">Bleeding Eyelashes</option>
            <option value="Smelly Pupils">Smelly Pupils</option>
            <option value="Bushy Eyebrows">Bushy Eyebrows</option>
            <option value="Green Vessels">Green Vessels</option>
            <option value="Sucky Noses">Sucky Noses</option>
        </select>

    What it's supposed to look like is a box with 12 lines and 5 of them filled up. It works fine in FF, IE, Chrome, and Safari for Windows. But when I open it on the iPad, it's just a single line! Styling it with CSS doesn't work; it just makes the single line bigger if I set the height. Is there a way to make it behave the same way as in normal browsers, or do I have to make a custom component? Thanks.

    Read the article

  • CodePlex Daily Summary for Friday, July 19, 2013

    CodePlex Daily Summary for Friday, July 19, 2013

    Popular Releases
    - uComponents: uComponents v5.5.0: Following on from our 101355 release, we bring you... v5.5.0! Resolved issues: the following issues have been resolved in 5.5.0: 14634, 14848, DataType Grid 14840, 14842, 14845. Upgrading: the best way to upgrade uComponents is to re-install via Umbraco's back-office package installer (either from the package repository or a local package upload). Do not uninstall an existing uComponents package, as this may lead to data loss! Umbraco compatibility: this release is compatible with Umbraco 4...
    - 51Degrees.mobi - Mobile Device Detection and Redirection: 2.1.19.4: One Click Install from NuGet. This release introduces the 51Degrees.mobi IIS Vary Header Fix. When compression and caching are used in IIS, the Vary header is overwritten, making intelligent caching with dynamic content impossible. Find out more about installing the Vary Header fix. Changes to version 2.1.19.4: handlers now have a 'Count' property, an integer value that shows how many devices in the dataset use that handler. Provider.cs -> GetDeviceInfoByID to address a problem w...
    - SalarDbCodeGenerator: SalarDbCodeGenerator v2.1.2013.0719: Version 2.1.2013.0719, 2013/7/19. Pattern changes: DapperContext pattern is added; all patterns are updated to work with one-to-one relations. Changes: one-to-one relation is supported; minor bug fixes.
    - Scryber PDF Generation: Scryber.Installer.v0.8.3: This one has tables! A significant update to the scryber library, with the inclusion of tables. There is no column span or row span, and nested tables are not fully supported. But they overflow nicely and are well styled. And they can be data bound. Other updates: the Link element has inner components directly as a child, rather than in a Content element. The existing files will still continue to work, but are invalid wrt the schema. Databinding is now supported at the document level, r...
    - Player Framework by Microsoft: Player Framework for Windows and WP (v1.3 beta 2): Includes all changes in v1.3 beta 1. Additional support for Windows 8.1 Preview. New APIs (JS): addTextTrack, msKeys, msPlayToPreferredSourceUri, msSetMediaKeys, onmsneedkey. New APIs (Xaml): SetMediaStreamSource method, Stretch property, StretchChanged event, AreTransportControlsEnabled property, IsFullWindow property, PlayToPreferredSourceUri proper...
    - Outlook 2013 Add-In: Multiple Calendars: As per popular request, this new version includes support for multiple calendars. This can be enabled in the configuration by choosing which ones to show/hide appointments from. In some cases (public folders) it may time out and crash, and so far it only supports "My Calendars", not shared ones yet. Also they're currently shown in the same font/color so there are no confusions with color categories, but please drop me a line on any suggestions you'd like to see implemented. Added fri...
    - GoAgent GUI: GoAgent GUI 1.2.0: Beta feedback: https://goagent.codeplex.com/workitem/list/basic. Contact: uygw@outlook.com. Documentation: https://goagent.codeplex.com/documentation.
    - Circuit Diagram: Circuit Diagram 2.0 Beta 2: New in this release: show grid in editor; cut/copy/paste support; bug fixes.
    - Ascend 3D: Ascend (2013-07-17): Ascend 2.1: animation bugfixes and speed/robustness improvements; added compatibility for multiple top-level bones on armatures. Ascend Blender Exporter 2.1: removed the need to create a parent-child relationship between armature and skinning mesh; added support for exporting multiple top-level bones on armatures. AscendViewer 1.0.4: updated to use the latest version of the Ascend library.
    - Community TFS Build Extensions: July 2013: The July 2013 release contains VS2010 Activities (target .NET 4.0), VS2012 Activities (target .NET 4.5), VS2013 Activities (target .NET 4.5.1), and the Community TFS Build Manager VS2012. The Community TFS Build Manager can also be found in the Visual Studio Gallery, where updates will first become available. A version supporting VS2010 is also available in the Gallery.
    - smoketestgit1: new release: Wiki Markup Guide bold italics underline Heading 1 Heading 2 Bullet List Bullet List 2 Number List Number List 2 http://www.example.com Do not apply formatting
    - smoketesthg: new release: Wiki Markup Guide bold italics underline Heading 1 Heading 2 Bullet List Bullet List 2 Number List Number List 2 http://www.example.com Do not apply formatting
    - smoketesttfs: new release: Wiki Markup Guide bold italics underline Heading 1 Heading 2 Bullet List Bullet List 2 Number List Number List 2 http://www.example.com Do not apply formatting
    - datajs - JavaScript Library for data-centric web applications: datajs version 1.1.1: datajs is a cross-browser and UI agnostic JavaScript library that enables data-centric web applications with the following features: an OData client that enables CRUD operations including batching and metadata support using both ATOM and JSON payloads; a single store abstraction that provides a common API on top of HTML5 local storage technologies; a data cache component that allows reading data ranges from a collection and storing them locally to reduce the number of network requests. Changes ...
    - Orchard Project: Orchard 1.7 RC: Planning released
    - Terminals: Version 3.1 - Release: Changes since version 3.0: 15992 unified usage of icons in user interface; added context menu in Organize favorites grid. Fixed: 34219, 34210, 34223, 33981, 34209. Install notes: no changes in database (use database from release 3.0); no upgrade of configuration, passwords, credentials or favorites. See also the upgrade notes for release 3.0.
    - Media Companion: Media Companion MC3.573b: XBMC Link - let MC update your XBMC library. Fixes are in place; enjoy the XBMC Link function. Well, Phil's been busy in the background, and has come up with a great new feature for Media Companion. Currently only implemented for movies. Once we're happy that's working with no issues, we'll extend the functionality to include TV shows. All the help for this is built into the application. Go to General Preferences - XBMC Link for details. Help us make it better. Currently only tested on local and ...
    - Wsus Package Publisher: Release v1.2.1307.15: Fix a bug where WPP crashes if 'ShowPendingUpdates' is started with wrong credentials. Fix a bug where WPP crashes if ArrivalDateAfter and ArrivalDateBefore are equal in the ComputerView. Add a filter in the ComputerView. (Thanks to NorbertFe for this feature request.) Add an option, when right-clicking on a computer, to display the current logon user of the remote computer. Add an option in settings to choose if WPP pings remote computers using IPv4, IPv6 or IPv6 and, if fail, IP...
    - Lab Of Things: vBeta1: Initial release of LoT
    - VidCoder: 1.4.23: New in 1.4.23: added French translation; fixed non-x264 video encoders not sticking in video tab. New in 1.4: updated HandBrake core to 0.9.9; Blu-ray subtitle (PGS) support; additional framerates: 30, 50, 59.94, 60; additional sample rates: 8, 11.025, 12 and 16 kHz; additional higher bitrates for audio; Same as Source Constant Framerate; 24-bit FLAC encoding; added Windows Phone 8 and Apple TV 3 presets; introduced process isolation for encodes. Now if HandBrake crashes, VidCoder will ...

    New Projects
    - [C#]A Complete MySQL Manager Project: An open-source MySQL database manager oriented to multi-threading environments, supporting creating an entire database from scratch and editing it.
    - bCal Javascript Calendar Generator: Using this simple javascript library, you can create simple navigable calendars in your website with just one line of user code.
    - Better Network: Tools to delete network profiles and network lists in Windows 8.
    - C# Intellisense for Notepad++: 'CS-Script Intellisense' is a real C# intellisense solution based on CS-Script and ICSharpCode.NRefactory/Mono.Cecil.
    - DARF: Dynamic Application Runtime Framework: A set of tools and libraries which help software developers create fully distributed and component-based applications.
    - FeiXinV5: Just Test
    - Follower Catcher: Game made for the 2012 Windows 8 Megathon. Second place in Madrid's local competition.
    - MeeOS .NET: The first GNU GPL operating system in Spanish.
    - Music Vault: Music Vault is a program to store your favorite music in one place; you can at any time move your whole music library by moving only one file.
    - MVC uTorrent: This is an ASP.NET MVC uTorrent front end that utilises the uTorrent API. The main features: single page application; can be hosted in IIS.
    - MyTaskManager: This project relates to distributing tasks to logged-in users.
    - OO2RDF: OO2RDF is a framework for data triplification.
    - P(y)rotein Subcellular Location Image Database: A library that allows communication with an OMERO database, an open-source software for storage and manipulation of biological images.
    - PE Toolbox: PE IFE (Importing from Excel)
    - PeGscan: *PeGscan*
    - ScriptZilla: ScriptZilla is a free editor for Windows with more than 23 languages.
    - Sharepoint: Reconciliate SSRS Reports and Datasets with datasources Powershell: Powershell to link datasources to SSRS reports and datasets.
    - Sync-Moped: The idea behind the "Sync-Moped" is to synchronize two directories with backup functionality.
    - System.Time: System.Time provides a type for .NET that allows the managing of a time in System.String, System.DateTime and System.TimeSpan form at the same time.
    - Text Analytics: Project summary will be added later.
    - Tibia Universal MC .NET: This tool was written as an attempt to port the [url:http://code.google.com/p/xenomc/] to the .NET Framework.
    - VRFramework: VR Framework is a system for creating virtual reality games using your PC, an Android smart phone, and Unity 3D.

    Read the article

  • Linux DNS Multi tenant

    - by spicyramen
    I need to set up a multi-tenant DNS solution on a Linux DNS server. Currently I serve multiple companies: Company ABC, Company XYZ, etc... I need to create a) forward zones and b) reverse zones. I can easily create a forward zone with the domain abc.com. The challenge I have is that each of my customers' components share the same IP address. Hence, if I create the reverse zones I end up with something like this:

        abc.com  1.1.1.1  host.abc.com
        xyz.com  1.1.1.1  host.xyz.com

    A lookup on host.abc.com works fine, but if I do a reverse lookup on 1.1.1.1 I get a load-balanced response of:

        attempt: host.abc.com
        attempt: host.xyz.com
        attempt: host.abc.com

    Any ideas? I want to add logic to the DNS configuration to handle reverse lookups based on the source machine and respond with the right hostname. Workaround: create multiple DNS servers, but this is not scalable.

    Read the article

  • Automated Deployment of Windows Application

    - by Phillip Roux
    Our development team is looking to automate our application deployment onto multiple servers. We need to control multiple Windows servers at once (stopping the database server & web server). We have an ASP.NET project, database upgrade scripts, reports, and various Windows services that potentially need to be updated during deployment. Currently we use Jenkins as our CI server to run our unit tests. If something goes wrong in the deployment process, we would like the ability to roll back. What tools are recommended to automate our deployment process?

    Read the article

  • Triggers, Service Broker, CDC or Change Tracking?

    - by Derek D.
    When one trigger inserts into a table and that table also contains a trigger, this is a “nested trigger”. The reason that nested triggers are a concern is that the first call that performs the initial insert does not return until the last trigger in the sequence is complete. In trying to circumvent this [...]
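    The excerpt discusses SQL Server; purely to illustrate the chaining behaviour described above, here is a minimal sketch using SQLite through Python's sqlite3 module (the table and trigger names are invented for the example).

        import sqlite3

        # In-memory database with a two-step trigger chain:
        # orders -> order_log -> log_count.
        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE orders    (id INTEGER PRIMARY KEY, amount REAL);
            CREATE TABLE order_log (order_id INTEGER, note TEXT);
            CREATE TABLE log_count (n INTEGER);

            -- First trigger: inserting into orders writes to order_log ...
            CREATE TRIGGER trg_orders AFTER INSERT ON orders
            BEGIN
                INSERT INTO order_log VALUES (NEW.id, 'logged');
            END;

            -- ... and order_log's own trigger then fires: the nested trigger.
            CREATE TRIGGER trg_order_log AFTER INSERT ON order_log
            BEGIN
                INSERT INTO log_count VALUES (1);
            END;
        """)

        # This INSERT does not return until both triggers have completed,
        # which is exactly the latency concern the post raises.
        conn.execute("INSERT INTO orders (amount) VALUES (9.99)")
        print(conn.execute("SELECT COUNT(*) FROM order_log").fetchone())  # (1,)
        print(conn.execute("SELECT COUNT(*) FROM log_count").fetchone())  # (1,)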

    Read the article

  • SQL Spatial: Getting “nearest” calculations working properly

    - by Rob Farley
    If you’ve ever done spatial work with SQL Server, I hope you’ve come across the ‘nearest’ problem. You have five thousand stores around the world, and you want to identify the one that’s closest to a particular place. Maybe you want the store closest to the LobsterPot office in Adelaide, at -34.925806, 138.605073. Or our new US office, at 42.524929, -87.858244. Or maybe both! You know how to do this. You don’t want to use an aggregate MIN or MAX, because you want the whole row, telling you which store it is. You want to use TOP, and if you want to find the closest store for multiple locations, you use APPLY. Let’s do this (but I’m going to use addresses in AdventureWorks2012, as I don’t have a list of stores). Oh, and before I do, let’s make sure we have a spatial index in place. I’m going to use the default options.

    CREATE SPATIAL INDEX spin_Address ON Person.Address(SpatialLocation);

    And my actual query:

    WITH MyLocations AS
    (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),
                           ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))
                   ) t (Name, Geo))
    SELECT l.Name, a.AddressLine1, a.City, s.Name AS [State], c.Name AS Country
    FROM MyLocations AS l
    CROSS APPLY (
        SELECT TOP (1) *
        FROM Person.Address AS ad
        ORDER BY l.Geo.STDistance(ad.SpatialLocation)
        ) AS a
    JOIN Person.StateProvince AS s
        ON s.StateProvinceID = a.StateProvinceID
    JOIN Person.CountryRegion AS c
        ON c.CountryRegionCode = s.CountryRegionCode
    ;

    Great! This is definitely working. I know both those City locations, even if the AddressLine1s don’t quite ring a bell. I’m sure I’ll be able to find them next time I’m in the area. But of course what I’m concerned about from a querying perspective is what’s happened behind the scenes – the execution plan. This isn’t pretty. It’s not using my index. It’s sucking every row out of the Address table TWICE (which sucks), and then it’s sorting them by the distance to find the smallest one. It’s not pretty, and it takes a while. Mind you, I do like the fact that it saw an indexed view it could use for the State and Country details – that’s pretty neat. But yeah – users of my nifty website aren’t going to like how long that query takes. The frustrating thing is that I know that I can use the index to find locations that are within a particular distance of my locations quite easily, and Microsoft recommends this for solving the ‘nearest’ problem, as described at http://msdn.microsoft.com/en-au/library/ff929109.aspx. Now, in the first example on this page, it says that the query there will use the spatial index. But when I run it on my machine, it does nothing of the sort. I’m not particularly impressed. But what we see here is that parallelism has kicked in. In my scenario, it’s split the data up into 4 threads, but it’s still slow, and not using my index. It’s disappointing. But I can persuade it with hints! If I tell it to FORCESEEK, or use my index, or even turn off the parallelism with MAXDOP 1, then I get the index being used, and it’s a thing of beauty! Part of the plan is here: it’s massive, and it’s ugly, and it uses a TVF… but it’s quick. The way it works is to hook into the GeodeticTessellation function, which essentially finds where the point is, and works outward through the spatial index cells that surround it. This then provides a framework to be able to see into the spatial index for the items we want.
    You can read more about it at http://msdn.microsoft.com/en-us/library/bb895265.aspx#tessellation – including a bunch of pretty diagrams. It’s one of those times when we have a much more complex-looking plan, but only because of the good work that’s going on under the covers. This nearest-neighbour optimisation was introduced in SQL Server 2012. But my query isn’t using it. When I try to use the FORCESEEK hint on the Person.Address table, I get the friendly error:

    Msg 8622, Level 16, State 1, Line 1
    Query processor could not produce a query plan because of the hints defined in this query. Resubmit the query without specifying any hints and without using SET FORCEPLAN.

    And I’m almost tempted to just give up and move back to the old method of checking increasingly large circles around my location. After all, I can even leverage multiple OUTER APPLY clauses just like I did in my recent Lookup post.

    WITH MyLocations AS
    (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),
                           ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))
                   ) t (Name, Geo))
    SELECT
        l.Name,
        COALESCE(a1.AddressLine1,a2.AddressLine1,a3.AddressLine1),
        COALESCE(a1.City,a2.City,a3.City),
        s.Name AS [State],
        c.Name AS Country
    FROM MyLocations AS l
    OUTER APPLY (
        SELECT TOP (1) *
        FROM Person.Address AS ad
        WHERE l.Geo.STDistance(ad.SpatialLocation) < 1000
        ORDER BY l.Geo.STDistance(ad.SpatialLocation)
        ) AS a1
    OUTER APPLY (
        SELECT TOP (1) *
        FROM Person.Address AS ad
        WHERE l.Geo.STDistance(ad.SpatialLocation) < 5000
        AND a1.AddressID IS NULL
        ORDER BY l.Geo.STDistance(ad.SpatialLocation)
        ) AS a2
    OUTER APPLY (
        SELECT TOP (1) *
        FROM Person.Address AS ad
        WHERE l.Geo.STDistance(ad.SpatialLocation) < 20000
        AND a2.AddressID IS NULL
        ORDER BY l.Geo.STDistance(ad.SpatialLocation)
        ) AS a3
    JOIN Person.StateProvince AS s
        ON s.StateProvinceID = COALESCE(a1.StateProvinceID,a2.StateProvinceID,a3.StateProvinceID)
    JOIN Person.CountryRegion AS c
        ON c.CountryRegionCode = s.CountryRegionCode
    ;

    But this isn’t friendly-looking at all, and I’d use the method recommended by Isaac Kunen, who uses a table of numbers for the expanding circles. It feels old-school though, when I’m dealing with SQL 2012 (and later) versions. So why isn’t my query doing what it’s supposed to? Remember the query...

    WITH MyLocations AS
    (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),
                           ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))
                   ) t (Name, Geo))
    SELECT l.Name, a.AddressLine1, a.City, s.Name AS [State], c.Name AS Country
    FROM MyLocations AS l
    CROSS APPLY (
        SELECT TOP (1) *
        FROM Person.Address AS ad
        ORDER BY l.Geo.STDistance(ad.SpatialLocation)
        ) AS a
    JOIN Person.StateProvince AS s
        ON s.StateProvinceID = a.StateProvinceID
    JOIN Person.CountryRegion AS c
        ON c.CountryRegionCode = s.CountryRegionCode
    ;

    Well, I just wasn’t reading http://msdn.microsoft.com/en-us/library/ff929109.aspx properly. The following requirements must be met for a Nearest Neighbor query to use a spatial index:

    - A spatial index must be present on one of the spatial columns, and the STDistance() method must use that column in the WHERE and ORDER BY clauses.
    - The TOP clause cannot contain a PERCENT statement.
    - The WHERE clause must contain a STDistance() method.
    - If there are multiple predicates in the WHERE clause, then the predicate containing the STDistance() method must be connected by an AND conjunction to the other predicates. The STDistance() method cannot be in an optional part of the WHERE clause.
    - The first expression in the ORDER BY clause must use the STDistance() method.
    - Sort order for the first STDistance() expression in the ORDER BY clause must be ASC.
    - All the rows for which STDistance returns NULL must be filtered out.

    Let’s start from the top.

    1. Needs a spatial index on one of the columns that’s in the STDistance call. Yup, got the index.
    2. No ‘PERCENT’. Yeah, I don’t have that.
    3. The WHERE clause needs to use STDistance(). Ok, but I’m not filtering, so that should be fine.
    4. Yeah, I don’t have multiple predicates.
    5. The first expression in the ORDER BY is my distance, that’s fine.
    6. Sort order is ASC, because otherwise we’d be starting with the ones that are furthest away, and that’s tricky.
    7. All the rows for which STDistance returns NULL must be filtered out. But I don’t have any NULL values, so that shouldn’t affect me either.

    ...but something’s wrong. I do actually need to satisfy #3. And I do need to make sure #7 is being handled properly, because there are some situations (eg, differing SRIDs) where STDistance can return NULL. It says so at http://msdn.microsoft.com/en-us/library/bb933808.aspx – “STDistance() always returns null if the spatial reference IDs (SRIDs) of the geography instances do not match.” So if I simply make sure that I’m filtering out the rows that return NULL (as sketched below)… …then it’s blindingly fast, I get the right results, and I’ve got the complex-but-brilliant plan that I wanted. It just wasn’t overly intuitive, despite being documented. @rob_farley
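    To make that concrete, here is a sketch of the adjusted query, based on the fix described above (filtering out NULL distances inside the APPLY, which satisfies requirements #3 and #7 at once):

    WITH MyLocations AS
    (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),
                           ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))
                   ) t (Name, Geo))
    SELECT l.Name, a.AddressLine1, a.City, s.Name AS [State], c.Name AS Country
    FROM MyLocations AS l
    CROSS APPLY (
        SELECT TOP (1) *
        FROM Person.Address AS ad
        -- the NULL filter is what lets the optimizer pick the spatial-index plan
        WHERE l.Geo.STDistance(ad.SpatialLocation) IS NOT NULL
        ORDER BY l.Geo.STDistance(ad.SpatialLocation)
        ) AS a
    JOIN Person.StateProvince AS s
        ON s.StateProvinceID = a.StateProvinceID
    JOIN Person.CountryRegion AS c
        ON c.CountryRegionCode = s.CountryRegionCode
    ;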

    Read the article

  • Is duck typing a subset of polymorphism

    - by Raynos
    From Polymorphism on Wikipedia: In computer science, polymorphism is a programming language feature that allows values of different data types to be handled using a uniform interface. From duck typing on Wikipedia: In computer programming with object-oriented programming languages, duck typing is a style of dynamic typing in which an object's current set of methods and properties determines the valid semantics, rather than its inheritance from a particular class or implementation of a specific interface. My interpretation is that, based on duck typing, the object's methods/properties determine the valid semantics. Meaning that the object's current shape determines the interface it upholds. From polymorphism you can say a function is polymorphic if it accepts multiple different data types as long as they uphold an interface. So if a function can duck type, it can accept multiple different data types and operate on them as long as those data types have the correct methods/properties and thus uphold the interface. (Usage of the term interface is meant not as a code construct but more as a descriptive, documenting construct.) What is the correct relationship between duck typing and polymorphism? If a language can duck type, does it mean it can do polymorphism?

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, each of which has the same structure: 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows:

    USE master;
    GO
    IF DB_ID('SortTest') IS NOT NULL
    DROP DATABASE SortTest;
    GO
    CREATE DATABASE SortTest
    COLLATE LATIN1_GENERAL_BIN;
    GO
    ALTER DATABASE SortTest
    MODIFY FILE ( NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB );
    GO
    ALTER DATABASE SortTest
    MODIFY FILE ( NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB );
    GO
    ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF ;
    ALTER DATABASE SortTest SET AUTO_CLOSE OFF ;
    ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON ;
    ALTER DATABASE SortTest SET AUTO_SHRINK OFF ;
    ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON ;
    ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON ;
    ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE ;
    ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF ;
    ALTER DATABASE SortTest SET MULTI_USER ;
    ALTER DATABASE SortTest SET RECOVERY SIMPLE ;

    USE SortTest;
    GO
    CREATE TABLE dbo.TestCHAR
    (
        id INTEGER IDENTITY (1,1) NOT NULL,
        padding CHAR(3999) NOT NULL,
        CONSTRAINT [PK dbo.TestCHAR (id)]
            PRIMARY KEY CLUSTERED (id)
    ) ;
    CREATE TABLE dbo.TestMAX
    (
        id INTEGER IDENTITY (1,1) NOT NULL,
        padding VARCHAR(MAX) NOT NULL,
        CONSTRAINT [PK dbo.TestMAX (id)]
            PRIMARY KEY CLUSTERED (id)
    ) ;
    CREATE TABLE dbo.TestTEXT
    (
        id INTEGER IDENTITY (1,1) NOT NULL,
        padding TEXT NOT NULL,
        CONSTRAINT [PK dbo.TestTEXT (id)]
            PRIMARY KEY CLUSTERED (id)
    ) ;

    -- =============
    -- Load TestCHAR (about 3s)
    -- =============
    INSERT INTO dbo.TestCHAR WITH (TABLOCKX) ( padding )
    SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999)
    FROM (
        SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1
        FROM master.sys.columns C1,
            master.sys.columns C2,
            master.sys.columns C3
        ORDER BY n ASC
    ) AS Data
    ORDER BY Data.n ASC ;

    -- ============
    -- Load TestMAX (about 3s)
    -- ============
    INSERT INTO dbo.TestMAX WITH (TABLOCKX) ( padding )
    SELECT CONVERT(VARCHAR(MAX), padding)
    FROM dbo.TestCHAR
    ORDER BY id ;

    -- =============
    -- Load TestTEXT (about 5s)
    -- =============
    INSERT INTO dbo.TestTEXT WITH (TABLOCKX) ( padding )
    SELECT CONVERT(TEXT, padding)
    FROM dbo.TestCHAR
    ORDER BY id ;

    -- ==========
    -- Space used
    -- ==========
    EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR';
    EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX';
    EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT';

    CHECKPOINT ;

    That takes around 15 seconds to run, and shows the space allocated to each table in its output. To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.
    The basic shape of the test query is the same for each of the three test tables:

    SELECT TOP (150) T.id, T.padding
    FROM dbo.Test AS T
    ORDER BY NEWID()
    OPTION (MAXDOP 1) ;

    Test 1 – CHAR(3999)

    Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution.

    DECLARE @read BIGINT, @write BIGINT ;

    SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written)
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS' ;

    SET STATISTICS IO ON ;

    SELECT TOP (150) TC.id, TC.padding
    FROM dbo.TestCHAR AS TC
    ORDER BY NEWID()
    OPTION (MAXDOP 1) ;

    SET STATISTICS IO OFF ;

    SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
        tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
        internal_use_MB = (
            SELECT internal_objects_alloc_page_count / 128.0
            FROM sys.dm_db_task_space_usage
            WHERE session_id = @@SPID )
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS' ;

    Let’s take a closer look at the statistics and query plan generated from this. Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect.
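    (As an aside, and my addition rather than part of the original post: the post uses a Profiler trace to catch these warnings, but on SQL Server 2012 and later the same signal can be captured more cheaply with an Extended Events session on the sort_warning event.)

    -- lightweight alternative to a Profiler trace for catching spilled sorts (SQL Server 2012+)
    CREATE EVENT SESSION [SortWarnings] ON SERVER
    ADD EVENT sqlserver.sort_warning
    ADD TARGET package0.ring_buffer ;

    ALTER EVENT SESSION [SortWarnings] ON SERVER STATE = START ;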
    In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall. If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post.

    CHAR(3999) Performance Summary:
    - 5 seconds elapsed time
    - 243MB memory grant
    - 195MB tempdb usage
    - 192MB estimated sort set
    - 25,043 logical reads
    - Sort Warning

    Test 2 – VARCHAR(MAX)

    We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column:

    DECLARE @read BIGINT, @write BIGINT ;

    SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written)
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS' ;

    SET STATISTICS IO ON ;

    SELECT TOP (150) TM.id, TM.padding
    FROM dbo.TestMAX AS TM
    ORDER BY NEWID()
    OPTION (MAXDOP 1) ;

    SET STATISTICS IO OFF ;

    SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
        tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
        internal_use_MB = (
            SELECT internal_objects_alloc_page_count / 128.0
            FROM sys.dm_db_task_space_usage
            WHERE session_id = @@SPID )
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS' ;

    This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting.

    MAX Performance Summary:
    - 8 seconds elapsed time
    - 245MB memory grant
    - 391MB tempdb usage
    - 193MB estimated sort set
    - 25,043 logical reads
    - Sort warning

    Test 3 – TEXT

    The same test again, but using the deprecated TEXT data type for the padding column:

    DECLARE @read BIGINT, @write BIGINT ;

    SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written)
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS' ;

    SET STATISTICS IO ON ;

    SELECT TOP (150) TT.id, TT.padding
    FROM dbo.TestTEXT AS TT
    ORDER BY NEWID()
    OPTION (MAXDOP 1, RECOMPILE) ;

    SET STATISTICS IO OFF ;

    SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
        tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
        internal_use_MB = (
            SELECT internal_objects_alloc_page_count / 128.0
            FROM sys.dm_db_task_space_usage
            WHERE session_id = @@SPID )
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS' ;

    This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why:

    TEXT Performance Summary:
    - 0.5 seconds elapsed time
    - 9MB memory grant
    - 5MB tempdb usage
    - 5MB estimated sort set
    - 207 logical reads
    - 596 LOB logical reads
    - Sort warning

    SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here?

    TEXT versus MAX Storage

    The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data. SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now). OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: Why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)? The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The default behaviour of the TEXT type is to be stored off-row, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data.

    The MAXOOR Table

    The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.
    This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages:

    CREATE TABLE dbo.TestMAXOOR
    (
        id INTEGER IDENTITY (1,1) NOT NULL,
        padding VARCHAR(MAX) NOT NULL,
        CONSTRAINT [PK dbo.TestMAXOOR (id)]
            PRIMARY KEY CLUSTERED (id)
    ) ;

    EXECUTE sys.sp_tableoption
        @TableNamePattern = N'dbo.TestMAXOOR',
        @OptionName = 'large value types out of row',
        @OptionValue = 'true' ;

    SELECT large_value_types_out_of_row
    FROM sys.tables
    WHERE [schema_id] = SCHEMA_ID(N'dbo')
    AND name = N'TestMAXOOR' ;

    INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) ( padding )
    SELECT SPACE(0)
    FROM dbo.TestCHAR
    ORDER BY id ;

    UPDATE TM WITH (TABLOCK)
    SET padding.WRITE (TC.padding, NULL, NULL)
    FROM dbo.TestMAXOOR AS TM
    JOIN dbo.TestCHAR AS TC
        ON TC.id = TM.id ;

    EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR' ;

    CHECKPOINT ;

    Test 4 – MAXOOR

    We can now re-run our test on the MAXOOR (MAX out of row) table:

    DECLARE @read BIGINT, @write BIGINT ;

    SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written)
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS' ;

    SET STATISTICS IO ON ;

    SELECT TOP (150) MO.id, MO.padding
    FROM dbo.TestMAXOOR AS MO
    ORDER BY NEWID()
    OPTION (MAXDOP 1, RECOMPILE) ;

    SET STATISTICS IO OFF ;

    SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
        tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
        internal_use_MB = (
            SELECT internal_objects_alloc_page_count / 128.0
            FROM sys.dm_db_task_space_usage
            WHERE session_id = @@SPID )
    FROM tempdb.sys.database_files AS DBF
    JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
        ON FS.file_id = DBF.file_id
    WHERE DBF.type_desc = 'ROWS' ;

    MAXOOR Performance Summary:
    - 0.3 seconds elapsed time
    - 245MB memory grant
    - 0MB tempdb usage
    - 193MB estimated sort set
    - 207 logical reads
    - 446 LOB logical reads
    - No sort warning

    The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort.

    The Hidden Problem

    There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row). The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’.  Why?  Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product. So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need.  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.
    Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use.

    The Solution

    The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order. The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables:

    SET STATISTICS IO ON ;

    WITH TestTable AS ( SELECT * FROM dbo.TestCHAR ),
    TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() )
    SELECT T1.id, T1.padding
    FROM TestTable AS T1
    WHERE T1.id = ANY (SELECT id FROM TopKeys)
    OPTION (MAXDOP 1) ;

    WITH TestTable AS ( SELECT * FROM dbo.TestMAX ),
    TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() )
    SELECT T1.id, T1.padding
    FROM TestTable AS T1
    WHERE T1.id IN (SELECT id FROM TopKeys)
    OPTION (MAXDOP 1) ;

    WITH TestTable AS ( SELECT * FROM dbo.TestTEXT ),
    TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() )
    SELECT T1.id, T1.padding
    FROM TestTable AS T1
    WHERE T1.id IN (SELECT id FROM TopKeys)
    OPTION (MAXDOP 1) ;

    WITH TestTable AS ( SELECT * FROM dbo.TestMAXOOR ),
    TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() )
    SELECT T1.id, T1.padding
    FROM TestTable AS T1
    WHERE T1.id IN (SELECT id FROM TopKeys)
    OPTION (MAXDOP 1) ;

    SET STATISTICS IO OFF ;

    All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries:

    Table 'TestCHAR' logical reads 25511 lob logical reads 000
    Table 'TestMAX' logical reads 25511 lob logical reads 000
    Table 'TestTEXT' logical reads 00412 lob logical reads 597
    Table 'TestMAXOOR' logical reads 00413 lob logical reads 446

    We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data.

    CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id);
    CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id);
    CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id);
    CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id);

    The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data.  The logical reads with the new indexes in place are:

    Table 'TestCHAR' logical reads 835 lob logical reads 0
    Table 'TestMAX' logical reads 835 lob logical reads 0
    Table 'TestTEXT' logical reads 686 lob logical reads 597
    Table 'TestMAXOOR' logical reads 686 lob logical reads 448

    With the new index, all four queries use the same query plan.

    Performance Summary:
    - 0.3 seconds elapsed time
    - 6MB memory grant
    - 0MB tempdb usage
    - 1MB sort set
    - 835 logical reads (CHAR, MAX)
    - 686 logical reads (TEXT, MAXOOR)
    - 597 LOB logical reads (TEXT)
    - 448 LOB logical reads (MAXOOR)
    - No sort warning

    I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea.

    Conclusion

    This post is not about tuning queries that access columns containing big strings.  It isn’t about the internal differences between TEXT and MAX data types either.  It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load.  No, this post is about something else: many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance.  Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows.  5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is!  If ten sessions ran that query at the same time in production that’s 2.5GB of memory usage and 2GB hitting tempdb.  Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either. The point of this post is that a basic understanding of execution plans is not enough.  Tuning for logical reads and adding covering indexes is not enough.  If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on.  The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances. By the way, the examples in this post were written for SQL Server 2008.  They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator.  Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch… This post is dedicated to the people of Christchurch, New Zealand. © 2011 Paul White twitter: @SQL_Kiwi
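    (One footnote on the storage options discussed above, added here rather than taken from the original post: the ‘text in row’ option that controls TEXT storage is set with the same sys.sp_tableoption procedure used for the MAXOOR table, with the option value giving the in-row size limit in bytes.)

    -- keep TEXT values up to 1000 bytes on the data page itself;
    -- larger values still move off-row to LOB pages
    EXECUTE sys.sp_tableoption
        @TableNamePattern = N'dbo.TestTEXT',
        @OptionName = 'text in row',
        @OptionValue = '1000' ;

    Like ‘large value types out of row’, this option is dynamic: existing rows only migrate when the data is actually modified.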

    Read the article

  • SQL SERVER – Disk Space Monitoring – Detecting Low Disk Space on Server

    - by Pinal Dave
    A very common question I often receive is how to detect if the disk space is running low on SQL Server. There are two different ways to do the same. I personally prefer method 2, as it is very easy to use and I can use it creatively along with the database name.

    Method 1:

    EXEC MASTER..xp_fixeddrives
    GO

    The above query returns two columns: drive name and MB free. If we want to use this data in our queries, we have to create a temporary table, insert the data from this stored procedure into the temporary table, and use it (a sketch of this approach follows at the end of this item).

    Method 2:

    SELECT DISTINCT dovs.logical_volume_name AS LogicalName,
    dovs.volume_mount_point AS Drive,
    CONVERT(INT,dovs.available_bytes/1048576.0) AS FreeSpaceInMB
    FROM sys.master_files mf
    CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.FILE_ID) dovs
    ORDER BY FreeSpaceInMB ASC
    GO

    The above query gives us three columns: drive logical name, drive letter and free space in MB. We can further modify the above query to also include the database name:

    SELECT DISTINCT DB_NAME(dovs.database_id) DBName,
    dovs.logical_volume_name AS LogicalName,
    dovs.volume_mount_point AS Drive,
    CONVERT(INT,dovs.available_bytes/1048576.0) AS FreeSpaceInMB
    FROM sys.master_files mf
    CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.FILE_ID) dovs
    ORDER BY FreeSpaceInMB ASC
    GO

    This gives us additional data about which database is placed on which drive. If you see a database name multiple times, it is because your database has multiple files and they are on different drives. You can modify the above query one more time to even include the details of the actual file location:

    SELECT DISTINCT DB_NAME(dovs.database_id) DBName,
    mf.physical_name PhysicalFileLocation,
    dovs.logical_volume_name AS LogicalName,
    dovs.volume_mount_point AS Drive,
    CONVERT(INT,dovs.available_bytes/1048576.0) AS FreeSpaceInMB
    FROM sys.master_files mf
    CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.FILE_ID) dovs
    ORDER BY FreeSpaceInMB ASC
    GO

    The above query will now additionally include the physical file location as well. As I mentioned earlier, I prefer method 2 as I can creatively use it as per the business need. Let me know which method you are using in your production servers.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
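    For reference, here is a sketch (my addition, not from the original post) of the temporary-table approach mentioned under Method 1. The #FixedDrives name and the 10GB threshold are arbitrary choices for illustration:

    -- capture the output of xp_fixeddrives so it can be queried
    CREATE TABLE #FixedDrives (DriveName CHAR(1), MBFree INT) ;

    INSERT INTO #FixedDrives (DriveName, MBFree)
    EXEC MASTER..xp_fixeddrives ;

    -- flag any drive with less than 10GB free
    SELECT DriveName, MBFree
    FROM #FixedDrives
    WHERE MBFree < 10240
    ORDER BY MBFree ASC ;

    DROP TABLE #FixedDrives ;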

    Read the article

  • SQL SERVER – DMV – sys.dm_exec_query_optimizer_info – Statistics of Optimizer

    - by pinaldave
    Incredibly, SQL Server has so much information to share with us. Every single day, I am amazed by this SQL Server technology. Sometimes I find interesting information just by querying a few of the DMVs, and when I present this info in front of my clients during performance tuning consultancy, they are surprised by my findings. Today, I am going to share one of the hidden gems of the DMVs with you, one which I frequently use to understand what’s going on under the hood of SQL Server. SQL Server keeps a record of most of the operations of the Query Optimizer. We can learn many interesting details about the optimizer which can be utilized to improve the performance of the server.

    SELECT *
    FROM sys.dm_exec_query_optimizer_info
    WHERE counter IN ('optimizations', 'elapsed time','final cost',
    'insert stmt','delete stmt','update stmt',
    'merge stmt','contains subquery','tables',
    'hints','order hint','join hint',
    'view reference','remote query','maximum DOP',
    'maximum recursion level','indexed views loaded',
    'indexed views matched','indexed views used',
    'indexed views updated','dynamic cursor request',
    'fast forward cursor request')

    All occurrence values are cumulative and are set to 0 at system restart. All values for value fields are set to NULL at system restart. I have removed a few of the internal counters from the script above, and kept only documented details. Let us check the result of the above query. As you can see, there is so much vital information revealed by it. I can easily tell how many times the Optimizer was triggered, and the average time it took to optimize my queries. Additionally, I can also determine how many times update, insert or delete statements were optimized. I was able to quickly figure out that my client was overusing Query Hints using this dynamic management view. If you have been reading my blog, I am sure you are aware of my series related to SQL Server Views, SQL SERVER – The Limitations of the Views – Eleven and more…. With this, I can take a quick look and figure out how many times Views were used in various solutions within the query. Moreover, you can easily work out what fraction of all optimizations involved a particular feature. For example, the following query tells me what fraction of the total optimizations “referenced” a View. As this also includes system Views and DMVs, the number is a bit higher on my machine.

    SELECT (SELECT CAST (occurrence AS FLOAT)
    FROM sys.dm_exec_query_optimizer_info
    WHERE counter = 'view reference') /
    (SELECT CAST (occurrence AS FLOAT)
    FROM sys.dm_exec_query_optimizer_info
    WHERE counter = 'optimizations') AS ViewReferencedFraction

    Reference : Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
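    As another quick example (my sketch, not from the original post): for the ‘elapsed time’ counter, occurrence holds the number of optimizations and value holds the average elapsed time per optimization in seconds, so a rough figure for the total time spent in optimization since restart can be derived like this:

    SELECT counter,
    occurrence AS optimizations,
    value AS avg_elapsed_seconds,
    occurrence * value AS approx_total_elapsed_seconds -- estimate only
    FROM sys.dm_exec_query_optimizer_info
    WHERE counter = 'elapsed time' ;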

    Read the article

  • MySql Replication with a star topology

    - by Riotopsys
    My company currently operates in 3 separate locations connected by slow VPN links. Each site hosts a dedicated MySQL server. I need to aggregate the data from all three of them onto a single server for corporate reporting. The powers that be have stated I cannot use circular replication or federated tables. Is there a third-party tool for MySQL that can replicate from multiple masters? Basically, the diagram would be a daisy, with the reporting-server slave at the center and multiple replication connections coming in from the master sites on the petals.
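    Worth noting here (my addition, not part of the original question): MySQL 5.7 and later support exactly this fan-in topology natively via multi-source replication, with one named channel per master, so on a recent version no third-party tool or circular setup is needed. A sketch, run on the reporting slave, with hypothetical host names and credentials:

    -- requires master_info_repository=TABLE and relay_log_info_repository=TABLE
    -- one channel per master site
    CHANGE MASTER TO MASTER_HOST='site1.example.com', MASTER_USER='repl',
        MASTER_PASSWORD='secret', MASTER_AUTO_POSITION=1 FOR CHANNEL 'site1';
    CHANGE MASTER TO MASTER_HOST='site2.example.com', MASTER_USER='repl',
        MASTER_PASSWORD='secret', MASTER_AUTO_POSITION=1 FOR CHANNEL 'site2';
    CHANGE MASTER TO MASTER_HOST='site3.example.com', MASTER_USER='repl',
        MASTER_PASSWORD='secret', MASTER_AUTO_POSITION=1 FOR CHANNEL 'site3';

    START SLAVE FOR CHANNEL 'site1';
    START SLAVE FOR CHANNEL 'site2';
    START SLAVE FOR CHANNEL 'site3';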

    Read the article

  • Can't play stream from TorrentFlux stream

    - by thegreyspot
    Hi, I am trying to stream a video from my TorrentFlux-b4rt server. I have tried multiple media players; none of them work. Only VLC was able to produce an error message: input can't be opened: VLC is unable to open the MRL 'mms://...:8080/'. Check the log for details. I have tried multiple computers on different networks and all have the same issue. I am using Windows 7 to play the videos, and the server is Torrentflux-b4rt 1.0-beta2 on Ubuntu 9.10. Thank you.

    Read the article

  • Component Development within SOA

    How do the concepts of component development work within SOA? Let’s first break this question down by defining what component development is. Component development is the process of implementing specific functionality in the form of small units of compiled code that can be reused across multiple applications or product families. Typically, components are integrated with other components, forming composite components. In general, most interaction between components is done through interfaces to promote loose coupling. The concept of loose coupling refers to interconnecting the components of a system so that they depend on each other only through contracts defined by interfaces. A real-life example of this can be experienced while using Legos to build a structure. If we consider each Lego block as a component, then when two or more Legos are connected they form a composite component, due to the fact that the structure is made up of multiple components.   It is important to note that composite components can be made from standard components and other composite components. Eventually, as various components and composite components become interconnected, a structure begins to form in the shape of an application, or in the case of Legos, in the form of a Lego structure. Software components can loosely be defined as small units of related, implemented functionality that can communicate with other components or may have dependencies on other components. Based on the definitions provided above, it is my personal opinion that SOA works well with the concepts of component development. The SOA architectural style focuses on creating loosely coupled services. Each service, much like a component, offers related functionality that can be accessed by various requesting clients.  In addition, services, like components, can be composed: services can be built on other services to form composite services. In summary, based on the comparison above, the concepts of component development fit naturally within SOA.

    Read the article

  • Picking a code review tool

    - by marcog
    We are a startup looking to migrate from Fogbugz/Kiln to a new issue tracker/code review system. We are very happy with Jira, especially the configurability, but we are undecided on a code review tool. We have been trialing Bitbucket, but it doesn't fit our workflow well. Here are the problems we have identified with BB:

    - Comments can be hard to find:
      - when commenting on code not visible in the diff
      - when code that is commented on is later changed
      - viewing the full file doesn't include comments (and also doesn't show changes)
    - Viewing comments on individual commits can be a pain
    - We have the implementer merge the diff and close the issue, whereas pull requests are more suited to the open source model where someone with commit rights merges
    - We would like to automate creation of the code review (either from Jira or a command line tool)
    - No syntax highlighting
    - Once the pull request exceeds a certain size, BB won't show the whole thing and you have to view individual commits
    - Linking BB pull requests to Jira issues is a bit janky: we have a pull request URL field on Jira, but this doesn't work when there are changes in multiple repositories

    Does anyone have any good suggestions given the above? We are tight on budget, and Jira integration is a big plus. We also have multiple commits per issue, and would like to have the option of viewing individual commits in the review. It might also be worth noting that we have a separate reviewer and tester for each issue.

    Read the article

  • Methodology behind fetching large XML data sets in pieces

    - by Jerry Dodge
    I am working on an HTTP Server in Delphi which simply sends back a custom XML dataset. I am not following any type of standard formatting, such as SOAP. I have the system working seamlessly, except for one small flaw: when I have a very large dataset to send back to the client, it might take up to 2 minutes for all the data to be transferred. The HTTP Server I'm building is essentially an XML-data-based API around a database, implementing the common business rules; the requests are therefore specific to the data behind the system. When, for example, I fetch a large set of product data, I would like to break this down and send it back piece by piece. However, a single HTTP request calls for a single response. I can't necessarily keep feeding the client with multiple different XML packets unless the client explicitly requests them. I don't have any session management, but rather an API key. I know that if I had sessions, I could keep a dataset alive temporarily for a client, and they could request bits and pieces of it. However, without session management, I would have to execute the SQL query multiple times (for each chunk of data), and in the meantime, if that data changes, the "pages" might get out of step, causing items to show on the wrong pages after navigating to a different page. So how is this commonly handled? What's the methodology behind breaking down a large XML dataset into chunks to save the load?
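    One common way to keep stateless pages stable (a sketch of my own, not from the original question) is keyset pagination: rather than numbering pages, the client sends back the last key it received and the server seeks past it, so inserted or deleted rows cannot shift items between pages. Assuming a hypothetical Products table keyed on ProductID:

    DECLARE @LastProductID INT = 0 ;  -- 0 for the first page; thereafter the last key the client saw
    DECLARE @PageSize INT = 500 ;

    SELECT TOP (@PageSize) ProductID, Name, Price
    FROM dbo.Products
    WHERE ProductID > @LastProductID  -- seek past everything already sent
    ORDER BY ProductID ;

    The server can then serialize each page to XML and stop once a page comes back with fewer than @PageSize rows.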

    Read the article

  • guvcview recording video and audio out of synchronisation in Ubuntu 10.10

    - by SIJAR
    I finally got Guvcview, a great piece of software for Logitech webcams, and it does all the stuff that one wants out of it. But I'm not satisfied with the video recording: the video and audio are out of synchronisation, and the video seems to be in slow motion. Please help so that I can tweak it and get a good video recording with the webcam. Below is the log from Guvcview: ------------------------------------------------------------------------------- guvcview 1.4.1 video_device: /dev/video0 vid_sleep: 0 cap_meth: 1 resolution: 640 x 480 windowsize: 1024 x 715 vert pane: 578 spin behavior: 0 mode: mjpg fps: 1/25 Display Fps: 0 bpp: 0 hwaccel: 1 avi_format: 4 sound: 1 sound Device: 4 sound samp rate: 0 sound Channels: 0 Sound delay: 0 nanosec Sound Format: 85 Pan Step: 2 degrees Tilt Step: 2 degrees Video Filter Flags: 0 image inc: 0 profile(default):/home/sijar/default.gpfl starting portaudio... bt_audio_service_open: connect() failed: Connection refused (111) bt_audio_service_open: connect() failed: Connection refused (111) bt_audio_service_open: connect() failed: Connection refused (111) bt_audio_service_open: connect() failed: Connection refused (111) Cannot connect to server socket err = No such file or directory Cannot connect to server socket jack server is not running or cannot be started language catalog= dir:/usr/share/locale type:UTF-8 lang:en_US.utf8 cat:guvcview.mo mjpg: setting format to 1196444237 capture method = 1 video device: /dev/video0 libv4lconvert: warning more framesizes then I can handle! libv4lconvert: warning more framesizes then I can handle! /dev/video0 - device 1 libv4lconvert: warning more framesizes then I can handle! libv4lconvert: warning more framesizes then I can handle! Init. UVC Camera (046d:0825) (location: usb-0000:00:1d.7-5) { pixelformat = 'YUYV', description = 'YUV 4:2:2 (YUYV)' } { discrete: width = 640, height = 480 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 160, height = 120 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 176, height = 144 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 320, height = 176 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 320, height = 240 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 352, height = 288 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 432, height = 240 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 544, height = 288 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 640, height = 360 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, ... repeats a couple of times ... 
vid:046d pid:0825 driver:uvcvideo Adding control for Pan (relative) UVCIOC_CTRL_ADD - Error: Operation not permitted checking format: 1196444237 VIDIOC_G_COMP:: Invalid argument compression control not supported fps is set to 1/25 drawing controls control[0]: 0x980900 Brightness, 0:255:1, default 128 control[0]: 0x980901 Contrast, 0:255:1, default 32 control[0]: 0x980902 Saturation, 0:255:1, default 32 control[0]: 0x98090c White Balance Temperature, Auto, 0:1:1, default 1 control[0]: 0x980913 Gain, 0:255:1, default 0 control[0]: 0x980918 Power Line Frequency, 0:2:1, default 2 control[0]: 0x98091a White Balance Temperature, 0:10000:10, default 4000 control[0]: 0x98091b Sharpness, 0:255:1, default 24 control[0]: 0x98091c Backlight Compensation, 0:1:1, default 1 control[0]: 0x9a0901 Exposure, Auto, 0:3:1, default 3 control[0]: 0x9a0902 Exposure (Absolute), 1:10000:1, default 166 control[0]: 0x9a0903 Exposure, Auto Priority, 0:1:1, default 0 resolutions of format(2) = 19 frame rates of 1º resolution=6 Def. Res: 0 numb. fps:6 --------------------------------------- device #0 Name = Intel 82801DB-ICH4: Intel 82801DB-ICH4 (hw:0,0) Host API = ALSA Max inputs = 2, Max outputs = 2 Def. low input latency = 0.012 Def. low output latency = 0.012 Def. high input latency = 0.046 Def. high output latency = 0.046 Def. sample rate = 44100.00 --------------------------------------- device #1 Name = Intel 82801DB-ICH4: Intel 82801DB-ICH4 - MIC ADC (hw:0,1) Host API = ALSA Max inputs = 2, Max outputs = 0 Def. low input latency = 0.011 Def. low output latency = -1.000 Def. high input latency = 0.043 Def. high output latency = -1.000 Def. sample rate = 48000.00 --------------------------------------- device #2 Name = Intel 82801DB-ICH4: Intel 82801DB-ICH4 - MIC2 ADC (hw:0,2) Host API = ALSA Max inputs = 2, Max outputs = 0 Def. low input latency = 0.011 Def. low output latency = -1.000 Def. high input latency = 0.043 Def. high output latency = -1.000 Def. sample rate = 48000.00 --------------------------------------- device #3 Name = Intel 82801DB-ICH4: Intel 82801DB-ICH4 - ADC2 (hw:0,3) Host API = ALSA Max inputs = 2, Max outputs = 0 Def. low input latency = 0.011 Def. low output latency = -1.000 Def. high input latency = 0.043 Def. high output latency = -1.000 Def. sample rate = 48000.00 --------------------------------------- device #4 Name = Intel 82801DB-ICH4: Intel 82801DB-ICH4 - IEC958 (hw:0,4) Host API = ALSA Max inputs = 0, Max outputs = 2 Def. low input latency = -1.000 Def. low output latency = 0.011 Def. high input latency = -1.000 Def. high output latency = 0.043 Def. sample rate = 48000.00 --------------------------------------- device #5 Name = USB Device 0x46d:0x825: USB Audio (hw:1,0) Host API = ALSA Max inputs = 1, Max outputs = 0 Def. low input latency = 0.011 Def. low output latency = -1.000 Def. high input latency = 0.043 Def. high output latency = -1.000 Def. sample rate = 48000.00 --------------------------------------- device #6 Name = front Host API = ALSA Max inputs = 0, Max outputs = 2 Def. low input latency = -1.000 Def. low output latency = 0.012 Def. high input latency = -1.000 Def. high output latency = 0.046 Def. sample rate = 44100.00 --------------------------------------- device #7 Name = iec958 Host API = ALSA Max inputs = 0, Max outputs = 2 Def. low input latency = -1.000 Def. low output latency = 0.011 Def. high input latency = -1.000 Def. high output latency = 0.043 Def. 
sample rate = 48000.00 --------------------------------------- device #8 Name = spdif Host API = ALSA Max inputs = 0, Max outputs = 2 Def. low input latency = -1.000 Def. low output latency = 0.011 Def. high input latency = -1.000 Def. high output latency = 0.043 Def. sample rate = 48000.00 --------------------------------------- device #9 Name = pulse Host API = ALSA Max inputs = 32, Max outputs = 32 Def. low input latency = 0.012 Def. low output latency = 0.012 Def. high input latency = 0.046 Def. high output latency = 0.046 Def. sample rate = 44100.00 --------------------------------------- device #10 Name = dmix Host API = ALSA Max inputs = 0, Max outputs = 2 Def. low input latency = -1.000 Def. low output latency = 0.043 Def. high input latency = -1.000 Def. high output latency = 0.043 Def. sample rate = 48000.00 --------------------------------------- device #11 [ Default Input, Default Output ] Name = default Host API = ALSA Max inputs = 32, Max outputs = 32 Def. low input latency = 0.012 Def. low output latency = 0.012 Def. high input latency = 0.046 Def. high output latency = 0.046 Def. sample rate = 44100.00 ---------------------------------------------- SampleRate:0 Channels:0 Video driver: x11 A window manager is available VIDIOC_S_EXT_CTRLS for multiple controls failed (error -1) using VIDIOC_S_CTRL for user class controls control(0x0098091a) "White Balance Temperature" failed to set (error -1) VIDIOC_S_EXT_CTRLS for multiple controls failed (error -1) using VIDIOC_S_EXT_CTRLS on single controls for class: 0x009a0000 control(0x009a0902) "Exposure (Absolute)" failed to set (error -1) VIDIOC_S_EXT_CTRLS for multiple controls failed (error -1) using VIDIOC_S_CTRL for user class controls control(0x0098091a) "White Balance Temperature" failed to set (error -1) VIDIOC_S_EXT_CTRLS for multiple controls failed (error -1) using VIDIOC_S_EXT_CTRLS on single controls for class: 0x009a0000 control(0x009a0902) "Exposure (Absolute)" failed to set (error -1) Cap Video toggled: 1 (/home/sijar/Videos/Webcam) 25371756K bytes free on a total of 39908968K (used: 36 %) treshold=51200K using audio codec: 0x0055 Audio frame size is 1152 samples for selected codec IO thread started...OK [libx264 @ 0x8cbd8b0]using cpu capabilities: MMX2 SSE2 Cache64 [libx264 @ 0x8cbd8b0]profile Baseline, level 3.0 [libx264 @ 0x8cbd8b0]non-strictly-monotonic PTS shift sound by -9 ms shift sound by -9 ms shift sound by -9 ms AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data ... repeats a couple of times ... AUDIO: droping audio data (/home/sijar/Videos/Webcam) 25371748K bytes free on a total of 39908968K (used: 36 %) treshold=51200K AUDIO: droping audio data AUDIO: droping audio data ... repeats a couple of times ... Cap Video toggled: 0 Shuting Down IO Thread AUDIO: droping audio data stop= 4426644744000 start=4416533023000 VIDEO: 146 frames in 10111.000000 ms = 14.439719 fps Stoping audio stream Closing audio stream... 
close avi Last message repeated 145 times [libx264 @ 0x8cbd8b0]frame I:2 Avg QP:14.10 size: 24492 [libx264 @ 0x8cbd8b0]frame P:103 Avg QP:16.06 size: 20715 [libx264 @ 0x8cbd8b0]mb I I16..4: 48.4% 0.0% 51.6% [libx264 @ 0x8cbd8b0]mb P I16..4: 57.5% 0.0% 0.0% P16..4: 40.2% 0.0% 0.0% 0.0% 0.0% skip: 2.3% [libx264 @ 0x8cbd8b0]final ratefactor: 62.05 [libx264 @ 0x8cbd8b0]coded y,uvDC,uvAC intra: 79.7% 92.2% 68.4% inter: 62.4% 87.5% 48.0% [libx264 @ 0x8cbd8b0]i16 v,h,dc,p: 23% 17% 41% 19% [libx264 @ 0x8cbd8b0]i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 30% 24% 26% 2% 5% 3% 3% 3% 4% [libx264 @ 0x8cbd8b0]i8c dc,h,v,p: 53% 20% 23% 4% [libx264 @ 0x8cbd8b0]ref P L0: 63.0% 37.0% [libx264 @ 0x8cbd8b0]kb/s:-0.00 total frames encoded: 0 total audio frames encoded: 0 IO thread finished...OK IO Thread finished enabling controls Cap Video toggled: 1 (/home/sijar/Videos/Webcam) 25379744K bytes free on a total of 39908968K (used: 36 %) treshold=51200K using audio codec: 0x0055 Audio frame size is 1152 samples for selected codec IO thread started...OK [libx264 @ 0x8cfba20]using cpu capabilities: MMX2 SSE2 Cache64 [libx264 @ 0x8cfba20]profile Baseline, level 3.0 [libx264 @ 0x8cfba20]non-strictly-monotonic PTS shift sound by -236 ms shift sound by -236 ms shift sound by -236 ms (/home/sijar/Videos/Webcam) 25377044K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25373408K bytes free on a total of 39908968K (used: 36 %) treshold=51200K AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data ... repeats a couple of times ... (/home/sijar/Videos/Webcam) 25370696K bytes free on a total of 39908968K (used: 36 %) treshold=51200K AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data ... repeats a couple of times ... (/home/sijar/Videos/Webcam) 25367680K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25364052K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25360312K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25356628K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25352908K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25349316K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25345552K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25341828K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25338092K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25334412K bytes free on a total of 39908968K (used: 36 %) treshold=51200K Cap Video toggled: 0 Shuting Down IO Thread stop= 4708817235000 start=4578624714000 VIDEO: 1604 frames in 130192.000000 ms = 12.320265 fps Stoping audio stream Closing audio stream... 
close avi Last message repeated 1603 times [libx264 @ 0x8cfba20]frame I:16 Avg QP:14.78 size: 42627 [libx264 @ 0x8cfba20]frame P:1547 Avg QP:16.44 size: 28599 [libx264 @ 0x8cfba20]mb I I16..4: 21.6% 0.0% 78.4% [libx264 @ 0x8cfba20]mb P I16..4: 28.1% 0.0% 0.0% P16..4: 70.5% 0.0% 0.0% 0.0% 0.0% skip: 1.4% [libx264 @ 0x8cfba20]final ratefactor: 88.17 [libx264 @ 0x8cfba20]coded y,uvDC,uvAC intra: 74.4% 95.8% 83.2% inter: 75.2% 94.6% 69.2% [libx264 @ 0x8cfba20]i16 v,h,dc,p: 27% 17% 40% 16% [libx264 @ 0x8cfba20]i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 25% 25% 21% 3% 6% 4% 5% 4% 7% [libx264 @ 0x8cfba20]i8c dc,h,v,p: 61% 18% 18% 4% [libx264 @ 0x8cfba20]ref P L0: 64.0% 36.0% [libx264 @ 0x8cfba20]kb/s:-0.00 total frames encoded: 0 total audio frames encoded: 0 IO thread finished...OK IO Thread finished enabling controls Shuting Down Thread Thread terminated... cleaning Thread allocations: 100% SDL Quit Video Thread finished write /home/sijar/.guvcviewrc OK free audio mutex closed v4l2 strutures free controls free controls - vidState cleaned allocations - 100% Closing portaudio ...OK Closing GTK... OK

    Read the article

  • Google+ Hangouts API v1.2

    We just launched v1.2 of the Hangouts API. Join Jonathan Beri and Jenny Murphy as they discuss the improvements and new features included in this release. After that, they'll answer your questions about the Hangouts API.

    0:44 - Introductions
    2:04 - What's new in Hangouts API v1.2 - developers.google.com
    7:39 - Why can't I use the same URL for multiple ImageResources?
    12:20 - The YouTube live ID in the Hangouts API
    13:59 - Does onYouTubeLiveIdReady fire when new participants join?
    15:10 - Can the 18+ flag be exposed in the Hangouts API?
    15:50 - Can I use the share button or +1 button to target my Hangout App?
    18:20 - When will Google+ pages be able to launch apps in their hangouts?
    19:00 - Allen has been using the history API to log use of his Hangout Apps.
    19:51 - Will this hangout be archived? - Google+ Platform YouTube playlist: www.youtube.com
    20:20 - Is there a way for a user to remove a plugin from their hangout?
    21:44 - Why is the self view in hangouts mirrored?
    23:45 - Can hangouts support multiple cameras and control them via the API? Can it take snapshots?
    26:37 - It would be really cool if the hangout button could specify the invitation list. - Google+ issue tracker: code.google.com
    28:40 - Can the REST API expose hangout metadata?

    From: GoogleDevelopers | Views: 1350 | 43 ratings | Time: 31:35 | More in Science & Technology

    Read the article

  • pfsense multi-site VPN VOIP deployment

    - by sysconfig
    We have the main office pfSense firewall configured with these local networks:

    - WAN - internet
    - LAN - local network
    - VOIP - IP phones

    We need to connect remote offices (multi-user) and single remote users (from home), using IPSEC or OpenVPN to build "permanent", automatically connecting tunnels from each remote location to the main location. In the remote locations, the network will look like this:

    - WAN - internet
    - LAN - local network, multiple users
    - VOIP - multiple IP phones

    In order for the IP phones to work, they have to be able to "see" the VOIP network and the VOIP server back at the main office. For single remote users (like from home) the setup will be similar, but with only one phone and one computer. So, questions:

    - What is the best way to tie the networks together: IPSEC or OpenVPN?
    - Can this be set up to connect automatically?
    - Any issues/suggestions with that design/topology?
    - Any QoS concerns or issues with running the VOIP traffic over a VPN (throughput, quality, etc.)? Obviously this depends on the remote locations' connections to some degree.

    Read the article

< Previous Page | 418 419 420 421 422 423 424 425 426 427 428 429  | Next Page >