Search Results

Search found 34195 results on 1368 pages for 'try'.


  • Remove a Digital Camera’s IR Filter for IR Photography on the Cheap

    - by Jason Fitzpatrick
    Whether you have a DSLR or a point-and-shoot, this simple hack allows you to shoot awesome IR photographs without the expense of a high-quality IR filter (or the accompanying loss of light that comes with using it). How does it work? You’ll need to take apart your camera and remove a single fragile layer of IR-blocking glass from the CCD inside the camera body. After doing so, you’ll have a camera that sees infrared light by default, no special add-on filters necessary. Because it sees the IR light without the filters, you’ll also skip the light loss that comes with an add-on IR filter. The downside? You’re altering the camera in a permanent and warranty-voiding way. This is most definitely not a hack for your brand new $2,000 DSLR, but it is a really fun hack to try out on an old point-and-shoot camera or your circa-2004 depreciated DSLR. Hit up the link below to see the process performed on an old Canon point-and-shoot; we’d strongly recommend searching for a teardown guide for your specific camera model before attempting the trick on your own gear. Are You Brave Enough to IR-ize Your Camera [DIY Photography]

    Read the article

  • Data Profiling without SSIS

    Strangely enough for a predominantly SSIS blog, this post is all about how to perform data profiling without using SSIS. Whilst the Data Profiling Task is a worthy addition, there are a couple of limitations I’ve encountered of late. The first is that it requires SQL Server 2008, and not everyone is there yet. The second is that it can only target SQL Server 2005 and above. What about older systems, which are the ones that we probably need to investigate the most, or other vendor databases such as Oracle?

    With these limitations in mind I did some searching to find a quick and easy alternative to help me perform some data profiling for a project I was working on recently. I only had SQL Server 2005 available, most of my target source systems were Oracle, and of course I had short timescales. I looked at several options. Some never got beyond the download stage, they failed to install or just did not run, and others provided less than I could have produced myself by spending two minutes writing some basic SQL queries (a sketch of such queries appears at the end of this post). In the end I settled on an open source product called DataCleaner. To quote from their website:

    DataCleaner is an Open Source application for profiling, validating and comparing data. These activities help you administer and monitor your data quality in order to ensure that your data is useful and applicable to your business situation. DataCleaner is the free alternative to software for master data management (MDM) methodologies, data warehousing (DW) projects, statistical research, preparation for extract-transform-load (ETL) activities and more. DataCleaner is developed in Java and licensed under LGPL.

    As quoted above it claims to support profiling, validating and comparing data, but I didn’t really get past the profiling functions, so won’t comment on the other two. The profiling, whilst not perfect, certainly saved some time compared to the limited alternatives. The ability to profile heterogeneous data sources is a big advantage over the SSIS option, and I found it overall quite easy to use, and performance was good. I could see it struggling at times, but for what it does I was impressed. It had some data type niggles with Oracle, and some metrics seem a little strange, although thankfully they were easy to augment with some SQL queries to ensure a consistent picture. The report export options didn’t do it for me, but copy and paste with a bit of Excel magic was sufficient.

    One initial point for me personally is that I have had limited exposure to things of the Java persuasion, and whilst I normally get by fine, sometimes the simplest things can throw me. For example installing a JDBC driver: why do I have to copy files to make it all work, has nobody ever heard of an MSI? In case there are other people out there like me who have become totally indoctrinated with the Microsoft software paradigm, I’ve written a quick start guide that details every step required. Steps 1 to 5 are the key ones; the rest is really an excuse for some screenshots to show you the tool.

    Quick Start Guide

    Step 1 – Download DataCleaner. Choose the Microsoft Windows zipped exe option; I chose the latest stable build, currently DataCleaner 1.5.3 (final). Extract the files to a suitable location.

    Step 2 – Download Java. If you try and run datacleaner.exe without Java it will warn you, and then open your default browser and take you to the Java download site. Follow the installation instructions from there; normally just click Download Java a couple of times and you’re done.
    Step 3 – Download the Microsoft SQL Server JDBC Driver. You may have SQL Server installed, but you won’t have a JDBC driver. Version 3.0 is the latest as of April 2010. There is no real installer, we are in the Java world here, but run the exe you downloaded to extract the files. The default Unzip to folder is not much help, so use a fully qualified path such as C:\Program Files\Microsoft SQL Server JDBC Driver 3.0\ to ensure you can find the files afterwards.

    Step 4 – If you wish to use Windows Authentication to connect to your SQL Server, we first need to copy a file so that DataCleaner can find it. Browse to the JDBC extract location from Step 3 and drill down to the file sqljdbc_auth.dll. You will have to choose the correct directory for your processor architecture, e.g. C:\Program Files\Microsoft SQL Server JDBC Driver 3.0\sqljdbc_3.0\enu\auth\x86\sqljdbc_auth.dll. Now copy this file to the DataCleaner extract folder you chose in Step 1. An alternative method is to edit datacleaner.cmd in the DataCleaner extract folder as detailed in this DataCleaner wiki topic, but I find copying the file simpler.

    Step 5 – Now let’s run DataCleaner: just run datacleaner.exe from the extract folder you chose in Step 1.

    Step 6 – Complete or skip the registration screen, and ignore the task window for now. In the main window click Settings.

    Step 7 – In the Settings dialog, select the Database drivers tab, then click Register database driver and select the Local JAR file option.

    Step 8 – Browse to the JDBC driver extract location from Step 3 and drill down to select sqljdbc4.jar, e.g. C:\Program Files\Microsoft SQL Server JDBC Driver 3.0\sqljdbc_3.0\enu\sqljdbc4.jar

    Step 9 – Select the Database driver class as com.microsoft.sqlserver.jdbc.SQLServerDriver, and then click the Test and Save database driver button.

    Step 10 – You should be back at the Settings dialog, with the list of drivers now including SQL Server. Just click Save Settings to persist all your hard work.

    Step 11 – Now we can start to profile some data. In the main DataCleaner window click New Task, and then Profile from the task window.

    Step 12 – In the Profile window click Open Database.

    Step 13 – Now choose the SQL Server connection string option. Selecting a connection string gives us a template like jdbc:sqlserver://<hostname>:1433;databaseName=<database>, but obviously it requires some details to be entered, for example jdbc:sqlserver://localhost:1433;databaseName=SQLBits. This will connect to the database called SQLBits on my local machine. The port may also have to be changed, for example when you have multiple instances of SQL Server running. If using SQL Server Authentication, enter a username and password as required and then click Connect to database. You can use Windows Authentication instead: just add integratedSecurity=true to the end of your connection string, e.g. jdbc:sqlserver://localhost:1433;databaseName=SQLBits;integratedSecurity=true. If you didn’t complete Step 4 above you will need to do so now and restart DataCleaner before it will work.

    Manually setting the connection string is fine, but creating a named connection makes more sense if you will be spending any length of time profiling a specific database. At the bottom of the dialog (highlighted in the original post’s screenshot) there are partial instructions on how to create named connections. In the folder shown, C:\Users\<Username>\.datacleaner\1.5.3, open the datacleaner-config.xml file in your editor of choice and add your own details.
    You’ll see a sample connection in the file already; just add yours following the same pattern, e.g.

        <!-- Darren's Named Connections -->
        <bean class="dk.eobjects.datacleaner.gui.model.NamedConnection">
            <property name="name" value="SQLBits Local Connection" />
            <property name="driverClass" value="com.microsoft.sqlserver.jdbc.SQLServerDriver" />
            <property name="connectionString" value="jdbc:sqlserver://localhost:1433;databaseName=SQLBits;integratedSecurity=true" />
            <property name="tableTypes">
                <list>
                    <value>TABLE</value>
                    <value>VIEW</value>
                </list>
            </property>
        </bean>

    Step 14 – Once back at the Profile window, you should now see your schemas, tables and/or views listed down the left-hand side. Browse this tree and double-click a table to select it for profiling. You can then click Add profile, and choose some profiling options, before finally clicking Run profiling. The original post shows a sample output for three of the most common profiles.

    I hope this has given you a taster for DataCleaner, and should help you get up and running pretty quickly.
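
    For comparison, here is the sort of “two minutes of basic SQL” profiling I alluded to above, sketched against a hypothetical dbo.Customer table and Email column (T-SQL shown; the Oracle equivalents are near-identical, with LENGTH in place of LEN):

        -- Completeness, cardinality and length profile for one column
        SELECT COUNT(*)                                       AS TotalRows,
               SUM(CASE WHEN Email IS NULL THEN 1 ELSE 0 END) AS NullCount,
               COUNT(DISTINCT Email)                          AS DistinctValues,
               MIN(LEN(Email))                                AS MinLength,
               MAX(LEN(Email))                                AS MaxLength
        FROM   dbo.Customer;

        -- Value distribution, most frequent values first
        SELECT   Email, COUNT(*) AS Occurrences
        FROM     dbo.Customer
        GROUP BY Email
        ORDER BY Occurrences DESC;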

    Read the article

  • What's your experience with female programmers?

    - by Rachel
    Let me start by saying I'm female, but every single other female programmer I've known has been pretty terrible. The extent of their knowledge seems to be copy/paste and modify some values. Quite often they don't even try to learn new concepts, or understand what they're doing. I'm not saying good female programmers aren't out there, just that the ratio of good/bad programmers seems much worse than for males. Perhaps it's because everyone feels they have to give female programmers a chance to prove they are not biased? Or is this just me?? What have your experiences been with them?

    UPDATE: Just want to say thanks for all the responses. I've learned some interesting things and am happy to know that female programmers have such support :) My experience has been very limited with them but all bad, and I agree that it is probably due to my small sample size (around 5). I wasn't trying to be sexist with such a question, I just wanted to find out if it was really that abnormal to be a female programmer. I'm abnormal about a lot of things you'd expect from a female... I play video games in most of my spare time, I liked math so much I completed my entire math book during Christmas break one year (what can I say, I found the subject interesting), I'm not very social, I dislike shopping, I only have 2 pairs of shoes, my significant other doesn't work but does all the housework/laundry/etc... but anyways, thanks :)

    Read the article

  • How do I fix "bzr: ERROR: Unable to determine your name. "?

    - by Daniel
    I am trying to quickly create my first app and am getting GTK errors when I try to run or create an application. Here is a copy of what I executed and what results I got:

        daniel@laptop:~/PyDevelopment$ quickly create ubuntu-application app001
        Creating project directory app001
        Creating bzr repository and committing
        Launching your newly created project!
        /usr/lib/python2.7/dist-packages/gi/overrides/Gtk.py:391: Warning: g_object_set_property: construct property "type" for object `Window' can't be set after construction
          Gtk.Window.__init__(self, type=type, **kwds)
        /usr/lib/python2.7/dist-packages/gi/overrides/Gtk.py:391: Warning: g_object_set_property: construct property "type" for object `App001Window' can't be set after construction
          Gtk.Window.__init__(self, type=type, **kwds)
        Congrats, your new project is setup!
        cd /home/daniel/PyDevelopment/app001/ to start hacking.
        daniel@laptop:~/PyDevelopment$ cd app001
        daniel@laptop:~/PyDevelopment/app001$ quickly design
        daniel@laptop:~/PyDevelopment/app001$ quickly rub
        ERROR: No rub command found in template ubuntu-application. Candidate commands are: add, commands, configure, create, debug, design, edit, getstarted, help, license, package, quickly, release, run, save, share, submitubuntu, test, tutorial, upgrade
        daniel@laptop:~/PyDevelopment/app001$ quickly run
        /usr/lib/python2.7/dist-packages/gi/overrides/Gtk.py:391: Warning: g_object_set_property: construct property "type" for object `Window' can't be set after construction
          Gtk.Window.__init__(self, type=type, **kwds)
        /usr/lib/python2.7/dist-packages/gi/overrides/Gtk.py:391: Warning: g_object_set_property: construct property "type" for object `App001Window' can't be set after construction
          Gtk.Window.__init__(self, type=type, **kwds)
        daniel@laptop:~/PyDevelopment/app001$ quickly package
        .......Ubuntu packaging created in debian/
        .......
        ----------------------------------
        Command returned some ERRORS:
        ----------------------------------
        bzr: ERROR: Unable to determine your name.
        ----------------------------------
        ERROR: can't create or update ubuntu package
        ERROR: package command failed
        Aborting
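
    (Not an answer from the original thread, but for context: this particular bzr error usually just means Bazaar has no identity configured, and the standard remedy is to set one before running quickly package, e.g. bzr whoami "Daniel <daniel@example.com>" - the name and address here are placeholders.)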

    Read the article

  • puppet rspec no such file to load -- rspec-puppet (LoadError)

    - by Vorsprung
    I have no prior experience at all of Ruby. I am not interested in Ruby as such (and so have no knowledge of Rails etc.) but am using Puppet to manage a group of servers. I have written some modules and the rspec-puppet system looks like it would be very useful. However, I cannot get rspec-puppet to work.

    I am using Ubuntu LTS 10.04 and installed rspec-puppet using the directions on their web page. What I actually did:

        apt-get install rubygems # (installs 1.8)
        gem install rspec-expectations
        gem install rspec-puppet

    I also installed librspec-ruby1.8. Then I ran rspec-puppet-init in a puppet module directory I'd already made (it's a working puppet module). I made a file as defined in the tutorial:

        $ more spec/defines/rule_spec.rb
        require 'spec_helper'
        describe 'vanusers::rule' do
          let(:title) { 't1' }
          it { should contain_class('vanusers::JamieA') }
        end

    but when I try and run it there is a mysterious dependency issue:

        $ spec spec/defines/rule_spec.rb
        /home/jamie/git/puppet/modules/vanusers/spec/spec_helper.rb:1:in `require': no such file to load -- rspec-puppet (LoadError)
            from /home/jamie/git/puppet/modules/vanusers/spec/spec_helper.rb:1
            from ./spec/defines/rule_spec.rb:1:in `require'
            from ./spec/defines/rule_spec.rb:1
            from /usr/lib/ruby/1.8/spec/runner/example_group_runner.rb:15:in `load'
            from /usr/lib/ruby/1.8/spec/runner/example_group_runner.rb:15:in `load_files'
            from /usr/lib/ruby/1.8/spec/runner/example_group_runner.rb:14:in `each'
            from /usr/lib/ruby/1.8/spec/runner/example_group_runner.rb:14:in `load_files'
            from /usr/lib/ruby/1.8/spec/runner/options.rb:132:in `run_examples'
            from /usr/lib/ruby/1.8/spec/runner/command_line.rb:9:in `run'
            from /usr/bin/spec:3

    Here is the solution I came up with in the end:

        apt-get install rubygems
        gem install rspec-expectations rspec-puppet puppet-lint puppetlabs_spec_helper
        # so your path picks up the gem stuff
        export PATH=/var/lib/gems/1.8/bin:$PATH

    cd into the module and rm spec/spec_helper.rb, then run rspec-puppet-init and replace the Rakefile contents with:

        require 'rake'
        require 'rspec/core/rake_task'
        require 'puppetlabs_spec_helper/rake_tasks'

    Then "rake spec" to run the tests or "rake lint" to check files. http://sysadvent.blogspot.co.uk/2013/12/day-22-getting-started-testing-your.html was an excellent source of info.

    Read the article

  • What's brewing in the world of Java? (Dec 22nd 2010)

    - by Jacob Lehrbaum
    The nights are getting darker, the email traffic seems to be getting lighter, and the holiday season feels like it’s right around the corner - but the world of Java is still as active as ever and shows no signs of taking a break! Let’s take a look at everything that has been brewing over the past couple of weeks:

    Product Updates and Resources
    JCP Approves JSRs for Java SE 7, Java SE 8, Project Coin and Lambda (read more)
    Java SE 6 Update 23 released, delivering improved performance and enhanced support for right-to-left languages. (read more or download)
    New Tutorial: JDK 7 Support in NetBeans IDE 7.0
    Java EE 6 and GlassFish 3.0 have celebrated their respective one-year anniversaries! (read more) So naturally, it’s time to start talking about Java EE 7 (read more)

    Webcasts
    On Demand: Developing Rich Clients for the Enterprise with the JavaFX Composer, Part 1
    Coming soon: Smarter Devices with Oracle’s Embedded Java Solutions

    Podcasts
    Java Spotlight Podcast Episode 7: Interview with Adam Messinger, Vice President of Java Development, on JavaOne Brazil, Java SE development, OpenJDK, JavaFX 2.0 and more!
    The NetBeans team released Episode 53 of the NetBeans Podcast series on December 3rd, marking the first episode in nearly 12 months. Sign of things to come?

    Community and Events
    JavaOne was held for the first time in Brazil this year, and by all accounts it was a great success! Read more about this exciting first in the following posts from Tori Wieldt (JavaOne Latin America Underway) and Janice Heiss (JavaOne in Brazil)
    JavaOne was also held in Beijing for the first time last week and was also a huge success. We’ll try to include coverage of this event in the near future.

    Articles and Interviews
    An update on JavaServer Faces with Oracle’s Ed Burns (read more)
    Interview with Java Champion Matjaz B. Juric on Cloud Computing, SOA, and Java EE 6 (read more)
    The 2010 JavaOne Java EE 6 Panel: Where We Are and Where We’re Going (read more)

    Oracle Magazine
    The latest issue of Oracle Magazine is up and, in what will hopefully be a sign of the future, it includes a number of columns and articles on Java. First is an editorial from Editor-in-Chief Tom Haunert, who shares some insight into the long-standing relationship that Oracle has had with Java. Next up is Oracle Technology Network Chief Justin Kestelyn’s Community Bulletin entitled: Java Evolves. And finally, Java Champion Adam Bien’s feature on Java EE 6: Simplicity by Design.

    Enjoy!

    Read the article

  • 2D grid with multiple types of objects

    - by Alexandre P. Levasseur
    This is my first post here on Programmers.StackExchange (I'm a regular on SO). I hope this isn't too general. I'm trying a simple project to learn Java, based on something I've seen done in the past. Basically, it's an AI simulation where there are herbivorous and carnivorous creatures and both must try to survive. The part I am trying to come up with is the board itself. Let's assume very simple rules. The board must be of size X by Y and only one element can be in one place at one time. For example, a critter cannot be in the same tile as a food block. There can be obstacles (rocks, trees..), there can be food, there can be critters of any type. Assuming these rules, what would be one good way to represent this situation? This is what I came up with and want suggestions if possible: use multiple levels of inheritance to represent all the different possible objects (AbstractObject - (NonMovingObject - (Food, Obstacle), MovingObject - Critter - (Carnivorous, Herbivorous))) and use polymorphism in a 2D array to store the instances and still have access to lower-level methods; a rough sketch of this idea appears below. Many thanks. Edit: the original post includes a graphic representation of the structure I have in mind.
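
    A rough Java sketch of that hierarchy and board (class names match the question; the stubbed-out behaviour and method names are purely illustrative):

        // Root of the hierarchy: everything that can occupy a tile
        abstract class AbstractObject { }

        abstract class NonMovingObject extends AbstractObject { }
        class Food extends NonMovingObject { }
        class Obstacle extends NonMovingObject { }

        abstract class MovingObject extends AbstractObject { }
        abstract class Critter extends MovingObject {
            // each critter type decides what to do on its turn
            abstract void act(Board board);
        }
        class Carnivorous extends Critter {
            @Override void act(Board board) { /* hunt herbivores */ }
        }
        class Herbivorous extends Critter {
            @Override void act(Board board) { /* find food, avoid carnivores */ }
        }

        class Board {
            private final AbstractObject[][] grid; // null = empty tile

            Board(int width, int height) {
                grid = new AbstractObject[width][height];
            }

            // enforces the one-element-per-tile rule
            boolean place(AbstractObject obj, int x, int y) {
                if (grid[x][y] != null) return false;
                grid[x][y] = obj;
                return true;
            }

            AbstractObject at(int x, int y) {
                return grid[x][y];
            }
        }

    The board stores everything as AbstractObject, so a single 2D array holds food, obstacles and critters alike; the subclass behaviour comes back through the polymorphic act method (or instanceof checks/visitor methods when a critter needs to react to what is on a neighbouring tile).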

    Read the article

  • Twitter Feeds in Umbraco using XSLT

    - by Vizioz Limited
    There are currently two packages tagged on the Umbraco forum that can be used to add a Twitter feed to your website. I was playing around with "Twitter for Umbraco" by Warren Buckley and noticed a bug in the way it converted Twitter @names to links, so I thought I would try and solve this using XSLT. It may also be useful for those of you using Darren Ferguson's "Feed Cache" package, as the demo on Darren's site does not add links to the tweets.

    To use this XSLT you simply call the XSLT template, passing in your Twitter message:

        <xsl:call-template name="formaturl">
            <xsl:with-param name="twitterfeed" select="text"/>
        </xsl:call-template>

    Then add the XSLT template to your XSLT macro (outside of the main template):

        <xsl:template name="formaturl">
            <xsl:param name="twitterfeed"/>
            <xsl:variable name="transform-http" select="Exslt.ExsltRegularExpressions:replace($twitterfeed, '(http\:\/\/\S+)', 'ig', '<a href="$1">$1</a>')"/>
            <xsl:variable name="transform-https" select="Exslt.ExsltRegularExpressions:replace($transform-http, '(https\:\/\/\S+)', 'ig', '<a href="$1">$1</a>')"/>
            <xsl:variable name="transform-AT" select="Exslt.ExsltRegularExpressions:replace($transform-https, '(^|\s)@(\w+)', 'ig', ' <a href="http://www.twitter.com/$2">@$2</a>')"/>
            <xsl:variable name="transform-HASH" select="Exslt.ExsltRegularExpressions:replace($transform-AT, '(^|\s)#(\w+)', 'ig', ' <a href="http://www.twitter.com/search?q=$2">#$2</a>')"/>
            <xsl:value-of select="$transform-HASH" disable-output-escaping="yes"/>
        </xsl:template>

    You should find that this now replaces all the @names, #names and URLs with links!

    Read the article

  • Passing multiple simple POST Values to ASP.NET Web API

    - by Rick Strahl
    A few weeks back I posted a blog post about what does and doesn't work with ASP.NET Web API when it comes to POSTing data to a Web API controller. One of the features that doesn't work out of the box - somewhat unexpectedly - is the ability to map POST form variables to simple parameters of a Web API method. For example, imagine you have this form and you want to post this data to a Web API endpoint via AJAX:

        <form>
            Name: <input type="text" name="name" value="Rick" />
            Value: <input type="text" name="value" value="12" />
            Entered: <input type="text" name="entered" value="12/01/2011" />
            <input type="button" id="btnSend" value="Send" />
        </form>
        <script type="text/javascript">
            $("#btnSend").click( function() {
                $.post("samples/PostMultipleSimpleValues?action=kazam",
                    $("form").serialize(),
                    function (result) { alert(result); });
            });
        </script>

    or you might do this more explicitly by creating a simple client map and specifying the POST values directly by hand:

        $.post("samples/PostMultipleSimpleValues?action=kazam",
            { name: "Rick", value: 1, entered: "12/01/2012" },
            function (result) { alert(result); });

    On the wire this generates a simple POST request with URL-encoded values in the content:

        POST /AspNetWebApi/samples/PostMultipleSimpleValues?action=kazam HTTP/1.1
        Host: localhost
        User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1
        Accept: application/json
        Connection: keep-alive
        Content-Type: application/x-www-form-urlencoded; charset=UTF-8
        X-Requested-With: XMLHttpRequest
        Referer: http://localhost/AspNetWebApi/FormPostTest.html
        Content-Length: 41
        Pragma: no-cache
        Cache-Control: no-cache

        name=Rick&value=12&entered=12%2F10%2F2011

    Seems simple enough, right? We are basically posting 3 form variables and 1 query string value to the server. Unfortunately Web API can't handle this request out of the box. If I create a method like this:

        [HttpPost]
        public string PostMultipleSimpleValues(string name, int value, DateTime entered, string action = null)
        {
            return string.Format("Name: {0}, Value: {1}, Date: {2}, Action: {3}", name, value, entered, action);
        }

    You'll find that you get an HTTP 404 error and:

        { "Message": "No HTTP resource was found that matches the request URI…"}

    Yes, it's possible to pass multiple POST parameters of course, but Web API expects you to use model binding for this - mapping the POST parameters to a strongly typed .NET object, not to single parameters. Alternately you can also accept a FormDataCollection parameter on your API method to get a name/value collection of all POSTed values. If you're using JSON only, using the dynamic JObject/JValue objects might also work.

    ModelBinding is fine in many use cases, but can quickly become overkill if you only need to pass a couple of simple parameters to many methods. Especially in applications with many, many AJAX callbacks, the 'parameter mapping type' per method signature can lead to serious class pollution in a project very quickly. Simple POST variables are also commonly used in AJAX applications to pass data to the server, even in many complex public APIs. So this is not an uncommon use case, and - maybe more to the point - a behavior that I would have expected Web API to support natively. The question "Why aren't my POST parameters mapping to Web API method parameters" is already a frequent one… So this is something that I think is fairly important, but unfortunately missing in the base Web API installation.
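
    As an aside, the FormDataCollection alternative mentioned above looks roughly like this (a sketch, reusing the form fields from the example; FormDataCollection lives in System.Net.Http.Formatting):

        [HttpPost]
        public string PostFormData(FormDataCollection form)
        {
            // pull individual POSTed values out of the collection by name - always as strings
            string name = form.Get("name");
            int value = int.Parse(form.Get("value"));
            return string.Format("Name: {0}, Value: {1}", name, value);
        }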
    Creating a Custom Parameter Binder

    Luckily Web API is greatly extensible and there's a way to create a custom parameter binding to provide this functionality! Although this solution took me a long while to find, and then only with the help of some folks at Microsoft (thanks Hong Mei!!!), it's not difficult to hook up in your own projects. It requires one small class and a GlobalConfiguration hookup. Web API parameter bindings allow you to intercept processing of individual parameters - they deal with mapping parameters to the signature as well as converting the parameters to the actual values that are returned. Here's the implementation of the SimplePostVariableParameterBinding class:

        public class SimplePostVariableParameterBinding : HttpParameterBinding
        {
            private const string MultipleBodyParameters = "MultipleBodyParameters";

            public SimplePostVariableParameterBinding(HttpParameterDescriptor descriptor)
                : base(descriptor)
            { }

            /// <summary>
            /// Check for simple binding parameters in POST data. Bind POST
            /// data as well as query string data
            /// </summary>
            public override Task ExecuteBindingAsync(ModelMetadataProvider metadataProvider,
                                                     HttpActionContext actionContext,
                                                     CancellationToken cancellationToken)
            {
                // Body can only be read once, so read and cache it
                NameValueCollection col = TryReadBody(actionContext.Request);

                string stringValue = null;
                if (col != null)
                    stringValue = col[Descriptor.ParameterName];

                // try reading query string if we have no POST/PUT match
                if (stringValue == null)
                {
                    var query = actionContext.Request.GetQueryNameValuePairs();
                    if (query != null)
                    {
                        var matches = query.Where(kv => kv.Key.ToLower() == Descriptor.ParameterName.ToLower());
                        if (matches.Count() > 0)
                            stringValue = matches.First().Value;
                    }
                }

                object value = StringToType(stringValue);

                // Set the binding result here
                SetValue(actionContext, value);

                // now, we can return a completed task with no result
                TaskCompletionSource<AsyncVoid> tcs = new TaskCompletionSource<AsyncVoid>();
                tcs.SetResult(default(AsyncVoid));
                return tcs.Task;
            }

            private object StringToType(string stringValue)
            {
                object value = null;

                if (stringValue == null)
                    value = null;
                else if (Descriptor.ParameterType == typeof(string))
                    value = stringValue;
                else if (Descriptor.ParameterType == typeof(int))
                    value = int.Parse(stringValue, CultureInfo.CurrentCulture);
                else if (Descriptor.ParameterType == typeof(Int32))
                    value = Int32.Parse(stringValue, CultureInfo.CurrentCulture);
                else if (Descriptor.ParameterType == typeof(Int64))
                    value = Int64.Parse(stringValue, CultureInfo.CurrentCulture);
                else if (Descriptor.ParameterType == typeof(decimal))
                    value = decimal.Parse(stringValue, CultureInfo.CurrentCulture);
                else if (Descriptor.ParameterType == typeof(double))
                    value = double.Parse(stringValue, CultureInfo.CurrentCulture);
                else if (Descriptor.ParameterType == typeof(DateTime))
                    value = DateTime.Parse(stringValue, CultureInfo.CurrentCulture);
                else if (Descriptor.ParameterType == typeof(bool))
                {
                    value = false;
                    if (stringValue == "true" || stringValue == "on" || stringValue == "1")
                        value = true;
                }
                else
                    value = stringValue;

                return value;
            }

            /// <summary>
            /// Read and cache the request body
            /// </summary>
            /// <param name="request"></param>
            /// <returns></returns>
            private NameValueCollection TryReadBody(HttpRequestMessage request)
            {
                object result = null;

                // try to read out of cache first
                if (!request.Properties.TryGetValue(MultipleBodyParameters, out result))
                {
                    // parses a string like firstname=Hongmei&lastname=Ge
                    result = request.Content.ReadAsFormDataAsync().Result;
                    request.Properties.Add(MultipleBodyParameters, result);
                }

                return result as NameValueCollection;
            }

            private struct AsyncVoid
            { }
        }

    The ExecuteBindingAsync method is fired for each parameter that is mapped and sent for conversion. This custom binding is fired only if the incoming parameter is a simple type (that gets defined later when I hook up the binding), so this binding never fires on complex types or if the first type is not a simple type. For the first parameter of a request the binding first reads the request body into a NameValueCollection and caches that in the request.Properties collection. The request body can only be read once, so the first parameter request reads it and then caches it. Subsequent parameters then use the cached POST value collection. Once the form collection is available the value of the parameter is read, and the value is translated into the target type requested by the Descriptor. SetValue writes out the value to be mapped.

    Once you have the ParameterBinding in place, the binding has to be assigned. This is done along with all other Web API configuration tasks at application startup in global.asax's Application_Start:

        GlobalConfiguration.Configuration.ParameterBindingRules
            .Insert(0, (HttpParameterDescriptor descriptor) =>
            {
                var supportedMethods = descriptor.ActionDescriptor.SupportedHttpMethods;

                // Only apply this binder on POST and PUT operations
                if (supportedMethods.Contains(HttpMethod.Post) ||
                    supportedMethods.Contains(HttpMethod.Put))
                {
                    var supportedTypes = new Type[] { typeof(string), typeof(int),
                                                      typeof(decimal), typeof(double),
                                                      typeof(bool), typeof(DateTime) };

                    if (supportedTypes.Where(typ => typ == descriptor.ParameterType).Count() > 0)
                        return new SimplePostVariableParameterBinding(descriptor);
                }

                // let the default bindings do their work
                return null;
            });

    The ParameterBindingRules.Insert method takes a delegate that checks which type of requests it should handle. The logic here checks whether the request is POST or PUT and whether the parameter type is a simple type that is supported. Web API calls this delegate once for each method signature it tries to map, and the delegate returns null to indicate it's not handling this parameter, or it returns a new parameter binding instance - in this case the SimplePostVariableParameterBinding. Once the parameter binding and this hook-up code is in place, you can now pass simple POST values to methods with simple parameters. The examples I showed above should now work in addition to the standard bindings.

    Summary

    Clearly this is not easy to discover. I spent quite a bit of time digging through the Web API source trying to figure this out on my own without much luck. It took Hong Mei at Microsoft to provide a base example as I asked around, so I can't take credit for this solution :-). But once you know where to look, Web API is brilliantly extensible and makes it relatively easy to customize the parameter behavior. I'm very stoked that this got resolved - in the last two months I've had two customers with projects that decided not to use Web API in AJAX-heavy SPA applications because this POST variable mapping wasn't available. This might actually change their mind and get them to switch back and take advantage of the many great features in Web API. I too frequently use plain POST variables for communicating with server AJAX handlers, and while I could have worked around this (with untyped JObject or the Form collection mostly), having proper POST to parameter mapping makes things much easier.
    I said this in my last post on POST data and say it again here: I think POST to method parameter mapping should have been shipped in the box with Web API, because without knowing about this limitation the expectation is that simple POST variables map to parameters just like query string values do. I hope Microsoft considers including this type of functionality natively in the next version of Web API, or at least as a built-in HttpParameterBinding that can just be added. This is especially true, since this binding doesn't affect existing bindings.

    Resources
    SimplePostVariableParameterBinding Source on GitHub
    Global.asax hookup source
    Mapping URL Encoded Post Values in ASP.NET Web API

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in Web Api  AJAX

    Read the article

  • SQL SERVER – Fix – Agent Starting Error 15281 – SQL Server blocked access to procedure ‘dbo.sp_get_sqlagent_properties’ of component ‘Agent XPs’ because this component is turned off as part of the security configuration for this server

    - by Pinal Dave
    SQL Server Agent failing to start because of error 15281 is a very common problem. When you try to restart SQL Agent, it sometimes gives the following error:

    SQL Server blocked access to procedure ‘dbo.sp_get_sqlagent_properties’ of component ‘Agent XPs’ because this component is turned off as part of the security configuration for this server. A system administrator can enable the use of ‘Agent XPs’ by using sp_configure. For more information about enabling ‘Agent XPs’, search for ‘Agent XPs’ in SQL Server Books Online. (Microsoft SQL Server, Error: 15281)

    To resolve this error, the following script has to be executed on the server:

        sp_configure 'show advanced options', 1;
        GO
        RECONFIGURE;
        GO
        sp_configure 'Agent XPs', 1;
        GO
        RECONFIGURE
        GO

    When you run the above script, it gives output similar to that shown in the original post. Now, if you try to restart SQL Agent, it will just work fine. That’s it! Sometimes there is a simpler solution to a complicated error.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
    Tagged: SQL Server Agent

    Read the article

  • Free Webinar Featuring Oracle Spatial and MapViewer, Oracle Business Intelligence, and Oracle Utilities

    - by stephen.garth
    Maps, BI and Network Management: Together At Last
    Date: Thursday, January 20 | Time: 11:00 a.m. PST | 2:00 p.m. EST | Duration: 1 hour | Cost: FREE

    For years, utilities have wrestled with the challenge of providing executive management and other decision makers with maps and business intelligence during outages without compromising the performance of their real-time network operations and control systems. Join experts from Directions Media, Oracle and ThinkHuddle in this webinar for a discussion on how Oracle has addressed this challenge by incorporating Oracle Spatial data and the dashboard capabilities of Oracle Business Intelligence into a new application, Oracle Utilities Advanced Spatial Outage Analytics.

    Jim Steiner, Vice President of Spatial Product Management at Oracle, will provide an overview of Oracle's spatial and location technology, including Oracle Spatial 11g and Oracle Fusion Middleware MapViewer, and describe how Oracle is using this technology to spatially enable many of its own enterprise applications. Brad Williams, Vice President of Oracle Utilities, will describe why and how the company developed Oracle Utilities Advanced Spatial Outage Analytics, how it works with Oracle Utilities Network Management System, and how this can deliver improved decision support and operational benefits to utilities. Steve Pierce, Spatial Systems Consultant with ThinkHuddle, will discuss architectural aspects and best practices in the integration of Oracle's spatial and BI technology. Following the presentation, attendees will have an opportunity to engage the panelists in a live Q&A session.

    Who Should Attend
    Executives, decision makers and analysts from IT, customer service, operations, engineering and marketing - especially in utilities, but also any business where location is important. Don't miss this webinar - Register Now.

    Find out more:
    Oracle Spatial on oracle.com
    More technical information on Oracle Technology Network
    Information on Oracle Utilities applications

    Read the article

  • Messing with the Team

    - by Robert May
    Good Product Owners will help the team be the best that they can be. Bad Product Owners will mess with the team and won’t care about the team. If you’re a Product Owner, seek to do good and avoid bad behavior at all costs. Remember, this is for YOUR benefit and you have much power given to you. Use that power wisely.

    Scope Creep
    The Product Owner has several tools at his disposal to inject scope into an iteration. First, the Product Owner can use defects to inject scope. To do this, they’ll tell the team what functionality they want to see in a feature. Then, after the feature is developed, the Product Owner will decide that they don’t really like how the functionality behaves. To change it, rather than creating a new story, they’ll add a defect. The functionality is correct, as designed, but the Product Owner doesn’t like it. By creating the defect, the Product Owner destroys the trust that the team has in the Product Owner. They may not be able to count the story, because the Product Owner changed the story in the iteration, and the team then ends up looking like they have low velocity for something over which they have no control. This is bad. One way to deal with this is to add “Product Owner Time” to the iteration. This will slow the velocity, but then the ScrumMaster can tell stakeholders that this time is strictly in place to deal with bad behavior of the Product Owner.

    Another mechanism often used to inject scope is the concept of directed development. Outside of planning, stand-ups, or any other meeting, the Product Owner will take a developer aside and ask them to complete a task for them. This is bad! The team should be allocating all of their time to development. If the Product Owner asks for a favor, then time that would normally be used for development will be used for a pet project of the Product Owner, and the team will not get credit for this work. Selfish Product Owners do this, and I typically see people who were “managers” exhibit this behavior. Authoritarian command-and-control development environments also see this happen. The best thing that can happen is for the team member to report the issue to the ScrumMaster, and for the ScrumMaster to get very aggressive with management and the Product Owner to try and stop the behavior. This may result in the ScrumMaster being fired, but if the behavior continues, Scrum is doomed. This problem is especially bad in cases where the team member’s direct supervisor is the Product Owner. I don’t recommend that the Product Owner or ScrumMaster have a direct report relationship with team members, since team members need the ability to say no. To work around this issue, team members need to say no. If that fails, team members need to add extra time to the iteration to deal with the scope creep injection and accept the lower velocity.

    As discussed above, another mechanism for injecting scope is changing acceptance tests after the work is complete. This is similar to adding defects to change scope, and is bad. To work around it, add time for Product Owner uncertainty to the iteration and make sure that stakeholders are aware of the need to add this time because of the Product Owner.

    Refusing to Prioritize
    Refusing to prioritize causes chaos for the team. From the team’s perspective, things that are not important will be worked on while things that the team knows are vital will be ignored. A poor Product Owner will often pick the stories for the iteration on a whim.
    This leads to the team working on many different aspects of the product and results in a lower velocity, since each iteration the team must switch context to the new area of development. The team will also experience confusion about priorities. In one iteration, Feature X was the highest priority and had to be done. Then, the following iteration, even though parts of Feature X still need to be completed, no stories to address them will be in the iteration. However, three iterations later, Feature X will again become high priority.

    This will cause the team to not trust the Product Owner, and eventually, they’ll stop caring about the features they implement. They won’t know what is important, so to insulate themselves from the ever-changing chaos, they’ll become apathetic to all features. Team members are some of the most creative people in a company. By losing their engagement, the company is going to have a substandard product, because the passion for the product won’t be in the team.

    Other signs that the Product Owner refuses to prioritize are that no one outside of the Product Owner will be consulted on priorities, and that the product, release, and iteration backlogs will be weak or non-existent.

    Dealing with this issue is not easy. This really isn’t something the team can fix, short of taking over the role of Product Owner themselves. An appeal to the stakeholders might work, but only if the Product Owner isn’t a “manager” themselves. The ScrumMaster needs to protect the team and do what they can to either get the Product Owner to prioritize or have the Product Owner replaced.

    Managing the Team
    A Product Owner that is also the “boss” of team members is a Scrum team that is waiting to fail. If your boss tells you to do something, failing to do that something can cause you to be fired. The team needs the ability to tell the Product Owner NO. If the Product Owner introduces scope creep, the team has a responsibility to tell the Product Owner no. If the Product Owner tries to get the team to commit to more than they can accomplish in an iteration, the team needs the ability to tell the Product Owner no. If the Product Owner is your boss and determines your pay increases, you’re probably not going to ever tell them no, and Scrum will likely fail. The team can’t do much in this situation.

    Another aspect of “managing the team” that often happens is the Product Owner trying to tell the team how to develop the stories that are in the iteration. This is one reason why I recommend that Product Owners are NOT technical people. That way, the team can come up with the tasks that are needed to accomplish the stories and the Product Owner won’t know better. If the Product Owner is technical, the ScrumMaster will need to take great care to protect the team from the Product Owner changing how the team thinks they need to implement the stories.

    Product Owners can also try to manage the team by their body language. If the team says a task is going to take 6 hours to complete, and the Product Owner disagrees, they will use some kind of sour body language to indicate this disagreement. In weak teams, this may cause the team to revise their estimate down, which will result in them taking longer than estimated and may result in them missing the iteration. The ScrumMaster will need to make sure that the Product Owner doesn’t send such messages, and that the team ignores them and estimates what they REALLY think it will take to complete the tasks.
    Forcing the team to deal with such items in the retrospective can be helpful.

    Absenteeism
    The team is completely dependent upon the Product Owner to develop features for the customer. The Product Owner IS the voice of the customer, and without them, the team will lack direction. Being the Product Owner is a full-time job! If the Product Owner cannot dedicate daily time with the team, a different Product Owner should be found. The Product Owner needs to attend every stand-up, planning meeting, showcase, and retrospective that the team has. The team also must be able to have instant communication with the Product Owner. They must not be required to schedule meetings to speak with their Product Owner. The team must be the highest priority task that the Product Owner has.

    The best way to work around an absent Product Owner is to appoint a delegate Product Owner within the team. This person will be responsible for making the decisions that the Product Owner should be making and will act as the liaison to the absent Product Owner. If the delegate Product Owner doesn’t have authority to make decisions for the team, Scrum will fail. If the Product Owner is absent, the ScrumMaster should seek to have that Product Owner replaced by someone who has the time and ability to be a real Product Owner.

    Making it Personal
    Too often Product Owners will become convinced that their ideas are the ones that matter and that anyone who disagrees is making a personal attack on them. Remember that Product Owners will inherently be at odds with many people, simply because they have the need to prioritize. Others will frequently question prioritization because they only see part of the picture that Product Owners face. Product Owners must have a thick skin and keep their egos in check. If they don’t, they tend to make things personal, which causes them to become emotional and take actions that can destroy the trust that team members have in the Product Owner. If a Product Owner is making things personal, the best thing that team members can do is reassure them that it’s not personal, but be firm about doing what is best for the company and for the users. The ScrumMaster should also spend significant time coaching the Product Owner on how not to react emotionally and how to accept criticism without becoming defensive.

    Conclusion
    I’m sure there are other ways that a Product Owner can mess with the team, but these are the most common that I’ve seen. I would encourage all Product Owners to seek to be good Product Owners. If you find yourself behaving in any of the bad Product Owner ways, change your behavior today! Your team will thank you.

    Remember, being Product Owner is very difficult! Product Owner is one of the most difficult roles in Scrum. However, it can also be one of the most rewarding roles in Scrum, since Product Owners literally see their ideas brought to life on the computer screen. Product Owners need to be very patient, even in the face of criticism, and need to be willing to make tough decisions on priority, but then not become offended when others disagree with those decisions. Companies should spend the time needed to find the right Product Owners for their teams. Doing so will only help the company to write better software.

    Technorati Tags: Scrum, Product Owner

    Read the article

  • SQL SERVER – Guest Post – Jonathan Kehayias – Wait Type – Day 16 of 28

    - by pinaldave
    Jonathan Kehayias (Blog | Twitter) is a MCITP Database Administrator and Developer, who got started in SQL Server in 2004 as a database developer and report writer in the natural gas industry. After spending two and a half years working in TSQL, in late 2006, he transitioned to the role of SQL Database Administrator. His primary passion is performance tuning, where he frequently rewrites queries for better performance and performs in-depth analysis of index implementation and usage. Jonathan blogs regularly on SQLBlog, and was a coauthor of Professional SQL Server 2008 Internals and Troubleshooting.

    On a personal note, I think Jonathan is an extremely positive person. In every conversation with him I have found that he is always eager to help and encourage. Every time he finds something that needs to be improved, he has contacted me without hesitation and guided me to improve, change and learn. During all this time, he has not lost his focus on helping the larger community. I am honored that he has accepted to provide his views on the complex subject of Wait Types and Queues. Currently I am reading his series on Extended Events. Here is the guest blog post by Jonathan:

    SQL Server troubleshooting is all about correlating related pieces of information together to identify where exactly the root cause of a problem lies. In my daily work as a DBA, I generally get phone calls like, “So and so application is slow, what’s wrong with the SQL Server.” One of the funny things about the letters DBA is that they go so well with Default Blame Acceptor, and I really wish that I knew exactly who the first person was that pointed that out to me, because it really fits at times. A lot of times when I get this call, the problem isn’t related to SQL Server at all, but every now and then in my initial quick checks, something pops up that makes me start looking at things further.

    The SQL Server is slow: we see a number of tasks waiting on ASYNC_IO_COMPLETION, IO_COMPLETION, or PAGEIOLATCH_* waits in sys.dm_exec_requests and sys.dm_os_waiting_tasks. These are also some of the highest wait types in sys.dm_os_wait_stats for the server, so it would appear that we have a disk I/O bottleneck on the machine. A quick check of sys.dm_io_virtual_file_stats() and tempdb shows a high write stall rate, while our user databases show high read stall rates on the data files. A quick check of some performance counters: Page Life Expectancy on the server is bouncing up and down in the 50-150 range, the Free Pages counter consistently hits zero, and the Free List Stalls/sec counter keeps jumping over 10, but Buffer Cache Hit Ratio is 98-99%. Where exactly is the problem?

    In this case, which happens to be based on a real scenario I faced a few years back, the problem may not be a disk bottleneck at all; it may very well be a memory pressure issue on the server. A quick check of the system specs: it is a dual duo core server with 8GB RAM running SQL Server 2005 SP1 x64 on Windows Server 2003 R2 x64. Max server memory is configured at 6GB and we think that this should be enough to handle the workload; or is it? This is a unique scenario because there are a couple of things happening inside of this system, and they all relate to what the root cause of the performance problem is on the system.
    If we were to query sys.dm_exec_query_stats for the TOP 10 queries by max_physical_reads, max_logical_reads, and max_worker_time, we may be able to find some queries that were using excessive I/O and possibly CPU against the system in their worst single execution. We can also CROSS APPLY to sys.dm_exec_sql_text() and see the statement text, and also CROSS APPLY sys.dm_exec_query_plan() to get the execution plan stored in cache. Ok, quick check: the plans are pretty big, I see some large index seeks that estimate 2.8GB of data movement between operators, but everything looks like it is optimized the best it can be. Nothing really stands out in the code, the indexing looks correct, and I should have enough memory to handle this in cache, so it must be a disk I/O problem, right? Not exactly!

    If we were to look at how much memory the plan cache is taking by querying sys.dm_os_memory_clerks for the CACHESTORE_SQLCP and CACHESTORE_OBJCP clerks, we might be surprised at what we find. In SQL Server 2005 RTM and SP1, the plan cache was allowed to take up to 75% of the memory under 8GB. I’ll give you a second to go back and read that again. Yes, you read it correctly, it says 75% of the memory under 8GB, but you don’t have to take my word for it: you can validate this by reading Changes in Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2.

    In this scenario the application uses an entirely adhoc workload against SQL Server, and this leads to plan cache bloat: up to 4.5GB of our 6GB of memory for SQL can be consumed by the plan cache in SQL Server 2005 SP1. This in turn reduces the size of the buffer cache to just 1.5GB, causing our 2.8GB of data movement in this expensive plan to cause complete flushing of the buffer cache, not just once initially, but then another time during the query’s execution, resulting in excessive physical I/O from disk. Keep in mind that this is not the only query executing at the time this occurs. Remember the output of sys.dm_io_virtual_file_stats() showed high read stalls on the data files for our user databases versus higher write stalls for tempdb? The memory pressure is also forcing heavier use of tempdb to handle sorting and hashing in the environment as well.

    The real clue here is the memory counters for the instance: Page Life Expectancy, Free List Pages, and Free List Stalls/sec. The fact that Page Life Expectancy is fluctuating between 50 and 150 constantly is a sign that the buffer cache is experiencing constant churn of data, once every minute to two and a half minutes. If you add to the Page Life Expectancy counter the consistent bottoming out of Free List Pages, along with Free List Stalls/sec consistently spiking over 10, you have the perfect memory pressure scenario. All of a sudden it may not be that our disk subsystem is the problem; instead it is an innocent bystander and victim.

    Side Note: The Page Life Expectancy counter dropping briefly and then returning to normal operating values intermittently is not necessarily a sign that the server is under memory pressure. The Books Online and a number of other references will tell you that this counter should remain on average above 300, which is the time in seconds a page will remain in cache before being flushed or aged out. This number, which equates to just five minutes, is incredibly low for modern systems, and most published documents pre-date the predominance of 64-bit computing and easy availability of larger amounts of memory in SQL Servers.
    As food for thought, consider that my personal laptop has more memory in it than most SQL Servers did at the time those numbers were posted. I would argue that today, a system churning the buffer cache every five minutes is in need of some serious tuning or a hardware upgrade.

    Back to our problem and its investigation: there are two things really wrong with this server. First, the plan cache is excessively consuming memory and bloated in size, and we need to look at that; second, we need to evaluate upgrading the memory to accommodate the workload being performed. In the case of the server I was working on, there were a lot of single-use plans found in sys.dm_exec_cached_plans (where usecounts=1; a sample query is sketched at the end of this post). Single-use plans waste space in the plan cache, especially when they are adhoc plans for statements that had concatenated filter criteria that is not likely to reoccur with any frequency. SQL Server 2005 doesn’t natively have a way to evict a single plan from cache like SQL Server 2008 does, but MVP Kalen Delaney showed a hack to evict a single plan by creating a plan guide for the statement and then dropping that plan guide, in her blog post Geek City: Clearing a Single Plan from Cache. We could put that hack in place in a job to automate cleaning out all the single-use plans periodically, minimizing the size of the plan cache, but a better solution would be to fix the application so that it uses proper parameterized calls to the database.

    You didn’t write the app, and you can’t change its design? Ok, well you could try to force parameterization to occur by creating and keeping plan guides in place, or we can try forcing parameterization at the database level by using ALTER DATABASE <dbname> SET PARAMETERIZATION FORCED, and that might help. If neither of these helps, we could periodically dump the plan cache for that database, as discussed as being a problem in Kalen’s blog post referenced above; not an ideal scenario.

    The other option is to increase the memory on the server to 16GB or 32GB, if the hardware allows it, which will increase the size of the plan cache as well as the buffer cache. In SQL Server 2005 SP1, on a system with 16GB of memory, if we set max server memory to 14GB the plan cache could use at most 9GB [(8GB*.75)+(6GB*.5)=(6+3)=9GB], leaving 5GB for the buffer cache. If we went to 32GB of memory and set max server memory to 28GB, the plan cache could use at most 16GB [(8*.75)+(20*.5)=(6+10)=16GB], leaving 12GB for the buffer cache. Thankfully we have SQL Server 2005 Service Packs 2, 3, and 4 these days, which include the changes in plan cache sizing discussed in the Changes to Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2 blog post.

    In real life, when I was troubleshooting this problem, I spent a week trying to chase down the cause of the disk I/O bottleneck with our Server Admin and SAN Admin, and there wasn’t much that could be done immediately there, so I finally asked if we could increase the memory on the server to 16GB, which did fix the problem. It wasn’t until I had this same problem occur on another system that I actually figured out how to really troubleshoot this down to the root cause. I couldn’t believe the size of the plan cache on the server with 16GB of memory when I actually learned about this and went back to look at it. SQL Server is constantly telling a story to anyone that will listen.
SQL Server is constantly telling a story to anyone who will listen. As the DBA, you have to sit back and listen to all that it's telling you, and then evaluate the big picture: how all the performance data you can gather from SQL Server relates together. One of the greatest tools out there is actually free, in the form of the Diagnostic Scripts for SQL Server 2005 and 2008 created by MVP Glenn Alan Berry. Glenn's scripts collect a majority of the information that SQL Server has to offer for rapid troubleshooting of problems, and he includes a lot of notes about what the output of each individual query might be telling you. When I read Pinal's blog post SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28, I noticed that he referenced Checking Memory Related Performance Counters in his post, but there was no real explanation of why checking memory counters is so important when looking at an I/O related wait type. I thought I'd chat with him briefly on Google Talk/Twitter DM to point this out, and offer a couple of other points I had noted, so that he could add the information to his blog post if he found it useful. Instead he asked that I write a guest blog about it. I am honored to be a guest blogger, and to be able to share this kind of information with the community. The information contained in this blog post is a glimpse at how I do troubleshooting almost every day of the week in my own environment. SQL Server provides us with a lot of information about how it is running and where it may be having problems; it is up to us to play detective and find out how all that information comes together to tell us what's really the problem. This blog post is written by Jonathan Kehayias (Blog | Twitter). Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • Need help with PHP web app bootstrapping error potentially related to Zend [migrated]

    - by Matt Shepherd
    I am trying to get a program called OpenFISMA running on an Ubuntu AMI in AWS. The app is not really coded for the Ubuntu platform, but I am in my comfort zone there, and I have tried both CentOS and OpenSUSE (both are sort of "native" for the app) with the same or worse results. So, why not just get it working on Ubuntu? Anyway, the app is found here: www.openfisma.org and an install guide is found here: https://openfisma.atlassian.net/wiki/display/030100/Installation+Guide The install guide kind of sucks. It doesn't list dependencies in any coherent way or provide much detail (it does not even mention Zend once on the entire page), so I've done a lot of work to divine the information I do have. This page provided some dependency info (though again, Zend is not mentioned once): https://openfisma.atlassian.net/wiki/display/PUBLIC/RPM+Management#RPMManagement-BasicOverviewofRPMPackages
    Anyway, I got all the way through the install (so far as I could reconstruct it). I am going to the login page for the first time, and there should be some sort of bootstrapping occurring when I load the page. (I am not a programmer, so I have no idea what it is doing there.) I get a message on the web page that says: "An exception occurred while bootstrapping the application." So then I look in /var/www/data/logs/php.log and find this message:
    [22-Oct-2013 17:29:18 UTC] PHP Fatal error: Uncaught exception 'Zend_Exception' with message 'No entry is registered for key 'Zend_Log'' in /var/www/library/Zend/Registry.php:147
    Stack trace:
    #0 /var/www/public/index.php(188): Zend_Registry::get('Zend_Log')
    #1 {main}
      thrown in /var/www/library/Zend/Registry.php on line 147
    This occurs every time I load the page. I gather there is an issue related to registering the Zend_Log variable in the Zend registry, but other than that I really have no idea what to do about it. Am I missing a package that it needs, or is this app not coded to register the variables properly? I have no clue. Any help is greatly appreciated. The application file referenced in the log message (index.php) is included below.
    <?php
    /**
     * Copyright (c) 2008 Endeavor Systems, Inc.
     *
     * This file is part of OpenFISMA.
     *
     * OpenFISMA is free software: you can redistribute it and/or modify it under the terms of the GNU General Public
     * License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later
     * version.
     *
     * OpenFISMA is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied
     * warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
     * details.
     *
     * You should have received a copy of the GNU General Public License along with OpenFISMA. If not, see
     * {@link http://www.gnu.org/licenses/}.
     */
    try {
        defined('APPLICATION_PATH')
            || define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));

        // Define application environment
        defined('APPLICATION_ENV')
            || define('APPLICATION_ENV', (getenv('APPLICATION_ENV') ? getenv('APPLICATION_ENV') : 'production'));

        set_include_path(
            APPLICATION_PATH . '/../library/Symfony/Components'
            . PATH_SEPARATOR . APPLICATION_PATH . '/../library'
            . PATH_SEPARATOR . get_include_path()
        );

        require_once 'Fisma.php';
        require_once 'Zend/Application.php';

        $application = new Zend_Application(APPLICATION_ENV, APPLICATION_PATH . '/config/application.ini');
        Fisma::setAppConfig($application->getOptions());
        Fisma::initialize(Fisma::RUN_MODE_WEB_APP);
        $application->bootstrap()->run();
    } catch (Zend_Config_Exception $zce) {
        // A zend config exception indicates that the application may not be installed properly
        echo '<h1>The application is not installed correctly</h1>';
        $zceMsg = $zce->getMessage();
        if (stristr($zceMsg, 'parse_ini_file') !== false) {
            if (stristr($zceMsg, 'application.ini') !== false) {
                if (stristr($zceMsg, 'No such file or directory') !== false) {
                    echo 'The ' . APPLICATION_PATH . '/config/application.ini file is missing.';
                } elseif (stristr($zceMsg, 'Permission denied') !== false) {
                    echo 'The ' . APPLICATION_PATH . '/config/application.ini file does not have the '
                        . 'appropriate permissions set for the application to read it.';
                } else {
                    echo 'An ini-parsing error has occured in ' . APPLICATION_PATH . '/config/application.ini '
                        . '<br/>Please check this file and make sure everything is setup correctly.';
                }
            } else if (stristr($zceMsg, 'database.ini') !== false) {
                if (stristr($zceMsg, 'No such file or directory') !== false) {
                    echo 'The ' . APPLICATION_PATH . '/config/database.ini file is missing.<br/>';
                    echo 'If you find a database.ini.template file in the config directory, edit this file '
                        . 'appropriately and rename it to database.ini';
                } elseif (stristr($zceMsg, 'Permission denied') !== false) {
                    echo 'The ' . APPLICATION_PATH . '/config/database.ini file does not have the appropriate '
                        . 'permissions set for the application to read it.';
                } else {
                    echo 'An ini-parsing error has occured in ' . APPLICATION_PATH . '/config/database.ini '
                        . '<br/>Please check this file and make sure everything is setup correctly.';
                }
            } else {
                echo 'An ini-parsing error has occured. <br/>Please check all configuration files and make sure '
                    . 'everything is setup correctly';
            }
        } elseif (stristr($zceMsg, 'syntax error') !== false) {
            if (stristr($zceMsg, 'application.ini') !== false) {
                echo 'There is a syntax error in ' . APPLICATION_PATH . '/config/application.ini '
                    . '<br/>Please check this file and make sure everything is setup correctly.';
            } elseif (stristr($zceMsg, 'database.ini') !== false) {
                echo 'There is a syntax error in ' . APPLICATION_PATH . '/config/database.ini '
                    . '<br/>Please check this file and make sure everything is setup correctly.';
            } else {
                echo 'A syntax error has been reached. <br/>Please check all configuration files and make sure '
                    . 'everything is setup correctly.';
            }
        } else {
            // Then the exception message says nothing about parse_ini_file nor 'syntax error'
            echo 'Please check all configuration files, and ensure all settings are valid.';
        }
        echo '<br/>For more information and help on installing OpenFISMA, please refer to the '
            . '<a target="_blank" href="http://manual.openfisma.org/display/ADMIN/Installation">'
            . 'Installation Guide</a>';
    } catch (Doctrine_Manager_Exception $dme) {
        echo '<h1>An exception occurred while bootstrapping the application.</h1>';
        // Does database.ini have valid settings? Or is it the same content as database.ini.template?
        $databaseIniFail = false;
        $iniData = file(APPLICATION_PATH . '/config/database.ini');
        $iniData = str_replace(chr(10), '', $iniData);
        if (in_array('db.adapter = ##DB_ADAPTER##', $iniData)) { $databaseIniFail = true; }
        if (in_array('db.host = ##DB_HOST##', $iniData)) { $databaseIniFail = true; }
        if (in_array('db.port = ##DB_PORT##', $iniData)) { $databaseIniFail = true; }
        if (in_array('db.username = ##DB_USER##', $iniData)) { $databaseIniFail = true; }
        if (in_array('db.password = ##DB_PASS##', $iniData)) { $databaseIniFail = true; }
        if (in_array('db.schema = ##DB_NAME##', $iniData)) { $databaseIniFail = true; }
        if ($databaseIniFail) {
            echo 'You have not applied the settings in ' . APPLICATION_PATH . '/config/database.ini appropriately. '
                . 'Please review the contents of this file and try again.';
        } else {
            if (Fisma::debug()) {
                echo '<p>' . get_class($dme) . '</p><p>' . $dme->getMessage() . '</p><p>'
                    . "<p><pre>Stack Trace:\n" . $dme->getTraceAsString() . '</pre></p>';
            } else {
                $logString = get_class($dme) . "\n" . $dme->getMessage()
                    . "\nStack Trace:\n" . $dme->getTraceAsString() . "\n";
                Zend_Registry::get('Zend_Log')->err($logString);
            }
        }
    } catch (Exception $exception) {
        // If a bootstrap exception occurs, that indicates a serious problem, such as a syntax error.
        // We won't be able to do anything except display an error.
        echo '<h1>An exception occurred while bootstrapping the application.</h1>';
        if (Fisma::debug()) {
            echo '<p>' . get_class($exception) . '</p><p>' . $exception->getMessage() . '</p><p>'
                . "<p><pre>Stack Trace:\n" . $exception->getTraceAsString() . '</pre></p>';
        } else {
            $logString = get_class($exception) . "\n" . $exception->getMessage()
                . "\nStack Trace:\n" . $exception->getTraceAsString() . "\n";
            Zend_Registry::get('Zend_Log')->err($logString);
        }
    }

    Read the article

  • What do we call "non-programmers"? (Like "muggle" in Harry Potter) [closed]

    - by OscarRyz
    Sometimes I want to refer to people without coding powers as Muggles, but it doesn't quite feel right. Gamers have "n00b" (but then, a n00b still has some notion of gaming). I mean a word for all those for whom Windows is the only OS in the world ("what's an OS?" they would ask), for the project manager who can't distinguish between Excel and a database, for those who exclaim "Wooow!" when you show them the Ctrl/right-click trick to see a webpage's source code. What would be a good word to describe these people without coding ability?
    Background: I didn't mean to be disrespectful to ordinary people. It's just that sometimes it drives me nuts to see coworkers struggling to explain some concept to these "people". For instance, recently we were asked what an "ear" was (in Java). My coworker was struggling to explain what it was and how it differs from .war, .jar, etc., talking about EJBs, application servers, deployment and so on, and our "person"1 was like o_O. I realized a better way to explain it was "Think of it as an installer for the application, similar to install.exe", and he understood immediately. This is no one's fault; sometimes our "people" come from a different background, that's it. It is our responsibility to talk at a level they can understand; some coworkers don't get that and try very hard to explain programming concepts (like the source code in the browser). And I get the point, I don't need to be disrespectful. ... But I'm considering calling them PEBKACs.
    1As suggested

    Read the article

  • Random touchpad and keyboard freezes on new installation

    - by ancaleth
    My touchpad and keyboard freeze up on my newly installed Ubuntu 10.10. They remain frozen until I shut down manually. No keys work and the cursor doesn't move; it's like staring at a screenshot. I was using Ubuntu 10.04 via Wubi on this laptop before, where this problem never occurred. (I did not migrate the Wubi install or upgrade to 10.10; it's a fresh start. 64-bit on a Dell Studio, plenty of RAM, plenty of free space on the partition, etc.) I can't say there is a pattern yet: once it happened during the download of packages with the Update Manager, and once while just using Firefox, with no other program running. In between these crashes the laptop was booted once, updates were installed, Firefox was used, and there weren't any problems. Both crashes should be in the attached kern.log, and I noticed there were some errors before the last crash (at the end, obviously). It seems the wireless was experiencing problems. This wasn't noticed on the user end, since the touchpad and keyboard were already frozen.
    kern.log: http://paste.ubuntu.com/552617/
    How can the freezes be fixed?
    Edit: I will try Ctrl+Alt+F1 and then Ctrl+Alt+F7 when the next freeze occurs, to see if it works again afterwards, as suggested here. But the keyboard seemed pretty frozen to me.

    Read the article

  • Stream Media and Live TV Across the Internet with Orb

    - by DigitalGeekery
    Looking for a way to stream your media collection across the Internet? Or perhaps watch and record TV remotely? Today we are going to look at how to do all that and more with Orb.
    Requirements
    - Windows XP / Vista / 7, or an Intel-based Mac with OS X 10.5 or later
    - 1 GB RAM or more
    - Pentium 4 2.4 GHz or higher / AMD Athlon 3200+
    - Broadband connection
    - TV tuner for streaming and recording live TV (optional)
    Note: Slower internet connections may result in stuttering during playback.
    Installation and Setup
    Download and install Orb on your home computer (download link below). You'll want to take the defaults for the initial portion of the install. When we get to the Orb account setup portion of the install, we'll have to enter information and make some decisions. Choose your language and click Next. We'll need to create a user account and password. A valid email address is required, as we'll need to confirm the account later. Click Next.
    Now you'll want to choose your media sources. Orb will automatically look for folders that may contain media files. You can add or remove folders by clicking the (+) or (-) buttons. To remove a folder, click on it once to select it from the list and then click the minus (-) button. To add a folder, click the plus (+) button and browse for the folder. You can add local folders as well as shared folders from networked computers and USB-attached storage. Note: Both the host computer running Orb and the networked computer will need to be running to access shared network folders remotely. When you've selected all your media folders, click Next. Orb will proceed to index your media files. When the indexing is complete, click Next.
    Orb TV Setup
    Note: Streaming live TV to Macs is not currently supported. If you have a TV tuner card connected to your PC, you can opt to configure Orb to stream live or recorded TV. Click Next to configure TV, or choose Skip if you don't wish to configure Orb for TV.
    If you have a digital tuner card, type in your Zip Code and click Get List to pull your channel listings, then select a TV provider from the list and click Next. If not, click Skip. You can select or deselect any channels by checking or un-checking the box next to each channel. Select Auto Scan to let Orb find more channels or disable the ones with no reception. Click Next when finished.
    Next choose an analog provider, if necessary, and click Next. Select "Yes" or "No" for a set top box and click Next. Just as we did with the digital tuner, select or deselect channels by checking or un-checking the box next to each channel, use Auto Scan if needed, and click Next when finished. Now we're finished with the setup. Click Close.
    Accessing your Media Remotely
    Media files are accessed through a web-based interface. Before we go any further, however, we'll need to confirm our username and password. Check your inbox for an email from Orb Networks and click the enclosed confirmation link. You'll be prompted in your browser to enter the username and password you selected; then click Next. Your account will be confirmed. Now we're ready to enjoy our media remotely.
    To get started, point your browser to the MyCast website from your remote computer (see link below). Enter your credentials and click Log In. Once logged in, you'll be presented with the MyCast Home screen. By default you'll see a handful of "channels" such as a TV program guide, random audio and photos, video favorites, and weather.
    You can add, remove, or customize channels. To add additional channels, click on Add Channels at the top right and select from the dropdown list. To access your full media libraries, click Open Application at the top left and select from one of the options.
    Live and Recorded TV
    If you have a TV tuner card that you configured for Orb, you'll see your program guide on the TV / Webcams screen. To watch or record a show, click on the program listing to bring up a detail box. Then click the red button to record, or the green button to play. When recording a show, you'll see a pulsating red icon at the top right of the listing in the program guide. If you want to watch live TV, you may be prompted to choose your media player, depending on your browser and settings. Playback should begin shortly.
    Note for Windows Media Center users: if you try to stream live TV in Orb while Windows Media Center is running on your PC, you'll get an error message. Click the Stop MediaCenter button and then try again.
    Audio
    On the Audio screen, you'll find your music files indexed by genre, artist, and album. You can play a selection by clicking it once and then clicking the green play button, or by simply double-clicking. Playback will begin in the default media player for the streaming format.
    Video
    Video works essentially the same as audio. Click on a selection and press the green play button, or double-click on the video title. Video playback will begin in the default media player for the streaming format.
    Streaming Formats
    You can change the default streaming format in the Control Panel settings. To access the Control Panel, click on Open Applications and select Control Panel; you can also click Settings at the top right. Select General from the dropdown list and then click on the Streaming Formats tab. You are given four options: Flash, Windows Media, .SDP, and .PLS.
    Creating Playlists
    To create playlists, drag and drop your media titles to the playlist work area on the right, or click Add to playlist on the top menu. Click Save when finished.
    Sharing your Media
    Orb allows you to share media playlists across the Internet with friends and family. There are a few ways to accomplish this. We'll start by clicking the Share button at the bottom of the playlist work area after you've compiled your playlist. You'll be prompted to choose a method by which to share your playlist: publicly or privately. You can share publicly through links, blogs, or on your Orb public profile. By choosing the Public Profile option, Orb will automatically create a profile page for you, with a URL like http://public.orb.com/username, that anyone can easily access on the Internet. The private sharing option allows you to invite friends by email and requires recipients to register with Orb. You can also give your playlist a custom name, or accept the auto-generated title. Click OK when finished. Users who visit your public profile will be able to view and stream any of your shared playlists to their computer or supported device.
    Portable Media Devices and Smartphones
    Orb can stream media to many portable devices and 3G phones. Streaming audio is supported on the iPhone and iPod Touch through the Safari browser. However, video and live TV streaming require the Orb Live iPhone app, which is available in the App Store for $9.99. To stream media to your portable device, go to the MyCast website in your mobile browser, log in, and browse for your media or playlist. Make a selection and play the media; playback will begin. We found streaming music to both the Droid and the iPhone to work quite nicely. Video playback on the Droid, however, left a bit to be desired: the video looked good, but the audio tended to be out of sync.
    System Tray Control Panel
    By default Orb runs in the system tray on startup. To access the System Tray Control Panel, right-click on the Orb icon in the system tray and select Control Panel. Log in with your Orb username and password and click OK. From here you can add or remove media sources, manage accounts, change your password, and more. If you'd rather not run Orb on startup, click the General icon and unselect the checkbox next to Start Orb when the system starts.
    Conclusion
    It may seem like a lot of steps, but getting Orb up and running isn't terribly difficult. Orb is available for both Windows and Intel-based Macs. It also supports streaming to many game consoles such as the Wii, PS3, and Xbox 360. If you are running Windows 7 on multiple computers, you may want to check out our write-up on how to stream music and video over the Internet with Windows Media Player 12.
    Downloads
    Download Orb
    Logon to MyCast

    Read the article

  • SQL SERVER – Importing CSV File Into Database – SQL in Sixty Seconds #018 – Video

    - by pinaldave
    Importing data into a database is one of the most important tasks. I often receive questions about the quickest way to insert CSV data, or how to import CSV data into a SQL Server table. Honestly, the process is very simple and the script is even simpler. In today's SQL in Sixty Seconds video we will learn how quickly we can insert CSV data into SQL Server. The steps to import a CSV file are very simple:
    1. Create the table
    2. Use BULK INSERT to import the data
    3. Verify the data
    Done! Absolutely, it is that simple (a sketch of the script follows after this post). More on importing CSV data:
    - SQL SERVER – Import CSV File Into SQL Server Using Bulk Insert – Load Comma Delimited File Into SQL Server
    - SQL SERVER – Import CSV File into Database Table Using SSIS
    - SQL SERVER – Create a Comma Delimited List Using SELECT Clause From Table Column
    - SQL SERVER – Comma Separated Values (CSV) from Table Column
    - SQL SERVER – Comma Separated Values (CSV) from Table Column – Part 2
    I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can, and if we like your idea we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video
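    As a footnote to the post above, here is a minimal sketch of the three steps, using a hypothetical table and file path (dbo.CSVTest and C:\csvtest.txt are placeholders; adjust the column list and the field/row terminators to match your file):

    -- Step 1: create the table
    CREATE TABLE dbo.CSVTest
    (
        ID INT,
        FirstName VARCHAR(40),
        LastName VARCHAR(40),
        BirthDate SMALLDATETIME
    );
    GO

    -- Step 2: import the comma-delimited file
    BULK INSERT dbo.CSVTest
    FROM 'C:\csvtest.txt'
    WITH
    (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR = '\n'
    );
    GO

    -- Step 3: verify the data
    SELECT *
    FROM dbo.CSVTest;
    GO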

    Read the article

  • How to study for 70-573 Microsoft SharePoint 2010, Application Development

    - by ybbest
    I just passed my 70-573 exam today and would like to share my experience of learning SharePoint 2010 as a beginner.
    1. Book: Microsoft SharePoint 2010: Building Solutions for SharePoint 2010 by Sahil Malik http://apress.com/book/view/1430228652 Sahil is an expert and MVP in SharePoint 2010. He certainly knows his field and the book is well written. More importantly, Sahil has a very good sense of humor in delivering the knowledge.
    2. A development machine. It is of great importance to have a dev machine; you cannot learn a new technology just by reading a book or watching some training videos. You need to get your hands dirty with SharePoint a lot.
    3. Training videos. Since I have a one-year subscription with LearnDevNow, I use it as my learning resource. It is quite cheap, costing only US $99 for a year's subscription, and you get not only the SharePoint training but the whole training library. The videos are from AppDev, and AppDev training is of high quality. http://www.appdev.com/ http://www.learndevnow.com/ You can also get videos from the Microsoft SharePoint site. They are pretty good too. But bear in mind that by just watching these videos you will not learn much; you need to build a SharePoint 2010 machine and play with it. Try to write the sample code yourself and not just copy and paste.
    4. Write blogs about your learning. This will motivate you in your long journey with SharePoint learning.
    5. Do check out the patterns & practices SharePoint Guidance on CodePlex. http://spg.codeplex.com/
    6. Thanks to Becky Bertram, who kindly put up all the exam requirements with links to MSDN: http://blog.beckybertram.com/Lists/Exam%2070573%20Study%20Guide/AllItems.aspx

    Read the article

  • error 503: service unavailable when using apt-get update behind proxy

    - by ubuntu2man
    Hi, I am using a transparent proxy (on another box). When I try to do an 'apt-get update' I get these warnings (translated from German):
    ...
    W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/maverick-security/restricted/source/Sources.gz 503 Service Unavailable
    W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/maverick-security/universe/source/Sources.gz 503 Service Unavailable
    W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/maverick-security/multiverse/source/Sources.gz 503 Service Unavailable
    W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/maverick-security/main/binary-i386/Packages.gz 503 Service Unavailable
    W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/maverick-security/restricted/binary-i386/Packages.gz 503 Service Unavailable
    W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/maverick-security/universe/binary-i386/Packages.gz 503 Service Unavailable
    E: Some index files failed to download. They have been ignored, or old ones used instead.
    I changed ~/.bashrc:
    http_proxy=http://192.168.120.199:8080
    https_proxy=https://192.168.120:8080
    export http_proxy
    export https_proxy
    I wrote on the command line:
    export http_proxy=http://proxyusername:proxypassword@proxyaddress:proxyport
    sudo apt-get update
    And I edited /etc/apt/apt.conf:
    Acquire::http::proxy "http://192.168.120.199:8080/";
    Acquire::ftp::proxy "http://192.168.120.199:8080/";
    Nothing has worked. Does anyone know how to make apt-get work through a transparent proxy? Regards, ubuntu2man

    Read the article

  • Visual Studio 2010 Hosting :: Connect to MySQL Database from Visual Studio VS2010

    - by mbridge
    So, in order to connect to a MySQL database from VS2010 you need to:
    1. Download the latest version of the MySQL Connector/NET from http://www.mysql.com/downloads/connector/net/
    2. Install the connector (if you have an older version you need to remove it from Control Panel -> Add / Remove Programs)
    3. Open Visual Studio 2010
    4. Open the Server Explorer window (View -> Server Explorer)
    5. Use the Connect to Database button
    6. In the Choose Data Source window, select MySQL Database and press Continue
    7. In the Add Connection window:
    - set the server name: 127.0.0.1 or localhost for a MySQL server running on the local machine, or an IP address for a remote server
    - set the username and password
    - if the above data is correct and the connection can be made, you will be able to select the database
    If you want to connect to a MySQL database from a C# application (Windows or Web) you can use the following sequence:
    //define the connection reference and initialize it
    MySql.Data.MySqlClient.MySqlConnection msqlConnection = null;
    msqlConnection = new MySql.Data.MySqlClient.MySqlConnection("server=localhost;user id=UserName;Password=UserPassword;database=DatabaseName;persist security info=False");
    //define the command reference
    MySql.Data.MySqlClient.MySqlCommand msqlCommand = new MySql.Data.MySqlClient.MySqlCommand();
    //define the connection used by the command object
    msqlCommand.Connection = msqlConnection;
    //define the command text
    msqlCommand.CommandText = "SELECT * FROM TestTable;";
    try
    {
        //open the connection
        msqlConnection.Open();
        //use a DataReader to process each record
        MySql.Data.MySqlClient.MySqlDataReader msqlReader = msqlCommand.ExecuteReader();
        while (msqlReader.Read())
        {
            //do something with each record
        }
    }
    catch (Exception er)
    {
        //do something with the exception
    }
    finally
    {
        //always close the connection
        msqlConnection.Close();
    }

    Read the article

  • How bad is it to use display: none in CSS?

    - by Andy
    I've heard many times that it's bad to use display: none for SEO reasons, as it could be seen as an attempt to push in irrelevant popular keywords. A few questions: Is that still received wisdom? Does it make a difference if you're only hiding a single word, or perhaps a single character? If you should avoid any use of it, what are the preferred techniques for hiding (in situations where you need the content to become visible again under certain conditions)? Some references I've found so far:
    Matt Cutts, from 2005, in a comment: "If you're straight-out using CSS to hide text, don't be surprised if that is called spam. I'm not saying that mouseovers or DHTML text or have-a-logo-but-also-have-text is spam; I answered that last one at a conference when I said 'imagine how it would look to a visitor, a competitor, or someone checking out a spam report. If you show your company's name and it's Expo Markers instead of an Expo Markers logo, you should be fine. If the text you decide to show is "Expo Markers cheap online discount buy online Expo Markers sale ..." then I would be more cautious, because that can look bad.'"
    And in another comment on the same article: "We can flag text that appears to be hidden using CSS at Google. To date we have not algorithmically removed sites for doing that. We try hard to avoid throwing babies out with bathwater." (My emphasis)
    Eric Enge said in 2008: "The legitimate use of this technique is so prevalent that I would rarely expect search engines to penalize a site for using the display: none attribute. It's just very difficult to implement an algorithm that could truly ferret out whether the particular use of display: none is meant to deceive the search engines or not."
    Thanks in advance, Andy

    Read the article

  • We Don’t Need No Regions

    - by João Angelo
    If your code reaches a level where you want to hide it behind regions, then you have a problem that regions won't solve. Regions are good for hiding things that you don't want to have knowledge about, such as auto-generated code. Normally, when you're developing, you end up reading more code than you write, so why would you want to complicate the reading process? I, for one, would love to have that one discussion around regions where someone convinces me that they solve a problem that has no alternative solution, but I'm still waiting. The most frequent argument I hear about regions is that they allow you to structure your code, but why not just structure it using classes, methods, and all that other stuff that OOP is about? At the end of the day, you should be doing object-oriented programming, not region-oriented programming. Having said that, I do believe that it is sometimes helpful to have a quick overview of a code file's contents, and Visual Studio allows you to do just that through the Collapse to Definitions command (Ctrl+M, Ctrl+O), which collapses the members of all types. If you like regions, you should try this; it is much more useful to read all the members of a type than all the regions inside a type.

    Read the article

  • Handling EJB/JPA exceptions in a “beautiful” way

    - by Rodrigues, Raphael
    In order to handle JPA exceptions, there are several approaches, already detailed in lots of blogs. Here I intend to show one of them, which I consider rather "beautiful". My use case has a unique constraint that is violated when the user tries to create a duplicate value in the database. JPA throws a java.sql.SQLIntegrityConstraintViolationException, and I have to catch it and replace the message. In fact, the ADF Business Components framework already has a beautiful solution for this, very well documented here. However, for EJB/JPA there's no similar approach. In my case, what I had to do was:
    1. Create a custom error handler class in the DataBindings file (here is how you accomplish it).
    2. Override the reportException method and check whether the type of exception exists in the property file:
    a. If yes, change the message and rethrow the exception.
    b. If not, let execution continue.
    The main goal of this approach is that whenever a new or unhandled exception is raised, the only work required is to create a single entry in the bundle property file. Here are the steps, picture by picture:
    1. CustomExceptionHandler.java
    2. DataBindings.cpx
    3. Bundle file
    4. jspx
    5. Stack trace
    Give your opinion: what do you think about this approach?

    Read the article

  • So I ran sudo apt-get install kubuntu-full on my Ubuntu... and saw all the apps... now I want it off. Help?

    - by Alex Poulos
    I'm running 12.04. I installed Kubuntu to try it out and realized, with all the bloatware applications, that I didn't want it anymore. I was able to uninstall kubuntu-desktop, but there are still packages left over. How can I make sure I get rid of EVERYTHING Kubuntu installed, even the KDE leftovers? Here's some of what's left: when I ran sudo apt-get autoremove kde and then pressed "tab", it displayed this:
    kdeaccessibility kdepim-runtime kdeadmin kde-runtime kde-baseapps kde-runtime-data kde-baseapps-bin kdesdk-dolphin-plugins kde-baseapps-data kde-style-oxygen kde-config-cron kdesudo kde-config-gtk kdeutils kde-config-touchpad kde-wallpapers kdegames-card-data kde-wallpapers-default kdegames-card-data-extra kde-window-manager kde-icons-mono kde-window-manager-common kdelibs5-data kde-workspace kdelibs5-plugins kde-workspace-bin kdelibs-bin kde-workspace-data kdemultimedia-kio-plugins kde-workspace-data-extras kdenetwork kde-workspace-kgreet-plugins kdenetwork-filesharing kde-zeroconf kdepasswd kdf kdepim-kresources kdm kdepimlibs-kio-plugins kdoctools
    Those are all installed by Kubuntu... correct? I just want to go back to my Ubuntu 12.04 LTS with GNOME 2 classic and without all the Kubuntu extras. I started off by just removing unnecessary apps that came with kubuntu-full, then realized I didn't want the whole thing at all and uninstalled kubuntu-full, but it still says I have these as well:
    alex@griever:~$ sudo apt-get --purge remove kubuntu-
    kubuntu-debug-installer kubuntu-netbook-default-settings kubuntu-default-settings kubuntu-notification-helper kubuntu-firefox-installer kubuntu-web-shortcuts

    Read the article

< Previous Page | 598 599 600 601 602 603 604 605 606 607 608 609  | Next Page >