Search Results

Search found 12089 results on 484 pages for 'rule of three'.


  • Copy constructor bug

    - by user168715
    I'm writing a simple nD-vector class, but am encountering a strange bug. I've stripped out the class to the bare minimum that still reproduces the bug: #include <iostream> using namespace std; template<unsigned int size> class nvector { public: nvector() {data_ = new double[size];} ~nvector() {delete[] data_;} template<unsigned int size2> nvector(const nvector<size2> &other) { data_ = new double[size]; int i=0; for(; i<size && i < size2; i++) data_[i] = other[i]; for(; i<size; i++) data_[i] = 0; } double &operator[](int i) {return data_[i];} const double&operator[](int i) const {return data_[i];} private: const nvector<size> &operator=(const nvector<size> &other); //Intentionally unimplemented for now double *data_; }; int main() { nvector<2> vector2d; vector2d[0] = 1; vector2d[1] = 2; nvector<3> vector3d(vector2d); for(int i=0; i<3; i++) cout << vector3d[i] << " "; cout << endl; //Prints 1 2 0 nvector<3> other3d(vector3d); for(int i=0; i<3; i++) cout << other3d[i] << " "; cout << endl; //Prints 1 2 0 } //Segfault??? On the surface this seems to work fine, and both tests print out the correct values. However, at the end of main the program crashes with a segfault, which I've traced to nvector's destructor. At first I thought the (incorrect) default assignment operator was somehow being called, which is why I added the (currently) unimplemented explicit assignment operator to rule this possibility out. So my copy constructor must be buggy, but I'm having one of those days where I'm staring at extremely simple code and just can't see it. Do you guys have any ideas?

    Read the article

  • How do I write object classes effectively when dealing with table joins?

    - by Chris
    I should start by saying I'm not now, nor do I have any delusions I'll ever be, a professional programmer, so most of my skills have been learned from experience very much as a hobby. I learned PHP as it seemed a good simple introduction in certain areas and it allowed me to design simple web applications. When I learned about objects, classes etc the tutor's basic examples covered the idea that as a rule of thumb each database table should have its own class. While that worked well for the photo gallery project we wrote, as it had very simple mysql queries, it's not working so well now that my projects are getting more complex. If I require data from two separate tables which require a table join I've instead been ignoring the class altogether and handling it on a case by case basis, OR, even worse, been combining some of the data into the class and the rest as a separate entity and doing two queries, which to me seems inefficient. As an example, when viewing content on a forum I wrote, if you view a thread, I retrieve data from the threads table, the posts table and the user table. The queries from the user and posts table are retrieved via a join and not instantiated as an object, whereas the thread data is called using my Threads class. So how do I get from my current state of affairs to something a little less 'stupid', for want of a better word? Right now I have a DB class that deals with connection and escaping values etc, a parent db query class that deals with the common queries and methods, and all of the other classes (Thread, Upload, Session, Photo and ones that aren't used yet: Post, User etc.) are children of that. Do I make a big posts class that has the relevant extra attributes that I retrieve from the users (and potentially threads) table? Do I have separate classes that populate each of their relevant attributes with a single query? If so how do I do that? Because of the way my classes are written, based on what I was taught, my db update row method, or insert method, both just take the attributes as an array and update all of that; if I have extra attributes from other db tables in each class then how do I rewrite those methods, as obviously updating automatically like that would result in errors? In short I think my understanding is limited right now and I'd like some pointers when it comes to the fundamentals of how to write more complex classes.

    Read the article

  • Use multiple inheritance to discriminate useage roles?

    - by Arne
    Hi fellows, it's my flight simulation application again. I am leaving the mere prototyping phase now and starting to flesh out the software design. At least I try.. Each of the aircraft in the simulation has a flight plan associated with it, the exact nature of which is of no interest for this question. Suffice it to say that the operator may edit the flight plan while the simulation is running. The aircraft model most of the time only needs to read-access the flight plan object, which at first thought calls for simply passing a const reference. But occasionally the aircraft will need to call AdvanceActiveWayPoint() to indicate a way point has been reached. This will affect the Iterator returned by function ActiveWayPoint(). This implies that the aircraft model indeed needs a non-const reference which in turn would also expose functions like AppendWayPoint() to the aircraft model. I would like to avoid this because I would like to enforce the usage rule described above at compile time. Note that class WayPointIter is equivalent to an STL const iterator; that is, the way point cannot be mutated by the iterator. class FlightPlan { public: void AppendWayPoint(const WayPointIter& at, WayPoint new_wp); void ReplaceWayPoint(const WayPointIter& ar, WayPoint new_wp); void RemoveWayPoint(WayPointIter at); (...) WayPointIter First() const; WayPointIter Last() const; WayPointIter Active() const; void AdvanceActiveWayPoint() const; (...) }; My idea to overcome the issue is this: define an abstract interface class for each usage role and inherit FlightPlan from both. Each user then only gets passed a reference of the appropriate usage role. class IFlightPlanActiveWayPoint { public: WayPointIter Active() const =0; void AdvanceActiveWayPoint() const =0; }; class IFlightPlanEditable { public: void AppendWayPoint(const WayPointIter& at, WayPoint new_wp); void ReplaceWayPoint(const WayPointIter& ar, WayPoint new_wp); void RemoveWayPoint(WayPointIter at); (...) }; Thus the declaration of FlightPlan would only need to be changed to: class FlightPlan : public IFlightPlanActiveWayPoint, IFlightPlanEditable { (...) }; What do you think? Are there any caveats I might be missing? Is this design clear or should I come up with something different for the sake of clarity? Alternatively I could also define a special ActiveWayPoint class which would contain the function AdvanceActiveWayPoint(), but I feel that this might be unnecessary. Thanks in advance!

    Read the article

  • Technical non-terminating condition in a loop

    - by Snarfblam
    Most of us know that a loop should not have a non-terminating condition. For example, this C# loop has a non-terminating condition: any even value of i. This is an obvious logic error. void CountByTwosStartingAt(byte i) { // If i is even, it never exceeds 254 for(; i < 255; i += 2) { Console.WriteLine(i); } } Sometimes there are edge cases that are extremely unlikely, but technically constitute non-exiting conditions (stack overflows and out-of-memory errors aside). Suppose you have a function that counts the number of sequential zeros in a stream: int CountZeros(Stream s) { int total = 0; while(s.ReadByte() == 0) total++; return total; } Now, suppose you feed it this thing: class InfiniteEmptyStream:Stream { // ... Other members ... public override int Read(byte[] buffer, int offset, int count) { Array.Clear(buffer, offset, count); // Output zeros return count; // Never returns -1 (end of stream) } } Or more realistically, maybe a stream that returns data from external hardware, which in certain cases might return lots of zeros (such as a game controller sitting on your desk). Either way we have an infinite loop. This particular non-terminating condition stands out, but sometimes they don't. A completely real-world example is in an app I'm writing. An endless stream of zeros will be deserialized into infinite "empty" objects (until the collection class or GC throws an exception because I've exceeded two billion items). But this would be a completely unexpected circumstance (considering my data source). How important is it to have absolutely no non-terminating conditions? How much does this affect "robustness?" Does it matter if they are only "theoretically" non-terminating (is it okay if an exception represents an implicit terminating condition)? Does it matter whether the app is commercial? If it is publicly distributed? Does it matter if the problematic code is in no way accessible through a public interface/API? Edit: One of the primary concerns I have is unforeseen logic errors that can create the non-terminating condition. If, as a rule, you ensure there are no non-terminating conditions, you can identify or handle these logic errors more gracefully, but is it worth it? And when? This is a concern orthogonal to trust.
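
    One defensive pattern the question hints at is to pair the data-driven loop condition with an explicit upper bound, so a misbehaving stream (like InfiniteEmptyStream) can only produce a diagnosable failure instead of a silent hang. A minimal C# sketch of that idea; the cap value and the choice of exception are illustrative assumptions, not from the original post:

        using System;
        using System.IO;

        static class ZeroCounter
        {
            // Counts leading zero bytes, but refuses to loop forever: after
            // maxZeros reads we give up and signal the caller explicitly.
            public static int CountZeros(Stream s, int maxZeros = 1000000)
            {
                int total = 0;
                while (s.ReadByte() == 0)
                {
                    total++;
                    if (total >= maxZeros)
                        throw new InvalidOperationException(
                            "Too many consecutive zeros; input looks non-terminating.");
                }
                return total;
            }
        }

    The cap turns the "theoretically non-terminating" case into an implicit terminating condition the caller can observe and handle.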

    Read the article

  • Is there some way to make variables like $a and $b in regard to strict?

    - by Axeman
    In light of Michael Carman's comment, I have decided to rewrite the question. Note that 11 comments appear before this edit, and give credence to Michael's observation that I did not write the question in a way that made it clear what I was asking. Question: What is the standard--or cleanest way--to fake the special status that $a and $b have in regard to strict by simply importing a module? First of all some setup. The following works: #!/bin/perl use strict; print "\$a=$a\n"; print "\$b=$b\n"; If I add one more line: print "\$c=$c\n"; I get an error at compile time, which means that none of my dazzling print code gets to run. If I comment out use strict; it runs fine. Outside of strictures, $a and $b are mainly special in that sort passes the two values to be compared with those names. my @reverse_order = sort { $b <=> $a } @unsorted; Thus the main functional difference about $a and $b--even though Perl "knows their names"--is that you'd better know this when you sort, or use some of the functions in List::Util. It's only when you use strict that $a and $b become special variables in a whole new way. They are the only variables that strict will pass over without complaining that they are not declared. Now, I like strict, but it strikes me that if TIMTOWTDI (There is more than one way to do it) is Rule #1 in Perl, this is not very TIMTOWTDI. It says that $a and $b are special and that's it. If you want to use variables you don't have to declare, $a and $b are your guys. If you want to have three variables by adding $c, suddenly there's a whole other way to do it. Never mind that in manipulating hashes $k and $v might make more sense: my %starts_upper_1_to_25 = skim { $k =~ m/^\p{IsUpper}/ && ( 1 <= $v && $v <= 25 ) } %my_hash ; Now, I use and I like strict. But I just want $k and $v to be visible to skim for the most compact syntax. And I'd like it to be visible simply by use Hash::Helper qw<skim>; I'm not asking this question to know how to black-magic it. My "answer" below should let you know that I know enough Perl to be dangerous. I'm asking if there is a way to make strict accept other variables, or what is the cleanest solution. The answer could well be no. If that's the case, it simply does not seem very TIMTOWTDI.

    Read the article

  • IE8 ignores absolute positioning and margin:auto

    - by tuff
    I have a lightbox-style div with scrolling content that I am trying to restrict to a reasonable size within the viewport. I also want this div to be horizontally centered. This is all easy in Fx/Chrome/IE9. My problem is that IE8 ignores the absolute positioning which I use to size the content, and the rule margin: 0 auto which I use to horizontally center the lightbox. 1) Why? 2) What are my options for workarounds? EDIT: The centering issue is fixed by setting text-align:center on the parent element, but I have no idea why that works since the element I want to center is not inline. Still stuck on the absolute positioning stuff. HTML: <div class="bg"> <div class="a"> <div class="aa">titlebar</div> <div class="b"> <!-- many lines of content here --> </div> </div> </div> CSS: body { overflow: hidden; height: 100%; margin: 0; padding: 0; } /* IE8 needs ruleset above */ .bg { background: #333; position: fixed; top: 0; right: 0; bottom: 0; left: 0; height: 100%; /* needed in IE8 or the bg will only be as tall as the lightbox */ } .a { background: #eee; border: 3px solid #000; height: 80%; max-height: 800px; min-height: 200px; margin: 0 auto; position: relative; width: 80%; min-width: 200px; max-width: 800px; } .aa { background: lightblue; height: 28px; line-height: 28px; text-align: center; } .b { background: coral; overflow: auto; padding: 20px; position: absolute; top: 30px; right: 0; bottom: 0; left: 0; } Here's a demo of the problem: http://jsbin.com/urikoj/1/edit

    Read the article

  • Creating array from two arrays

    - by binoculars
    I'm having trouble trying to create a certain array. Basically, I have an array like this: [0] => Array ( [id] => 12341241 [type] => "Blue" ) [1] => Array ( [id] => 52454235 [type] => "Blue" ) [2] => Array ( [id] => 848437437 [type] => "Blue" ) [3] => Array ( [id] => 387372723 [type] => "Blue" ) [4] => Array ( [id] => 73732623 [type] => "Blue" ) ... Next, I have an array like this: [0] => Array ( [id] => 34141 [type] => "Red" ) [1] => Array ( [id] => 253532 [type] => "Red" ) [2] => Array ( [id] => 94274 [type] => "Red" ) I want to construct an array, which is a combination of the two above, using this rule: after 3 Blues, there must be a Red: Blue1 Blue2 Blue3 Red1 Blue4 Blue5 Blue6 Red2 Blue7 Blue8 Blue9 Red3 Note that there can be more Red's than Blue's, but also more Blue's than Red's. If the Red's run out, it should begin with the first one again. Example: let's say there are only two Red's: Blue1 Blue2 Blue3 Red1 Blue4 Blue5 Blue6 Red2 Blue7 Blue8 Blue9 Red1 ... ... If the Blue's run out, the Red's should append until they run out too. Example: let's say there are 5 Blue's, and 5 Red's: Blue1 Blue2 Blue3 Red1 Blue4 Blue5 Red2 Red3 Red4 Red5 Note: the arrays come from mysql-fetches. Maybe it's better to fetch them while building the new array? Anyway, the while-loops got to me, I can't figure it out... Any help is much appreciated!

    Read the article

  • Is it thread safe to read a form control's value (but not change it) without using Invoke/BeginInvoke from another thread

    - by goku_da_master
    I know you can read a gui control from a worker thread without using Invoke/BeginInvoke because my app is doing it now. The cross thread exception error is not being thrown and my System.Timers.Timer thread is able to read gui control values just fine (unlike this guy: can a worker thread read a control in the GUI?) Question 1: Given the cardinal rule of threads, should I be using Invoke/BeginInvoke to read form control values? And does this make it more thread-safe? The background to this question stems from a problem my app is having. It seems to randomly corrupt form controls another thread is referencing. (see question 2) Question 2: I have a second thread that needs to update form control values so I Invoke/BeginInvoke to update those values. Well this same thread needs a reference to those controls so it can update them. It holds a list of these controls (say DataGridViewRow objects). Sometimes (not always), the DataGridViewRow reference gets "corrupt". What I mean by corrupt, is the reference is still valid, but some of the DataGridViewRow properties are null (ex: row.Cells). Is this caused by question 1 or can you give me any tips on why this might be happening? Here's some code (the last line has the problem): public partial class MyForm : Form { void Timer_Elapsed(object sender) { // we're on a new thread (this function gets called every few seconds) UpdateUiHelper updateUiHelper = new UpdateUiHelper(this); foreach (DataGridViewRow row in dataGridView1.Rows) { object[] values = GetValuesFromDb(); updateUiHelper.UpdateRowValues(row, values[0]); } // .. do other work here updateUiHelper.UpdateUi(); } } public class UpdateUiHelper { private readonly Form _form; private Dictionary<DataGridViewRow, object> _rows; private delegate void RowDelegate(DataGridViewRow row); private readonly object _lockObject = new object(); public UpdateUiHelper(Form form) { _form = form; _rows = new Dictionary<DataGridViewRow, object>(); } public void UpdateRowValues(DataGridViewRow row, object value) { if (_rows.ContainsKey(row)) _rows[row] = value; else { lock (_lockObject) { _rows.Add(row, value); } } } public void UpdateUi() { foreach (DataGridViewRow row in _rows.Keys) { SetRowValueThreadSafe(row); } } private void SetRowValueThreadSafe(DataGridViewRow row) { if (_form.InvokeRequired) { _form.Invoke(new RowDelegate(SetRowValueThreadSafe), new object[] { row }); return; } // now we're on the UI thread object newValue = _rows[row]; row.Cells[0].Value = newValue; // randomly errors here with NullReferenceException, but row is never null! }
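
    One way to sidestep both questions is to never hand raw control references to the worker: snapshot the values you need on the UI thread (via Invoke) and pass plain data across. A rough C# sketch of that idea, reusing the MyForm and dataGridView1 names from the question; the helper method name is made up for illustration:

        using System;
        using System.Collections.Generic;
        using System.Windows.Forms;

        public partial class MyForm : Form
        {
            // Snapshot the grid values on the UI thread so the timer thread
            // never touches DataGridViewRow objects directly.
            private List<string> SnapshotFirstColumn()
            {
                if (InvokeRequired)
                    return (List<string>)Invoke(new Func<List<string>>(SnapshotFirstColumn));

                var values = new List<string>();
                foreach (DataGridViewRow row in dataGridView1.Rows)
                {
                    object cell = row.Cells[0].Value;
                    values.Add(cell == null ? string.Empty : cell.ToString());
                }
                return values;
            }

            private void Timer_Elapsed(object sender)
            {
                // Marshalled to the UI thread inside SnapshotFirstColumn.
                List<string> snapshot = SnapshotFirstColumn();
                // ...work with the plain strings here, off the UI thread...
            }
        }

    Because the worker only ever sees copies, there is no window in which the UI thread can mutate or dispose the rows it is reading.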

    Read the article

  • Pass variables between separate instances of ruby (without writing to a text file or database)

    - by boulder_ruby
    Let's say I'm running a long worker-script in one of several open interactive rails consoles. The script is updating columns in a very, very, very large table of records. I've muted the ActiveRecord logger to speed up the process, and instruct the script to output some record of progress so I know roughly how long the process is going to take. That is what I am currently doing and it would look something like this: ModelName.all.each_with_index do |r, i| puts i if i % 250 ...runs some process... r.save end Sometimes it's two nested arrays running, such that there would be multiple iterators and other things running all at once. Is there a way that I could do something like this and access that variable from a separate rails console? (such that the variable would be overwritten every time the process is run without much slowdown) records = ModelName.all $total = records.count records.each_with_index do |r, i| $i = i ...runs some process... r.save end meanwhile mid-process in other console puts "#{($i/$total * 100).round(2)}% complete" #=> 67.43% complete I know passing global variables from one separate instance of ruby to the next doesn't work. I also just tried this to no effect as well unix console 1 $X=5 echo {$X} #=> 5 unix console 2 echo {$X} #=> "" Lastly, I also know using global variables like this is a major software design pattern no-no. I think that's reasonable, but I'd still like to know how to break that rule if I'd like. Writing to a text file obviously would work. So would writing to a separate database table or something. That's not a bad idea. But the really cool trick would be sharing a variable between two instances without writing to a text file or database column. What would this be called anyway? Tunneling? I don't quite know how to tag this question. Maybe bad-idea is one of them. But honestly design-patterns isn't what this question is about.

    Read the article

  • Encapsulating a Windows.Forms.Button

    - by devoured elysium
    I want to define a special kind of button that only allows two possible labels: "ON" and "OFF". I decided to inherit from a Windows.Forms.Button to implement this but now I don't know how I should enforce this rule. Should I just override the Text property like this? public override string Text { set { throw new InvalidOperationException("Invalid operation on StartStopButton!"); } } The problem I see with this is that I am breaking the contract that all buttons should have. If any code tries something like foreach (Button button in myForm) { button.Text = "123"; } they will get an Exception if I have any of my special buttons on the form, which is something that isn't expected. First, because people think of properties just as "public" variables, not methods, second, because they are used to using and setting whatever they want to buttons without having to worry about Exceptions. Should I instead just make the set property do nothing? That could also lead to awkward results: myButton.Text = "abc"; MessageBox.Show(myButton.Text); //not "abc"! The general idea from the OO world is to use Composition instead of inheritance in this kind of case. public class MySpecialButton : <Some class from System.Windows.Forms that already knows how to draw itself on forms> private Button button = new Button(); //I'd just draw this button on this class //and I'd then only show the fields I consider //relevant to the outside world. ... } But to make the Button "live" on a form it must inherit from some special class. I've looked at Control, but it seems to already have the Text property defined. I guess the ideal situation would be to inherit from some kind of class that wouldn't even have the Text property defined, but that'd have position, size, etc properties available. Higher up in the hierarchy, after Control, we have Component, but that looks like a really raw class. Any clue about how to achieve this? I know this was a long post :( Thanks
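
    If composition is the goal, one pragmatic host is UserControl: it already knows how to live on a form, and the inner Button stays private so its Text can only ever be "ON" or "OFF". A rough C# sketch of that idea; the property and class names are illustrative, not from the original post:

        using System;
        using System.Windows.Forms;

        // Hosts a private Button; callers can only toggle state, never set arbitrary text.
        public class StartStopButton : UserControl
        {
            private readonly Button button = new Button();
            private bool isOn;

            public StartStopButton()
            {
                button.Dock = DockStyle.Fill;
                button.Click += delegate { IsOn = !isOn; };
                Controls.Add(button);
                IsOn = false;
            }

            public bool IsOn
            {
                get { return isOn; }
                set
                {
                    isOn = value;
                    button.Text = isOn ? "ON" : "OFF"; // the only two labels allowed
                }
            }
        }

    Because the outer control never exposes a writable Text for the inner button, the "contract" problem of throwing from a property setter never comes up.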

    Read the article

  • HttpWebRequest and Ignoring SSL Certificate Errors

    - by Rick Strahl
    Man I can't believe this. I'm still mucking around with OFX servers and it drives me absolutely crazy how some of these servers are just so unbelievably misconfigured. I've recently hit three different major brokerages which fail HTTP validation with bad or corrupt certificates, at least according to the .NET WebRequest class. What's somewhat odd here though is that WinInet seems to find no issue with these servers - it's only .NET's Http client that's ultra finicky. So the question then becomes how do you tell HttpWebRequest to ignore certificate errors? In WinInet there used to be a host of flags to do this, but it's not quite so easy with WebRequest. Basically you need to configure the CertificatePolicy on the ServicePointManager by creating a custom policy. Not exactly trivial. Here's the code to hook it up: public bool CreateWebRequestObject(string Url) {    try     {        this.WebRequest =  (HttpWebRequest) System.Net.WebRequest.Create(Url);         if (this.IgnoreCertificateErrors)            ServicePointManager.CertificatePolicy = delegate { return true; }; } One thing to watch out for is that this is an application-global setting. There's one global ServicePointManager and once you set this value any subsequent requests will inherit this policy as well, which may or may not be what you want. So it's probably a good idea to set the policy when the app starts and leave it be - otherwise you may run into odd behavior in some situations, especially in multi-thread situations. Another way to deal with this is in your application's .config file: <configuration>   <system.net>     <settings>       <servicePointManager           checkCertificateName="false"           checkCertificateRevocationList="false"                />     </settings>   </system.net> </configuration> This seems to work most of the time, although I've seen some situations where it doesn't, but where the code implementation works, which is frustrating. The .config settings aren't as inclusive as the programmatic code that can ignore any and all cert errors - shrug. Anyway, the code approach got me past the stopper issue. It still amazes me that these OFX servers even require this. After all, this is financial data we're talking about here. The last thing I want to do is disable extra checks on the certificates. Well I guess I shouldn't be surprised - these are the same companies that apparently don't believe in XML enough to generate valid XML (or even valid SGML for that matter)... © Rick Strahl, West Wind Technologies, 2005-2011. Posted in .NET  CSharp  HTTP
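
    For reference, the delegate-based hook that accepts an anonymous handler like the one above is ServicePointManager.ServerCertificateValidationCallback, and its sender argument lets you scope the bypass to a particular host instead of waving through every certificate in the process. A minimal C# sketch of that variation; the host-name check is an illustrative assumption, not part of the original post:

        using System;
        using System.Net;
        using System.Net.Security;
        using System.Security.Cryptography.X509Certificates;

        static class CertificatePolicyHelper
        {
            // Accept invalid certificates only for the named host; everything
            // else still goes through normal validation.
            public static void IgnoreCertificateErrorsFor(string hostName)
            {
                ServicePointManager.ServerCertificateValidationCallback =
                    delegate(object sender, X509Certificate cert, X509Chain chain, SslPolicyErrors errors)
                    {
                        if (errors == SslPolicyErrors.None)
                            return true; // valid certificates always pass

                        // For HttpWebRequest calls the sender is the request itself.
                        var request = sender as HttpWebRequest;
                        return request != null &&
                               request.RequestUri.Host.Equals(hostName, StringComparison.OrdinalIgnoreCase);
                    };
            }
        }

    This keeps the "ignore errors" behavior away from every other HTTPS call the application makes, which matters when the data in question is financial.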

    Read the article

  • Developer Dashboard in SharePoint 2010

    - by jcortez
    Introducing the Developer Dashboard As a SharePoint developer (or IT Professional), how many times have you had the pleasure of figuring out why a particular page on your site is taking too long to render? I'm sure one of the techniques you have employed in troubleshooting is the process of elimination - removing individual web parts from the page hoping to identify which web part is misbehaving. One of the new features of SharePoint 2010 is the Developer Dashboard. This dashboard provides tracing and performance information that can be useful when you are trying to troubleshoot pages that are loading too slowly. The Developer Dashboard is turned off by default and I'll go over 3 different ways to display it. Here is a screenshot of what the Developer Dashboard looks like when displayed at the bottom of the page:   You can see on the left side the different events that fired during the page processing pipeline and how long these events took. This is where you will see individual web parts being processed and how long it took to complete (obviously the kind of processing depends on what the web part does). On the right side you would see the different database calls issued through the SharePoint Object Model to process the page. You will notice that each of these database queries is actually a hyperlink and clicking on it displays a pop-up window that shows the actual SQL Query Text, the Call Stack that triggered the database call, and the IO statistics of that query. Enabling the Developer Dashboard Option 1: Managed Code   The Developer Dashboard is a farm-wide setting and the code above won't work if it is used within a web part hosted on any non-Central Admin site. The SPDeveloperDashboardLevel enum has three possible values: On, Off, and OnDemand. Setting it to On will always display the Developer Dashboard at the bottom of the page. Setting it Off will hide the Developer Dashboard. Setting it to OnDemand will add an icon at the top right corner of the page (see screenshot below) where a Site Collection Admin can toggle the display of the Developer Dashboard for a particular site collection. In my opinion, OnDemand is the best setting when troubleshooting a page or during development since a Site Collection Admin can turn it on or off for a particular site only. The first cool thing about this is that the Site Collection Admin that turned it on will be the only one to see the Developer Dashboard output. Everyday users won't see the Developer Dashboard output even if it was turned on by a Site Collection Admin. If you need more flexibility on who gets to see the Developer Dashboard output, you can set the SPDeveloperDashboardSettings.RequiredPermissions to control which group of users will have the permission to see the output. Option 2: Using stsadm Using stsadm, you can run the following command to configure the Developer Dashboard: STSADM -o setproperty -pn developer-dashboard -pv OnDemand To successfully execute this command, be sure that you are running as a Farm Admin. Option 3: Using PowerShell For all scripts in SharePoint 2010, I prefer writing them as PowerShell scripts. Though the stsadm command is less verbose, the PowerShell equivalent is pretty straightforward and uses the SharePoint Object Model: You can of course parameterize the value that gets assigned to the DisplayLevel property so you can turn it On, Off or OnDemand depending on the parameter. 
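
    As a hedged C# sketch of the managed-code option described above (the exact snippet isn't reproduced in this excerpt; treat the surrounding class and the farm-admin console context as assumptions):

        using Microsoft.SharePoint.Administration;

        class DeveloperDashboardConfig
        {
            static void Main()
            {
                // The dashboard is configured farm-wide on the content web service.
                SPDeveloperDashboardSettings settings =
                    SPWebService.ContentService.DeveloperDashboardSettings;

                // OnDemand adds the toggle icon for Site Collection Admins.
                settings.DisplayLevel = SPDeveloperDashboardLevel.OnDemand;
                settings.Update();
            }
        }
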
Events and the Developer Dashboard  Now, don't assume that all the code inside your web part or page will show up in the Developer Dashboard complete with all the great troubleshooting information. Only a finite set of events are monitored by default (for a web part it will be events in the base web part class). Let's say you have a click event that could take some time, for example a web service call. And you want to include troubleshooting information for this event in the Developer Dashboard. Enter SPMonitoredScope, which is also a new feature in SharePoint 2010. In SharePoint 2010, everything is executed within a "Monitored Scope". And each scope has a set of "Monitors" that measure and count calls and timings, which appear in the Developer Dashboard. Below is an example of how to get your custom code included in the Developer Dashboard by wrapping it inside a new monitored scope. The wrapped code would include your new scope "My long web service call" in the Developer Dashboard and would log the time it took to complete processing. In my opinion, wrapping your custom code in a SPMonitoredScope is a SharePoint development best practice since it provides you visibility and a better understanding of the performance of your components.
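
    A rough C# sketch of the SPMonitoredScope pattern just described; the web part class and the slow web service call are placeholders:

        using System;
        using Microsoft.SharePoint.Utilities;

        public class MyWebPart : System.Web.UI.WebControls.WebParts.WebPart
        {
            protected void Button_Click(object sender, EventArgs e)
            {
                // Everything inside the scope is timed and listed in the
                // Developer Dashboard under the name given here.
                using (new SPMonitoredScope("My long web service call"))
                {
                    CallSlowWebService(); // placeholder for the long-running work
                }
            }

            private void CallSlowWebService()
            {
                // ...the web service call the post talks about...
            }
        }
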

    Read the article

  • Guarding against CSRF Attacks in ASP.NET MVC2

    - by srkirkland
    Alongside XSS (Cross Site Scripting) and SQL Injection, Cross-site Request Forgery (CSRF) attacks represent the three most common and dangerous vulnerabilities to common web applications today. CSRF attacks are probably the least well known but they are relatively easy to exploit and extremely and increasingly dangerous. For more information on CSRF attacks, see these posts by Phil Haack and Steve Sanderson. The recognized solution for preventing CSRF attacks is to put a user-specific token as a hidden field inside your forms, then check that the right value was submitted. It's best to use a random value which you’ve stored in the visitor’s Session collection or in a Cookie (so an attacker can't guess the value). ASP.NET MVC to the rescue ASP.NET MVC provides an HTMLHelper called AntiForgeryToken(). When you call <%= Html.AntiForgeryToken() %> in a form on your page you will get a hidden input and a Cookie with a random string assigned. Next, on your target Action you need to include [ValidateAntiForgeryToken], which handles the verification that the correct token was supplied. Good, but we can do better Using the AntiForgeryToken is actually quite an elegant solution, but adding [ValidateAntiForgeryToken] on all of your POST methods is not very DRY, and worse can be easily forgotten. Let's see if we can make this easier on the programmer by moving from an "Opt-In" model of protection to an "Opt-Out" model. Using AntiForgeryToken by default In order to mandate the use of the AntiForgeryToken, we're going to create an ActionFilterAttribute which will do the anti-forgery validation on every POST request. First, we need to create a way to Opt-Out of this behavior, so let's create a quick action filter called BypassAntiForgeryToken: [AttributeUsage(AttributeTargets.Method, AllowMultiple=false)] public class BypassAntiForgeryTokenAttribute : ActionFilterAttribute { } Now we are ready to implement the main action filter which will force anti forgery validation on all post actions within any class it is defined on: [AttributeUsage(AttributeTargets.Class, AllowMultiple = false)] public class UseAntiForgeryTokenOnPostByDefault : ActionFilterAttribute { public override void OnActionExecuting(ActionExecutingContext filterContext) { if (ShouldValidateAntiForgeryTokenManually(filterContext)) { var authorizationContext = new AuthorizationContext(filterContext.Controller.ControllerContext);   //Use the authorization of the anti forgery token, //which can't be inherited from because it is sealed new ValidateAntiForgeryTokenAttribute().OnAuthorization(authorizationContext); }   base.OnActionExecuting(filterContext); }   /// <summary> /// We should validate the anti forgery token manually if the following criteria are met: /// 1. The http method must be POST /// 2. There is not an existing [ValidateAntiForgeryToken] attribute on the action /// 3. There is no [BypassAntiForgeryToken] attribute on the action /// </summary> private static bool ShouldValidateAntiForgeryTokenManually(ActionExecutingContext filterContext) { var httpMethod = filterContext.HttpContext.Request.HttpMethod;   //1. The http method must be POST if (httpMethod != "POST") return false;   // 2. There is not an existing anti forgery token attribute on the action var antiForgeryAttributes = filterContext.ActionDescriptor.GetCustomAttributes(typeof(ValidateAntiForgeryTokenAttribute), false);   if (antiForgeryAttributes.Length > 0) return false;   // 3. 
There is no [BypassAntiForgeryToken] attribute on the action var ignoreAntiForgeryAttributes = filterContext.ActionDescriptor.GetCustomAttributes(typeof(BypassAntiForgeryTokenAttribute), false);   if (ignoreAntiForgeryAttributes.Length > 0) return false;   return true; } } The code above is pretty straight forward -- first we check to make sure this is a POST request, then we make sure there aren't any overriding *AntiForgeryTokenAttributes on the action being executed. If we have a candidate then we call the ValidateAntiForgeryTokenAttribute class directly and execute OnAuthorization() on the current authorization context. Now on our base controller, you could use this new attribute to start protecting your site from CSRF vulnerabilities. [UseAntiForgeryTokenOnPostByDefault] public class ApplicationController : System.Web.Mvc.Controller { }   //Then for all of your controllers public class HomeController : ApplicationController {} What we accomplished If your base controller has the new default anti-forgery token attribute on it, when you don't use <%= Html.AntiForgeryToken() %> in a form (or of course when an attacker doesn't supply one), the POST action will throw the descriptive error message "A required anti-forgery token was not supplied or was invalid". Attack foiled! In summary, I think having an anti-CSRF policy by default is an effective way to protect your websites, and it turns out it is pretty easy to accomplish as well. Enjoy!
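
    As a quick usage sketch of the pieces above: any controller derived from ApplicationController gets the check for free as long as its forms render <%= Html.AntiForgeryToken() %>, while an individual action can still opt out. The action names here are invented for illustration:

        using System.Web.Mvc;

        public class AccountController : ApplicationController
        {
            // Covered by the base-class filter: this POST fails with the
            // descriptive error unless the form rendered the anti-forgery token.
            [HttpPost]
            public ActionResult UpdateProfile(FormCollection form)
            {
                // ...save changes...
                return RedirectToAction("Index");
            }

            // Explicitly opted out, e.g. an endpoint posted to by an external system.
            [HttpPost, BypassAntiForgeryToken]
            public ActionResult ExternalCallback(string payload)
            {
                return new EmptyResult();
            }
        }
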

    Read the article

  • SQL SERVER – Signal Wait Time Introduction with Simple Example – Wait Type – Day 2 of 28

    - by pinaldave
    In this post, let’s delve a bit more in depth regarding wait stats. The very first question: when do the wait stats occur? Here is the simple answer. When SQL Server is executing any task, and if for any reason it has to wait for resources to execute the task, this wait is recorded by SQL Server with the reason for the delay. Later on we can analyze these wait stats to understand the reason the task was delayed and maybe we can eliminate the wait for SQL Server. It is not always possible to remove the wait type 100%, but there are a few suggestions that can help. Before we continue learning about wait types and wait stats, we need to understand three important milestones of the query life-cycle. Running - a query which is being executed on a CPU is called a running query. This query is responsible for CPU time. Runnable – a query which is ready to execute and waiting for its turn to run is called a runnable query. This query is responsible for Signal Wait time. (In other words, the query is ready to run but CPU is servicing another query). Suspended – a query which is waiting due to any reason (to know the reason, we are learning wait stats) to be converted to runnable is a suspended query. This query is responsible for wait time. (In other words, this is the time we are trying to reduce). In simple words, query execution time is a summation of the query Executing CPU Time (Running) + Query Wait Time (Suspended) + Query Signal Wait Time (Runnable). Again, it may be possible a query goes through all these states multiple times. Let us try to understand the whole thing with a simple analogy of a taxi and a passenger. Two friends, Tom and Danny, go to the mall together. When they leave the mall, they decide to take a taxi. Tom and Danny both stand in the line waiting for their turn to get into the taxi. This is the Signal Wait Time as they are ready to get into the taxi but the taxis are currently serving other customers and they have to wait for their turn. In other words, they are in a runnable state. Now when it is their turn to get into the taxi, the taxi driver informs them he does not take credit cards and only cash is accepted. Neither Tom nor Danny has enough cash, so they both cannot get into the vehicle. Tom waits outside in the queue and Danny goes to an ATM to fetch the cash. During this time the taxi cannot wait, they have to let other passengers get into the taxi. As Tom and Danny both are outside in the queue, this is the Query Wait Time and they are in the suspended state. They cannot do anything till they get the cash. Once Danny gets the cash, they are both standing in the line again, creating one more Signal Wait Time. This time when their turn comes they can pay the taxi driver in cash and reach their destination. The time taken for the taxi to get from the mall to the destination is running time (CPU time) and the taxi is running. I hope this analogy makes the wait stats a bit clearer. You can check the signal wait stats using the following query from Glenn Berry. -- Signal Waits for instance SELECT CAST(100.0 * SUM(signal_wait_time_ms) / SUM (wait_time_ms) AS NUMERIC(20,2)) AS [%signal (cpu) waits], CAST(100.0 * SUM(wait_time_ms - signal_wait_time_ms) / SUM (wait_time_ms) AS NUMERIC(20,2)) AS [%resource waits] FROM sys.dm_os_wait_stats OPTION (RECOMPILE); Higher signal wait stats are not good for the system; a very high value indicates CPU pressure. In my experience, when systems are running smoothly and without any glitch the signal wait stat is lower than 20%. 
Again, this number can be debated (and it is from my experience and is not documented anywhere). In other words, lower is better and higher is not good for the system. In future articles we will discuss in detail the various wait types and wait stats and their resolution. Read all the post in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • Using jQuery and OData to Insert a Database Record

    - by Stephen Walther
    In my previous blog entry, I explored two ways of inserting a database record using jQuery. We added a new Movie to the Movie database table by using a generic handler and by using a WCF service. In this blog entry, I want to take a brief look at how you can insert a database record using OData. Introduction to OData The Open Data Protocol (OData) was developed by Microsoft to be an open standard for communicating data across the Internet. Because the protocol is compatible with standards such as REST and JSON, the protocol is particularly well suited for Ajax. OData has undergone several name changes. It was previously referred to as Astoria and ADO.NET Data Services. OData is used by Sharepoint Server 2010, Azure Storage Services, Excel 2010, SQL Server 2008, and project code name “Dallas.” Because OData is being adopted as the public interface of so many important Microsoft technologies, it is a good protocol to learn. You can learn more about OData by visiting the following websites: http://www.odata.org http://msdn.microsoft.com/en-us/data/bb931106.aspx When using the .NET framework, you can easily expose database data through the OData protocol by creating a WCF Data Service. In this blog entry, I will create a WCF Data Service that exposes the Movie database table. Create the Database and Data Model The MoviesDB database is a simple database that contains the following Movies table: You need to create a data model to represent the MoviesDB database. In this blog entry, I use the ADO.NET Entity Framework to create my data model. However, WCF Data Services and OData are not tied to any particular OR/M framework such as the ADO.NET Entity Framework. For details on creating the Entity Framework data model for the MoviesDB database, see the previous blog entry. Create a WCF Data Service You create a new WCF Service by selecting the menu option Project, Add New Item and selecting the WCF Data Service item template (see Figure 1). Name the new WCF Data Service MovieService.svc. Figure 1 – Adding a WCF Data Service Listing 1 contains the default code that you get when you create a new WCF Data Service. There are two things that you need to modify. Listing 1 – New WCF Data Service File using System; using System.Collections.Generic; using System.Data.Services; using System.Data.Services.Common; using System.Linq; using System.ServiceModel.Web; using System.Web; namespace WebApplication1 { public class MovieService : DataService< /* TODO: put your data source class name here */ > { // This method is called only once to initialize service-wide policies. public static void InitializeService(DataServiceConfiguration config) { // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc. // Examples: // config.SetEntitySetAccessRule("MyEntityset", EntitySetRights.AllRead); // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } } } First, you need to replace the comment /* TODO: put your data source class name here */ with a class that represents the data that you want to expose from the service. In our case, we need to replace the comment with a reference to the MoviesDBEntities class generated by the Entity Framework. Next, you need to configure the security for the WCF Data Service. By default, you cannot query or modify the movie data. We need to update the Entity Set Access Rule to enable us to insert a new database record. 
The updated MovieService.svc is contained in Listing 2: Listing 2 – MovieService.svc using System.Data.Services; using System.Data.Services.Common; namespace WebApplication1 { public class MovieService : DataService<MoviesDBEntities> { public static void InitializeService(DataServiceConfiguration config) { config.SetEntitySetAccessRule("Movies", EntitySetRights.AllWrite); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } } } That’s all we have to do. We can now insert a new Movie into the Movies database table by posting a new Movie to the following URL: /MovieService.svc/Movies The request must be a POST request. The Movie must be represented as JSON. Using jQuery with OData The HTML page in Listing 3 illustrates how you can use jQuery to insert a new Movie into the Movies database table using the OData protocol. Listing 3 – Default.htm <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>jQuery OData Insert</title> <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script> <script src="Scripts/json2.js" type="text/javascript"></script> </head> <body> <form> <label>Title:</label> <input id="title" /> <br /> <label>Director:</label> <input id="director" /> </form> <button id="btnAdd">Add Movie</button> <script type="text/javascript"> $("#btnAdd").click(function () { // Convert the form into an object var data = { Title: $("#title").val(), Director: $("#director").val() }; // JSONify the data var data = JSON.stringify(data); // Post it $.ajax({ type: "POST", contentType: "application/json; charset=utf-8", url: "MovieService.svc/Movies", data: data, dataType: "json", success: insertCallback }); }); function insertCallback(result) { // unwrap result var newMovie = result["d"]; // Show primary key alert("Movie added with primary key " + newMovie.Id); } </script> </body> </html> jQuery does not include a JSON serializer. Therefore, we need to include the JSON2 library to serialize the new Movie that we wish to create. The Movie is serialized by calling the JSON.stringify() method: var data = JSON.stringify(data); You can download the JSON2 library from the following website: http://www.json.org/js.html The jQuery ajax() method is called to insert the new Movie. Notice that both the contentType and dataType are set to use JSON. The jQuery ajax() method is used to perform a POST operation against the URL MovieService.svc/Movies. Because the POST payload contains a JSON representation of a new Movie, a new Movie is added to the database table of Movies. When the POST completes successfully, the insertCallback() method is called. The new Movie is passed to this method. The method simply displays the primary key of the new Movie: Summary The OData protocol (and its enabling technology named WCF Data Services) works very nicely with Ajax. By creating a WCF Data Service, you can quickly expose your database data to an Ajax application by taking advantage of open standards such as REST, JSON, and OData. In the next blog entry, I want to take a closer look at how the OData protocol supports different methods of querying data.

    Read the article

  • Install the Ajax Control Toolkit from NuGet

    - by Stephen Walther
    The Ajax Control Toolkit is now available from NuGet. This makes it super easy to add the latest version of the Ajax Control Toolkit to any Web Forms application. If you haven’t used NuGet yet, then you are missing out on a great tool which you can use with Visual Studio to add new features to an application. You can use NuGet with both ASP.NET MVC and ASP.NET Web Forms applications. NuGet is compatible with both Websites and Web Applications and it works with both C# and VB.NET applications. For example, I habitually use NuGet to add the latest version of ELMAH, Entity Framework, jQuery, jQuery UI, and jQuery Templates to applications that I create. To download NuGet, visit the NuGet website at: http://NuGet.org Imagine, for example, that you want to take advantage of the Ajax Control Toolkit RoundedCorners extender to create cross-browser compatible rounded corners in a Web Forms application. Follow these steps. Right click on your project in the Solution Explorer window and select the option Add Library Package Reference. In the Add Library Package Reference dialog, select the Online tab and enter AjaxControlToolkit in the search box: Click the Install button and the latest version of the Ajax Control Toolkit will be installed. Installing the Ajax Control Toolkit makes several modifications to your application. First, a reference to the Ajax Control Toolkit is added to your application. In a Web Application Project, you can see the new reference in the References folder: Installing the Ajax Control Toolkit NuGet package also updates your Web.config file. The tag prefix ajaxToolkit is registered so that you can easily use Ajax Control Toolkit controls within any page without adding a @Register directive to the page. <configuration> <system.web> <compilation debug="true" targetFramework="4.0" /> <pages> <controls> <add tagPrefix="ajaxToolkit" assembly="AjaxControlToolkit" namespace="AjaxControlToolkit" /> </controls> </pages> </system.web> </configuration> You should do a rebuild of your application by selecting the Visual Studio menu option Build, Rebuild Solution so that Visual Studio picks up on the new controls (You won’t get Intellisense for the Ajax Control Toolkit controls until you do a build). After you add the Ajax Control Toolkit to your application, you can start using any of the 40 Ajax Control Toolkit controls in your application (see http://www.asp.net/ajax/ajaxcontroltoolkit/samples/ for a reference for the controls). <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="WebForm1.aspx.cs" Inherits="WebApplication1.WebForm1" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>Rounded Corners</title> <style type="text/css"> #pnl1 { background-color: gray; width: 200px; color:White; font: 14pt Verdana; } #pnl1_contents { padding: 10px; } </style> </head> <body> <form id="form1" runat="server"> <div> <asp:Panel ID="pnl1" runat="server"> <div id="pnl1_contents"> I have rounded corners! </div> </asp:Panel> <ajaxToolkit:ToolkitScriptManager ID="sm1" runat="server" /> <ajaxToolkit:RoundedCornersExtender TargetControlID="pnl1" runat="server" /> </div> </form> </body> </html> The page contains the following three controls: Panel – The Panel control named pnl1 contains the content which appears with rounded corners. ToolkitScriptManager – Every page which uses the Ajax Control Toolkit must contain a single ToolkitScriptManager. 
The ToolkitScriptManager loads all of the JavaScript files used by the Ajax Control Toolkit. RoundedCornersExtender – This Ajax Control Toolkit extender targets the Panel control. It makes the Panel control appear with rounded corners. You can control the “roundiness” of the corners by modifying the Radius property. Notice that you get Intellisense when typing the Ajax Control Toolkit tags. As soon as you type <ajaxToolkit, all of the available Ajax Control Toolkit controls appear: When you open the page in a browser, then the contents of the Panel appears with rounded corners. The advantage of using the RoundedCorners extender is that it is cross-browser compatible. It works great with Internet Explorer, Opera, Firefox, Chrome, and Safari even though different browsers implement rounded corners in different ways. The RoundedCorners extender even works with an ancient browser such as Internet Explorer 6. Getting the Latest Version of the Ajax Control Toolkit The Ajax Control Toolkit continues to evolve at a rapid pace. We are hard at work at fixing bugs and adding new features to the project. We plan to have a new release of the Ajax Control Toolkit each month. The easiest way to get the latest version of the Ajax Control Toolkit is to use NuGet. You can open the NuGet Add Library Package Reference dialog at any time to update the Ajax Control Toolkit to the latest version.

    Read the article

  • Associating File Types with AutoVue Desktop Deployment

    - by [email protected]
    Windows users take for granted that when they double click on a document or design, that it will open up in its application automatically. One of the questions I'm commonly asked is "How can I get the same behavior with AutoVue Desktop Deployment?". It's pretty easy, but there are a few tricks to doing it. Step 1: Download new jvue_direct.bat and icon The first thing you'll need to do is download a slightly modified version of jvue_direct.bat. You can find it here (Document 1075784.1) on Oracle's Support Portal. You also want to download the AV.ico file. This is the icon that will be used for all file types associated with AutoVue. Place both of these files in your <AutoVueInstallDirectory>\bin directory. Step 2: Associate File Types With AutoVue There are two ways to do this. You can do this through the Windows user interface, or you can set up a batch file to do this. Associating File Types Through Windows The way most people associate file types to an application is using the Windows user interface. You've probably tried to open a file type that Windows doesn't recognize and seen this window pop up: Although you can use this dialog to associate that file type with AutoVue, I don't recommend it. I much prefer using a batch file to associate file types with AutoVue. Associating File Types Using A Batch File There are a few good reasons to associate file types using a batch file instead of using the pop-up dialog method: If you have several file types to associate with AutoVue, it's much easier to use a batch file to do them all at once. Doing it through the Windows user interface requires having files of each type available. Using a batch file doesn't require having the files you're associating. Associating file types through the dialog may work well for one person, but what if you're an administrator doing an enterprise wide deployment of AutoVue Desktop Deployment for several hundred users? You don't want to do this manually for each user. You can have one simple batch file that's run on each user's PC to set up all the file types. You can easily associate an icon with the file types you're opening with AutoVue. To use the batch file method follow these steps: Create a file called filetype.bat using a text editor and copy and paste the following into it: @assoc .dwg=AVFile @assoc .jpg=AVFile @assoc .doc=AVFile @ftype AVFile="%~dp0jvue_direct.bat" "%%1" @reg add HKEY_CLASSES_ROOT\AVFile\DefaultIcon /v "" /f /d "%~dp0AV.ico" Change the lines starting with @assoc. Each of these lines associates a file extension with AutoVue. You can have as many @assoc lines as you want. Save this file in your <AutoVueInstallDirectory>\bin directory. Double click this file, or run it from a command prompt. Restart Windows to get the icons to show up. How Does This Work? The first three lines are creating a file type called AVFile. We are associating the extensions .dwg, .jpg, and .doc with this file type. You will want to change these lines when creating your own batch file. For example, to associate Microstation designs, which have extension .dgn, you should delete the @assoc lines above and add the line: @assoc .dgn=AVfile The line beginning with @ftype tells Windows that all AVFile type files should be opened using AutoVue Desktop Deployment. The final line associates the AutoVue icon with these file types. You may need to restart Windows to see the new icons. 
Warning: One Size Doesn't Fit All

When deciding which file types should be associated with AutoVue, remember that there are different types of users using it. Your engineers may be pretty surprised to find that after installing AutoVue, double clicking their .dwg file opens up AutoVue instead of AutoCAD. If you have more than one type of AutoVue user, make sure you've considered what file types each user group will and will not want to be associated with AutoVue. If necessary, create a separate file association batch file for each user type (a sketch of one is shown below). So that's it. In two simple steps you can double click your favorite designs and have them open automatically in AutoVue Desktop Deployment. I'd love to hear how you are using AutoVue Desktop Deployment. What other deployment tips would you be interested in learning about?
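For example, a MicroStation-only association file for a design-engineering user group might look something like this (the extension list here is purely illustrative – use whatever that group actually opens in AutoVue):

@assoc .dgn=AVFile
@assoc .tif=AVFile
@ftype AVFile="%~dp0jvue_direct.bat" "%%1"
@reg add HKEY_CLASSES_ROOT\AVFile\DefaultIcon /v "" /f /d "%~dp0AV.ico"

Save it next to jvue_direct.bat in <AutoVueInstallDirectory>\bin and run it once on each PC in that group, just like the original filetype.bat.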

    Read the article

  • Developer’s Life – Summary of Superhero Articles

    - by Pinal Dave
Earlier this year, I wrote an article series where I talked about a developer’s life and compared it with superheroes. I have received an amazing response to this series, and I have been getting quite a lot of emails suggesting that I should write more blog posts about them. Currently I am not planning to write more posts in this series, but I will soon begin another series. In this blog post, I have summarized the entire series. Let me know if you want me to write about any superhero, and I will see what I can do about that hero.

Developer’s Life – Every Developer is a Captain America
Captain America was first created as a comic book character in the 1940’s as a way to boost morale during World War II.  Aimed at a children’s audience, he faded away when the war ended.  However, he has recently had a major reboot to become a popular movie character that deals with modern issues.

Developer’s Life – Every Developer is the Incredible Hulk
The Incredible Hulk is possibly one of the scariest superheroes out there.  All superheroes are meant to be “out of this world” and awe-inspiring, but I think most people will agree when I say The Hulk takes this to the next level.  He is the result of an industrial accident, which is scary enough in its own right.  Plus, when mild-mannered Bruce Banner is angered, he goes completely out of control and transforms into a destructive monster that he cannot control and has no memories of.

Developer’s Life – Every Developer is a Wonder Woman
We have focused a lot lately on this “superhero series.”  I love fantasy books and movies, and I feel like there is a lot to be learned from them.  As I am writing this series, though, I have noticed that every super hero I write about is a man.  So today, I would like to talk about the major female super hero – Wonder Woman.

Developer’s Life – Every Developer is a Harry Potter
Harry Potter might not be a superhero in the traditional sense, but I believe he still has a lot to teach us and show us about life as a developer.  If you have been living under a rock for the last 17 years, you might not know that Harry Potter is the main character in an extremely popular series of books and movies documenting the education and tribulations of a young wizard (and his friends).

Developer’s Life – Every Developer is Like Transformers
Transformers may not be superheroes – they don’t wear capes, they don’t have amazing powers outside of their size and folding ability, they’re not even human (technically).  Part of their enduring popularity is that while we are enjoying over-the-top movies, we are learning about good leadership and strong personal skills.

Developer’s Life – Every Developer is an Iron Man
Iron Man is another superhero who is not naturally “super,” but relies on his brain (and money) to turn him into a fighting machine.  While traditional superheroes are still popular, a three-movie franchise and incorporation into the new Avengers series show that Iron Man is popular enough on his own.

Developer’s Life – Every Developer is a Sherlock Holmes
I have been thinking a lot about how developers are like super heroes, and I have written two blog posts now comparing them to Spiderman and Superman.  I have a lot of love and respect for developers, and I hope that they are enjoying these articles, and others are learning a little bit about the profession.  There is another fictional character who, while not technically a super hero, is very powerful, and I also think stands as a good example of a developer. That character is Sherlock Holmes.
Sherlock Holmes is a British detective, first made popular at the end of the 19th century by author Sir Arthur Conan Doyle.  The original Sherlock Holmes was a brilliant detective who could solve the most mind-boggling crime through simple observations and deduction.

Developer’s Life – Every Developer is a Chhota Bheem
Chhota Bheem is a cartoon character that is extremely popular where I live.  He is my daughter’s favorite character.  I like to say that children love Chhota Bheem more than their parents – it is lucky for us he is not real!  Children love Chhota Bheem because he is the absolute “good guy.”  He is smart, loyal, and strong.  He and his friends live in Dholakpur and fight off their many enemies – and always win – in every episode.  In each episode, they learn something about friendship, bravery, and being kind to others.  Chhota Bheem is a good role model for children, and I think that he is a good role model for developers as well.

Developer’s Life – Every Developer is a Batman
Batman is one of the darkest superheroes in the fantasy canon.  He does not come to his powers through any sort of magical coincidence or radioactive insect, but through a lot of psychological scarring caused by witnessing the death of his parents.  Despite his dark back story, he possesses a lot of admirable abilities that I feel bear comparison to developers.

Developer’s Life – Every Developer is a Superman
I enjoyed comparing developers to Spiderman so much that I have decided to continue the trend and encourage some of my favorite people (developers) with another favorite superhero – Superman.  Superman is probably the most famous superhero – and one of the most inspiring.

Developer’s Life – Every Developer is a Spiderman
I have to admit, Spiderman is my favorite superhero.  The most recent movie was recently released in theaters, so it has been at the front of my mind for some time. Spiderman was my favorite superhero even before the latest movie came out, but of course I took my whole family to see the movie as soon as I could!  Every one of us loved it, including my daughter.  We all left the movie thinking how great it would be to be Spiderman.  So, with that in mind, I started thinking about how we are like Spiderman in our everyday lives, especially developers.

I would like to know which superhero is your favorite hero!

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Developer, Superhero

    Read the article

  • Developer Training – Various Options for Maximum Benefit – Part 4

    - by pinaldave
Developer Training – Importance and Significance – Part 1
Developer Training – Employee Morals and Ethics – Part 2
Developer Training – Difficult Questions and Alternative Perspective – Part 3
Developer Training – Various Options for Developer Training – Part 4
Developer Training – A Conclusive Summary – Part 5

If you have been reading this series, by now you are aware of all the pros and cons that can come along with training.  We’ve asked and answered hard questions, and investigated the “whys” and “hows” of training.  Now it is time to talk about all the different kinds of training that are out there!

On Job Training

The most common type of training is on the job training.  Everyone receives this kind of education – even experts who come in to consult have to be taught where the printer, pens, and copy machines are.  If you are thinking about more concrete topics, though, on the job training can be some of the easiest to come across.  Picture this: someone in the company whom you really admire is hard at work on a project.  You come up to them and ask to help them out – if they are a busy developer, the odds are that they will say “yes, please!”  If you phrase your question as an offer of help, you can receive training without ever putting someone in the awkward position of acting as a mentor.  However, some people may want the task of being a mentor.  It can never hurt to ask.  Most people will be more than willing to pass their knowledge along.

Extreme Programming

If your company and coworkers are willing, you can even investigate Extreme Programming.  This is a type of programming that allows small teams to quickly develop code and products that are released with almost immediate user feedback.  You can find more information at http://www.extremeprogramming.org/.  If this is something your company could use, suggest it to your supervisor.  Even if they say no, it will make it clear that you are a go-getter who is interested in new and exciting projects.  If the answer is yes, then you have the opportunity to get some of the best on the job training around.

In Person Training

When you say the word “training,” most people’s minds go back to the classroom, an image they are familiar with.  While training doesn’t always have to be in a traditional setting, because it is so familiar it can also be the most valuable type of training.  There are many ways to get training through a live instructor.  Some companies may be willing to send a representative to you, where employees will get training, sometimes food and coffee, and a live instructor who can answer questions immediately.  Sometimes these trainers are also able to do consultations at the same time, which can be invaluable to a company.  If you are the one who asks your supervisor for a training session that can also be turned into a consultation, you may stick in their minds as an incredibly dedicated employee.  If you can’t find a representative, local colleges can also be a good resource for free or cheap classes – or they may have representatives coming who are willing to take on a few more students.

Benefits of On Demand Developer Training

Of course, you can often get the best of all these types of training with online or On Demand training.
You get the benefit of a live instructor who is willing to answer questions (although in this case, usually through e-mail or other online venues); there are often real-world examples to follow along with – like on the job training – and, best of all, you can learn whenever you have the time or the need.  Did a problem with your server come up at midnight when all your supervisors are safe at home and probably in bed?  No problem!  On Demand training is especially useful if you need to slow down, pause, or rewind a training session.  Not even a real-life instructor can do that!

When I was writing this blog post, I felt that each of the subjects I have covered could be a blog post of its own. However, I wanted to keep the blog post concise, so I only touched on three major training aspects: 1) on job training, 2) in person training and 3) online training. Here is the question for you – are there any other kinds of training methods which are effective and that one should consider? If yes, what are they? I may write a follow-up blog post on the same subject next week.

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 1

    - by rajbk
The Open Data Protocol, referred to as OData, is a new data-sharing standard that breaks down silos and fosters an interoperable ecosystem for data consumers (clients) and producers (services) that is far more powerful than currently possible. It enables more applications to make sense of a broader set of data, and helps every data service and client add value to the whole ecosystem. WCF Data Services (previously known as ADO.NET Data Services) was the first Microsoft technology to support the Open Data Protocol, in Visual Studio 2008 SP1. It provides developers with client libraries for .NET, Silverlight, AJAX, PHP and Java. Microsoft now also supports OData in SQL Server 2008 R2, Windows Azure Storage, Excel 2010 (through PowerPivot), and SharePoint 2010. Many other applications are in the works.*

This post walks you through how to create an OData feed, define a shape for the data and pre-filter the data using Visual Studio 2010, WCF Data Services and the Entity Framework. A sample project is attached at the bottom of Part 2 of this post: Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2

Create the Web Application

File –› New –› Project, select “ASP.NET Empty Web Application”.

Add the Entity Data Model

Right click on the Web Application in the Solution Explorer and select “Add New Item..”. Select “ADO.NET Entity Data Model” under “Data”. Name the Model “Northwind” and click “Add”. In the “Choose Model Contents” screen, select “Generate Model From Database” and click “Next”. Define a connection to your database containing the Northwind database in the next screen. We are going to expose the Products table through our OData feed, so select “Products” in the “Choose your Database Objects” screen. Click “Finish”. We are done creating our Entity Data Model. Save the Northwind.edmx file that was created.

Add the WCF Data Service

Right click on the Web Application in the Solution Explorer and select “Add New Item..”. Select “WCF Data Service” from the list and call the service “DataService” (creative, huh?). Click “Add”.

Enable Access to the Data Service

Open the DataService.svc.cs class. The class is well commented and instructs us on the next steps.

public class DataService : DataService< /* TODO: put your data source class name here */ >
{
    // This method is called only once to initialize service-wide policies.
    public static void InitializeService(DataServiceConfiguration config)
    {
        // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc.
        // Examples:
        // config.SetEntitySetAccessRule("MyEntityset", EntitySetRights.AllRead);
        // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}

Replace the comment that starts with “/* TODO:” with “NorthwindEntities” (the entity container name of the Model we created earlier). WCF Data Services is initially locked down by default, FTW! No data is exposed without you explicitly allowing it. You have to explicitly specify which entity sets you wish to expose, and what rights are allowed, by using SetEntitySetAccessRule. The SetServiceOperationAccessRule, on the other hand, sets rules for a specified operation. Let us define an access rule to expose the Products entity set we created earlier. We use EntitySetRights.AllRead since we want to give read-only access. Our modified code is shown below.
public class DataService : DataService<NorthwindEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("Products", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}

We are done setting up our OData feed! Compile your project. Right click on DataService.svc and select “View in Browser” to see the OData feed. To view the feed in IE, you must make sure that “Feed Reading View” is turned off. You set this under Tools -› Internet Options -› Content tab. If you navigate to “Products”, you should see the Products feed. Note also that URIs are case sensitive, i.e. Products works but products doesn’t.

Filtering our data

OData has a set of system query options you can use to perform common operations against data exposed by the model. For example, to see only Products in CategoryID 2, we can use the following request:

/DataService.svc/Products?$filter=CategoryID eq 2

At the time of this writing, the supported options are $orderby, $top, $skip, $filter, $expand, $format†, $select and $inlinecount.

Pre-filtering our data using Query Interceptors

The Products feed currently returns all Products. We want to change that so that it contains only Products that have not been discontinued. WCF introduces the concept of interceptors, which allow us to inject custom validation/policy logic into the request/response pipeline of a WCF data service. We will use a QueryInterceptor to pre-filter the data so that it returns only Products that are not discontinued. To create a QueryInterceptor, write a method that returns an Expression<Func<T, bool>> and mark it with the QueryInterceptor attribute as shown below.

[QueryInterceptor("Products")]
public Expression<Func<Product, bool>> OnReadProducts()
{
    return o => o.Discontinued == false;
}

Viewing the feed after compilation will only show products that have not been discontinued. We can also confirm this by looking at the WHERE clause in the SQL generated by the Entity Framework.

SELECT
[Extent1].[ProductID] AS [ProductID],
...
[Extent1].[Discontinued] AS [Discontinued]
FROM [dbo].[Products] AS [Extent1]
WHERE 0 = [Extent1].[Discontinued]

Other examples of Query/Change interceptors can be seen here, including an example to filter data based on the identity of the authenticated user. We are done pre-filtering our data. In the next part of this post, we will see how to shape our data: Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2

Foot Notes

* http://msdn.microsoft.com/en-us/data/aa937697.aspx
† $format did not work for me. The way to get a JSON response is to include the header “Accept: application/json, text/javascript, */*” when making the request. This is easily done with most JavaScript libraries.
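As a side note, once the feed is running it can be consumed from any .NET client. Here is a minimal sketch, assuming a client proxy has been generated with Add Service Reference against DataService.svc (the NorthwindClient namespace, port number, and category value are illustrative, not part of the original walkthrough):

using System;
using System.Linq;

class FeedClient
{
    static void Main()
    {
        // The generated context class derives from DataServiceContext.
        var context = new NorthwindClient.NorthwindEntities(
            new Uri("http://localhost:1234/DataService.svc/"));

        // This LINQ query is translated into the $filter system query option,
        // i.e. /Products?$filter=CategoryID eq 2
        var query = from p in context.Products
                    where p.CategoryID == 2
                    select p;

        foreach (var product in query)
            Console.WriteLine(product.ProductName);

        // The OnReadProducts query interceptor still applies on the server, so
        // discontinued products never reach the client, whatever filter is used here.
    }
}

Because the interceptor runs on the server, clients cannot bypass it by crafting their own URIs.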

    Read the article

  • 8 Things You Can Do In Android’s Developer Options

    - by Chris Hoffman
    The Developer Options menu in Android is a hidden menu with a variety of advanced options. These options are intended for developers, but many of them will be interesting to geeks. You’ll have to perform a secret handshake to enable the Developer Options menu in the Settings screen, as it’s hidden from Android users by default. Follow the simple steps to quickly enable Developer Options. Enable USB Debugging “USB debugging” sounds like an option only an Android developer would need, but it’s probably the most widely used hidden option in Android. USB debugging allows applications on your computer to interface with your Android phone over the USB connection. This is required for a variety of advanced tricks, including rooting an Android phone, unlocking it, installing a custom ROM, or even using a desktop program that captures screenshots of your Android device’s screen. You can also use ADB commands to push and pull files between your device and your computer or create and restore complete local backups of your Android device without rooting. USB debugging can be a security concern, as it gives computers you plug your device into access to your phone. You could plug your device into a malicious USB charging port, which would try to compromise you. That’s why Android forces you to agree to a prompt every time you plug your device into a new computer with USB debugging enabled. Set a Desktop Backup Password If you use the above ADB trick to create local backups of your Android device over USB, you can protect them with a password with the Set a desktop backup password option here. This password encrypts your backups to secure them, so you won’t be able to access them if you forget the password. Disable or Speed Up Animations When you move between apps and screens in Android, you’re spending some of that time looking at animations and waiting for them to go away. You can disable these animations entirely by changing the Window animation scale, Transition animation scale, and Animator duration scale options here. If you like animations but just wish they were faster, you can speed them up. On a fast phone or tablet, this can make switching between apps nearly instant. If you thought your Android phone was speedy before, just try disabling animations and you’ll be surprised how much faster it can seem. Force-Enable FXAA For OpenGL Games If you have a high-end phone or tablet with great graphics performance and you play 3D games on it, there’s a way to make those games look even better. Just go to the Developer Options screen and enable the Force 4x MSAA option. This will force Android to use 4x multisample anti-aliasing in OpenGL ES 2.0 games and other apps. This requires more graphics power and will probably drain your battery a bit faster, but it will improve image quality in some games. This is a bit like force-enabling antialiasing using the NVIDIA Control Panel on a Windows gaming PC. See How Bad Task Killers Are We’ve written before about how task killers are worse than useless on Android. If you use a task killer, you’re just slowing down your system by throwing out cached data and forcing Android to load apps from system storage whenever you open them again. Don’t believe us? Enable the Don’t keep activities option on the Developer options screen and Android will force-close every app you use as soon as you exit it. 
Enable this app and use your phone normally for a few minutes — you’ll see just how harmful throwing out all that cached data is and how much it will slow down your phone. Don’t actually use this option unless you want to see how bad it is! It will make your phone perform much more slowly — there’s a reason Google has hidden these options away from average users who might accidentally change them. Fake Your GPS Location The Allow mock locations option allows you to set fake GPS locations, tricking Android into thinking you’re at a location where you actually aren’t. Use this option along with an app like Fake GPS location and you can trick your Android device and the apps running on it into thinking you’re at locations where you actually aren’t. How would this be useful? Well, you could fake a GPS check-in at a location without actually going there or confuse your friends in a location-tracking app by seemingly teleporting around the world. Stay Awake While Charging You can use Android’s Daydream Mode to display certain apps while charging your device. If you want to force Android to display a standard Android app that hasn’t been designed for Daydream Mode, you can enable the Stay awake option here. Android will keep your device’s screen on while charging and won’t turn it off. It’s like Daydream Mode, but can support any app and allows users to interact with them. Show Always-On-Top CPU Usage You can view CPU usage data by toggling the Show CPU usage option to On. This information will appear on top of whatever app you’re using. If you’re a Linux user, the three numbers on top probably look familiar — they represent the system load average. From left to right, the numbers represent your system load over the last one, five, and fifteen minutes. This isn’t the kind of thing you’d want enabled most of the time, but it can save you from having to install third-party floating CPU apps if you want to see CPU usage information for some reason. Most of the other options here will only be useful to developers debugging their Android apps. You shouldn’t start changing options you don’t understand. If you want to undo any of these changes, you can quickly erase all your custom options by sliding the switch at the top of the screen to Off.     
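As a concrete illustration of the ADB workflow mentioned above, a session might look roughly like this (a sketch only; the file names, paths, and backup file are illustrative, and the commands assume USB debugging is enabled and the device has authorized the computer):

adb devices                                          # confirm the device shows up as "device", not "unauthorized"
adb push report.pdf /sdcard/Download/report.pdf      # copy a file from the computer to the phone
adb pull /sdcard/Download/report.pdf copy-of-report.pdf   # copy it back again
adb backup -apk -shared -all -f mybackup.ab          # full local backup; uses the desktop backup password if one is set
adb restore mybackup.ab                              # restore that backup later

Each command is typed on the computer, not the phone, with the device connected over USB.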

    Read the article

  • If You Could Cut Your Meeting Times in ½ Would You?

    - by Brian Dayton
                    I know it sounds like a big promise. And what I'm thinking about may not cut a :60 minute meeting into :30 minutes, but it could make meetings and interactions up to 2X more productive. How? Social Media for the Enterprise, Not Social Media In the Enterprise Bear with me. I'm not talking about whether or not workers should or shouldn't have access to Facebook on corporate networks. That topic has been discussed @ length. I'm also not talking about the direct benefits of Social Networking tools like Presence (the ability to see someone online and ask a question in real-time), blogs, RSS feeds or external tools like Twitter. The Un-Measurable Benefits Would you do something that you believe will have a positive effect--but can't be measured? It's impossible to quantify the effectiveness of a meeting. However, what I am talking about would be more of a byproduct of all of the social networking tools above. Here's the hypothesis: As I've gotten more and more busy with work, family, travel and kids--and the same has happened to my friends and family--I'm less and less connected. But by introducing Facebook to my life I've not only made connections with longtime friends whom I haven't spoken to in years--but I've increased the pace and quality of interactions, on and offline, with close friends who I see and speak to every week. In some cases it even enhances the connections and interactions with those I see or speak to every day. The same holds true in an organization. Especially a larger one with highly matrixed organizational structures. You work with people on a project, new people come in with each different project and a disproportionate amount of time is spent getting oriented and staying current. Going back to the initial value proposition--making meetings shorter/more effective--a large amount of time is spent: -          At Project Kick-off: Meeting and understanding team member's histories, goals & roles -          Ongoing: Summarizing events since the last meeting or update email In my personal, Facebook life today I know that: -          My best friend from college - has been stranded in India for 5 days because of the volcano in Iceland and is now only 250 miles from home -          One of my co-workers started conference calls at 6:30 this morning -          My wife wasn't terribly pleased with my painting skills in our new bathroom (disclosure: she told me this face to face too) Strengthening Weak Links A recent article in CIO Magazine, Three Dangerous Social Media Misconceptions (Kristen Burnham, March 12, 2010) calls out the #1 misconception as follows: 1. "Face-to-face relationships are far more valuable than virtual ones." While some level of physical interaction will always add value to relationships, Gartner says that come 2020, most relationships and teams will be based on "weak links"--that is, you may not have personally met a contact, but you'll know of or may have interacted with him via social sites like Facebook, LinkedIn and Twitter. The sooner your enterprise adopts these tools, the sooner your employees will learn them, and the sooner you'll begin to cultivate these relationships-of-the-future.   I personally believe that it's not an either/or choice between face-to-face and virtual interactions. In fact, I'll be as bold as saying it doesn't matter. 
I can point to two extremely valuable work relationships that I've had over the past 5 years:

- I shared an office with one of them
- I met the other person, face-to-face, only once

Both relationships were very productive. The dynamics were similar. The communication tactics differed immensely. What does matter is the quality, frequency and relevance of interactions. Still sound like too much? An over-promise? Stay tuned for my next post, The Gap Between Facebook and LinkedIn. I'll also connect some of the dots with where Oracle Applications and technologies are headed.

    Read the article

  • Ubuntu 14.04 LTS, HPLIP loses USB connection to HP LaserJet

    - by Gareth
This is my first post, so please let me know if I have inadvertently broken any rules.

Problem

There seems to be a problem with HPLIP and USB connections in Ubuntu 14.04 LTS. After upgrading I managed to get printing to work, but today it has broken again.

Initial Issue (Solved)

After upgrading to Ubuntu 14.04 LTS my HP LaserJet 1018 printer stopped printing (code=12). Looking through the forums there are several known issues with printing and HPLIP, so I was able to troubleshoot this. The steps I took were:

- Re-ran hp-doctor
- Ran hp-check
- Uninstalled and installed the latest version of HPLIP (3.14.4)
- Checked the USB connections with lsusb and lsusb -v
- Re-ran hp-check
- Removed the printer from HPLIP
- Re-ran hp-check
- Manually configured HPLIP for the printer with hp-setup -g <xxx:yyy>

And this worked: HPLIP was able to see the printer on the USB bus, a test page printed, and it was happily working for a few weeks.

Current Issue: Printer Not Working

However, today my wife complained that the printer is not working. Checking, I see that HPLIP reports the same error code and does not seem to be able to see the printer, although running lsusb can see it. I initially thought this might be because the USB device was given a new bus/device number after being turned on and off, so I went to repeat the steps above, but at the moment I am still seeing an error: HPLIP complains that it cannot see the device.

error: Device not found. Please make sure your printer is properly connected and powered-on.

Current Observations

lsusb output:

Bus 002 Device 007: ID 03f0:4117 Hewlett-Packard LaserJet 1018

sudo hp-check output:

duan@duan-Lenovo-B550:~$ sudo hp-check
[sudo] password for duan:
Saving output in log file: /home/duan/hp-check.log
HP Linux Imaging and Printing System (ver. 3.14.4)
Dependency/Version Check Utility ver. 15.1
Copyright (c) 2001-13 Hewlett-Packard Development Company, LP
This software comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to distribute it under certain conditions. See COPYING file for more details.
Note: hp-check can be run in three modes:
1. Compile-time check mode (-c or --compile): Use this mode before compiling the HPLIP supplied tarball (.tar.gz or .run) to determine if the proper dependencies are installed to successfully compile HPLIP.
2. Run-time check mode (-r or --run): Use this mode to determine if a distro supplied package (.deb, .rpm, etc) or an already built HPLIP supplied tarball has the proper dependencies installed to successfully run.
3. Both compile- and run-time check mode (-b or --both) (Default): This mode will check both of the above cases (both compile- and run-time dependencies).
Full Output

Output of hp-setup -g 002:007 (a window box appears saying "device not found please make sure your printer is properly connected and powered on"):

duan@duan-Lenovo-B550:~$ sudo hp-setup -g 002:007
[sudo] password for duan:
HP Linux Imaging and Printing System (ver. 3.14.4) Printer/Fax Setup Utility ver. 9.0
Copyright (c) 2001-13 Hewlett-Packard Development Company, LP
This software comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to distribute it under certain conditions. See COPYING file for more details.
hp-setup[18461]: debug: param=002:007
hp-setup[18461]: debug: selected_device_name=None
Fontconfig error: "/etc/fonts/conf.d/65-khmer.conf", line 14: out of memory
Fontconfig error: "/etc/fonts/conf.d/65-khmer.conf", line 23: out of memory
Fontconfig error: "/etc/fonts/conf.d/65-khmer.conf", line 32: out of memory
hp-setup[18461]: debug: Sys.argv=['/usr/bin/hp-setup', '-g', '002:007'] printer_name=None param=002:007 jd_port=1 device_uri=None remove=False
Searching for device...
hp-setup[18461]: debug: Trying USB with bus=002 dev=007...
hp-setup[18461]: debug: Not found.
hp-setup[18461]: debug: Trying serial number 002:007
hp-setup[18461]: debug: Probing bus: usb
hp-setup[18461]: debug: Probing bus: par
error: Device not found. Please make sure your printer is properly connected and powered-on.
hp-setup[18461]: debug: Starting GUI loop...

Other observations:

- The USB lead works with the Windows 7 laptop.
- The printer works with the Windows 7 laptop.

Questions

1. Is this a bug with HPLIP or an issue with the laptop/printer?
2. Supplementary question: if it is a bug, what information is needed and where should it be sent?
3. Any suggestions on how to get the printer to work correctly with Ubuntu 14.04 LTS / HPLIP 3.14.4 so that it stays working?

    Read the article

  • Fixing up Visual Studio’s gitignore, using IFix

    - by terje
Originally posted on: http://geekswithblogs.net/terje/archive/2014/06/13/fixing-up-visual-studiorsquos-gitignore--using-ifix.aspx

Download tool

Is there anything wrong with the built-in Visual Studio gitignore? Yes, there is! First, some background: when you set up a git repo, it should be small and not contain anything not really needed. One thing you should not have in your git repo is binary files. These binary files may come from two sources. One is the output files, in the bin and obj folders. If you have a gitignore file present, which you should always have (!!), these folders are excluded by the standard included file (the one included when you choose Team Explorer/Settings/GitIgnore – Add). The other source is the packages folder coming from your NuGet setup. You do use NuGet, right? Of course you do! But that gitignore file doesn’t have any exclude clause for those folders. You have to add that manually. (It will very probably be included in some upcoming update or release.) This is one thing that is missing from the built-in gitignore. Adding those few lines is a no-brainer, you just include this:

# NuGet Packages
packages/*
*.nupkg
# Enable "build/" folder in the NuGet Packages folder since
# NuGet packages use it for MSBuild targets.
# This line needs to be after the ignore of the build folder
# (and the packages folder if the line above has been uncommented)
!packages/build/

Now, if you are like me, and you probably are, you add git repos faster than you can code, and you end up with a bunch of repos, and then start to wonder: did I fix up those gitignore files, or did I forget it?

The next thing you learn, for example by reading this blog post, is that the “standard” latest Visual Studio gitignore file exists at https://github.com/github/gitignore, where you can locate it under the file name VisualStudio.gitignore. Here you will find all the new stuff; for example, the exclusion of the Roslyn IDE folders was committed on May 24th. So, you think, all is well, Visual Studio will use this file… I am very sorry, it won’t. Visual Studio comes with a gitignore file that is baked into the release, and that is by this time “very old”. The one at github is the latest. The included gitignore misses the exclusion of the NuGet packages folder, and it also misses a lot of new stuff, like the Roslyn folders.

So, how do you fix this, while we wait for the next version? You can manually update it for every single repo you create, which works, but it does get boring after a few times, doesn’t it?

IFix

Enter IFix, install it from here. IFix is a command line utility (the installer adds it to the system path, though you might need to reboot), and one of its commands is gitignore. If you run it from a directory, it will check and optionally fix all gitignores in all git repos in that folder or below. So, start by running it from your C:/<user>/source/repos folder. To run it in check mode – which will not change anything, just do a check:

IFix gitignore --check

What it will do is check if the gitignore file is present, and if it is, check if the packages folder has been excluded. If you want to see those that are ok, add the --verbose option too. The result may look like this:

Fixing missing packages

Let us fix a single repo by adding the missing packages structure, using IFix --fix. We first check, then fix, then check again to verify that the gitignore is correct, and that the “packages/” part has been added.
If we open up the .gitignore, we see that the block shown below has been added to the end of the .gitignore file.

Comparing and fixing with the latest standard Visual Studio gitignore (from github)

Now, this tells you if you miss the NuGet packages folder, but what about the latest gitignore from github? You can check for this too, just add the option --merge (why it is named so will become clear further down). So:

IFix gitignore --check --merge

The result may come out like this (sorry, no colors, not got that far yet here): as you can see, one repo has the latest gitignore (test1), the others are missing either 57 or 150 lines. IFix has three ways to fix this:

--add
--merge
--replace

The options work as follows:

Add: Used to add the standard gitignore in the cases where a .gitignore file is missing, and only that; it won’t touch other existing gitignores.
Merge: Used to merge the missing lines from the standard into the gitignore file. If the gitignore file is missing, the whole standard will be added.
Replace: Used to force a complete replacement of the existing gitignore with the standard one.

The Add and Replace options can be used without Fix, which means they will actually do the action. If you combine them with --check, no files will be touched; it will just do a verification. So a Merge Check will tell you if there is any difference between the local gitignore and the standard gitignore, a Compare in effect. When you do a Fix Merge it will combine the local gitignore with the standard, and add what is missing to the end of the local gitignore. It may mean some things get doubled up if they are spelled a bit differently. You might also see some extra comments added, but they do no harm.

Init new repo with standard gitignore

One cool thing is that with a new repo, or a repo that is missing its gitignore, you can grab the latest standard just by using either the Add or the Replace command; both will in effect do the same in this case. So,

IFix gitignore --add

will add it in, as in the complete example below, where we set up a new git repo and add in the latest standard gitignore.

Notes

The project is open sourced at github, and you can also report issues there.
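Pulling the switches above together, a typical session from the folder containing all your repos might look something like this (a sketch based only on the options described in this post; check the tool's own help for the authoritative syntax):

IFix gitignore --check              (report repos whose .gitignore is missing the NuGet packages exclude)
IFix gitignore --check --verbose    (also list the repos that are already OK)
IFix gitignore --fix                (append the missing packages section where needed)
IFix gitignore --check --merge      (compare each .gitignore against the latest VisualStudio.gitignore on github)
IFix gitignore --fix --merge        (append whatever the standard file has that the local one lacks)
IFix gitignore --add                (drop the full standard .gitignore into repos that do not have one at all)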

    Read the article
