Search Results

Search found 483 results on 20 pages for 'dangerous'.

Page 5 of 20

  • The Incremental Architect's Napkin - #2 - Balancing the forces

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/02/the-incremental-architectacutes-napkin---2---balancing-the-forces.aspx

    Categorizing requirements is the prerequisite for economic architectural decisions. Not all requirements are created equal. However, to truly understand and describe the requirement forces pulling on software development, I think further examination of the requirements aspects is warranted.

    Aspects of Functionality: There are two sides to Functionality requirements. It's about what a software should do. I call that the Operations it implements. Operations are defined by expressions and control structures or calls to frameworks of some sort, i.e. (business) logic statements. Operations calculate, transform, aggregate, validate, send, receive, load, store etc. Operations are about behavior; they take input and produce output by considering state. I'm not using the term "function" here, because functions - or methods or sub-programs - are not necessary to implement Operations. Functions belong to a different sub-aspect of requirements (see below). Operations alone are not enough, though, to make a customer happy with regard to his/her Functionality requirements. Only correctly implemented Operations provide full value. This should make clear why testing is so important. And not just manual tests during development of some operational feature, but automated tests. Because only automated tests scale when over time the number of operations increases. Without automated tests there is no guarantee formerly correct operations are still correct after more got added. Retesting all previous operations manually is infeasible. So whoever relies just on manual tests is not really balancing the two forces Operations and Correctness. With manual tests more weight is put on the Operations side of the scale. That might be ok for a short period of time - but in the long run it will bite you. You need to plan for Correctness in the long run from the first day of your project on.

    Aspects of Quality: As important as Functionality is, it's not the driver for software development. No software has ever been written just to implement some operation in code. We don't need computers just to do something. Everything computers can do with software we can do without them. Well, at least given enough time and resources. We could calculate the most complex formulas without computers. We could do auctions with millions of people without computers. The only reason we want computers to help us with this and a million other Operations is… We don't want to wait very long for the results. Or we want fewer errors. Or we want easier accessibility to complicated solutions. So the main reason for customers to buy/order software is some Quality. They want some Functionality with a higher Quality (e.g. performance, scalability, usability, security…) than without the software. But Qualities come in at least two flavors: Most important are Primary Qualities. Those are the Qualities software truly is written for. Take an online auction website for example. Its Primary Qualities are performance, scalability, and usability, I'd say. Auctions should come within reach of millions of people; setting up an auction should be very easy; finding a suitable auction and bidding on it should be as fast as possible. Only if those Qualities have been implemented does security become relevant. A secure auction website is important - but not as important as a fast auction website.
    Nobody would want to use the most secure auction website if it was unbearably slow. But there would be people willing to use the fastest auction website even if it was lacking security. That's why security - with regard to online auction software - is not a Primary Quality, but just a Secondary Quality. It's a supporting quality, so to speak. It does not deliver value by itself. With password manager software this might be different. There, security might be a Primary Quality. Please get me right: I don't want to denigrate any Quality. There's a long list of non-functional requirements at Wikipedia. They are all created equal - but that does not mean they are equally important for all software projects. When confronted with Quality requirements, check with the customer which are primary and which are secondary. That will help to make good economic decisions when in a crunch. Resources are always limited - but requirements are a bottomless ocean.

    Aspects of Security of Investment: Functionality and Quality are traditionally the requirement aspects cared for most - by customers and developers alike. Even today, when pressure rises in a project, tunnel vision will focus on them. Any measures to create and hold up Security of Investment (SoI) will be out of the window pretty quickly. Resistance to customers and/or management is futile. As long as SoI is not placed on equal footing with Functionality and Quality it's bound to suffer under pressure. Looking closer at what SoI means will help to become more conscious about it and make customers and management aware of the risks of neglecting it. SoI to me has two facets: Production Efficiency (PE) is about speed of delivering value. Customers like short response times. Short response times mean less money spent. So whatever makes software development faster supports this requirement. This must not lead to duct-tape programming and banging out features by the dozen, though. Because customers don't just want Operations and Quality, but also Correctness. So if Correctness gets compromised by focusing too much on Production Efficiency it will backfire. Customers want PE not just today, but over the whole course of a software's lifecycle. That means it's not just about coding speed, but equally about code quality. If poor code quality leads to rework, PE is at an unsatisfactory level. It is also unsatisfactory if code production leads to waste, because the effort which went into waste could have been used to produce value. Rework and waste cost money. Rework and waste abound, however, as long as PE is not addressed explicitly with management and customers. Thanks to the Agile and Lean movements that's increasingly the case. Nevertheless more could and should be done in many teams. Each and every developer should keep in mind that Production Efficiency is as important to the customer as Functionality and Quality - whether he/she states it or not. Making software development more efficient is important - but still, sooner or later even agile projects are going to hit a glass ceiling. At least as long as they neglect the second SoI facet: Evolvability. Delivering correct, high-quality functionality in short cycles today is good. But not just any software structure will allow this to happen for an indefinite amount of time.[1] The less explicitly software was designed, the sooner it's going to get stuck.
    Big ball of mud, monolith, brownfield, legacy code, technical debt… there are many names for software structures that have lost the ability to evolve, to be easily changed to accommodate new requirements. An evolvable code base is the opposite of a brownfield. It's code which can be easily understood (by developers with sufficient domain expertise) and then easily changed to accommodate new requirements. Ideally the cost of adding feature X to an evolvable code base is independent of when it is requested - or at least the cost should only increase linearly, not exponentially.[2] Clean Code, Agile Architecture, and even traditional Software Engineering are concerned with Evolvability. However, it seems no systematic way of achieving it has been laid out yet. TDD + SOLID help - but still… When I look at the actual design ability in teams I see much room for improvement. As stated previously, SoI - or to be more precise: Evolvability - can hardly be measured. Plus the customer rarely states an explicit expectation with regard to it. That's why I think special care must be taken not to neglect it. Postponing it to some large refactorings should not be an option. Rather, Evolvability needs to be a core concern for every single developer day. This does not mean Evolvability is more important than any of the other requirement aspects. But neither is it less important. That's why more effort needs to be invested in it, to bring it on par with the other aspects, which usually are much more in focus.

    In closing: As you see, requirements are of quite different kinds. Not taking that into account will make it harder to understand the customer, and to make economic decisions. Those sub-aspects of requirements are forces pulling in different directions. Improving performance might have an impact on Evolvability. Increasing Production Efficiency might have an impact on security, etc. No requirement aspect should go unchecked when deciding how to allocate resources. Balancing should be explicit. And it should be possible to trace each decision back to a requirement. Why is there a null-check on parameters at the start of the method? Why are there 5000 LOC in this method? Why are there interfaces on those classes? Why is this functionality running on the threadpool? Why is this function defined on that class? Why is this class depending on three other classes? These and a thousand more questions are not meant to imply that anything should be different in a code base. But it's important to know the reason behind all of these decisions. Because not knowing the reason possibly means waste and having decided suboptimally. And how do we ensure that all requirement aspects are balanced? That needs practices and transparency. Practices mean doing things a certain way and not another, even though that might be possible. We're dealing with dangerous tools here. Like a knife is a dangerous tool. Harm can be done if we use our tools in just any way at the whim of the moment. Over the centuries, rules and practices have been established for how to use knives. You don't put them in people's legs just because you feel like it. You hand over a knife with the handle towards the receiver. You might not even be allowed to cut round food like potatoes or eggs with it. The same should be the case for dangerous tools like object-orientation, remote communication, threads etc. We need practices to use them in a way that balances the requirements almost automatically.
    In addition, to be able to work on software as a team we need transparency. We need means to share our thoughts, to work jointly on mental models. So far our tools are focused on working with code. Testing frameworks, build servers, DI containers, IntelliSense, refactoring support… That's all nice and well. I don't want to miss any of that. But I think it's not enough. We're missing mental tools, tools for making thinking and talking about software (independently of code) easier. You might think enough of such tools already exist, like all those UML diagram types or flow charts. But then, isn't it strange that hardly any team is using them to design software? Or is that just due to a lack of education? I don't think so. It's a matter of value/weight ratio: the current mental tools are too heavyweight compared to the value they deliver. So my conclusion is: we need lightweight tools to really be able to balance requirements. Software development is complex. We need guidance not to forget important aspects. That's like with flying an airplane. Pilots don't just jump in and take off for their destination. Yes, there are times when they are "flying by the seat of their pants", when they are just experts doing things intuitively. But most of the time they are going through honed practices called checklists. See "The Checklist Manifesto" for very enlightening details on this. Maybe then I should say it like this: We need more checklists for the complex business of software development.[3]

    [1] But that's what software development mostly is about: changing software over an unknown period of time. It needs to be corrected in order to finally provide promised operations. It needs to be enhanced to provide ever more operations and qualities. All this without knowing when it's going to stop. Probably never - until "maintainability" hits a wall when the technical debt is too large, the brownfield too deep. Software development is not a sprint, is not a marathon, not even an ultra marathon, because to all of those there is a foreseeable end. Software development is like continuously and forever running…

    [2] And sometimes I dare to think that costs could even decrease over time. Think of it: With each feature a software becomes richer in functionality. So with each additional feature the chance increases that there is already functionality helping its implementation. That should lead to lower costs for feature X if it's requested later rather than sooner. X requested later could stand on the shoulders of previous features. Alas, reality seems to be far from this despite 20+ years of admonishing developers to think in terms of reusability.[1]

    [3] Please don't get me wrong: I don't want to bog down the "art" of software development with heavyweight practices and heaps of rules to follow. The framework we need should be lightweight. It should not stand in the way of delivering value to the customer. Its purpose is even to make that easier by helping us to focus and decreasing waste and rework.

    Read the article

  • Prevent gnome from automatically mounting partition when clicked in nautilus

    - by bjarkef
    Hi, I have two partitions on a hard drive in my machine that are formatted as ntfs, but must under no circumstance be mounted by my Ubuntu installation (unless I do some preparation first). However nautilus happily displays the partitions, and a single click will mount them automatically. This is very dangerous behaviour; how can I hide the partitions from nautilus and prevent accidentally mounting them by a single stray mouse click? Thanks
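
    One approach that is often suggested (a sketch only, untested here, and the device names below are placeholders) is to tell the udisks layer to ignore those partitions via a udev rule, so gvfs/nautilus no longer offers them for mounting:

      # /etc/udev/rules.d/99-hide-ntfs.rules  (sda3/sda4 are hypothetical -- substitute your partitions)
      # udisks 1.x (older Ubuntu releases):
      KERNEL=="sda3", ENV{UDISKS_PRESENTATION_HIDE}="1"
      KERNEL=="sda4", ENV{UDISKS_PRESENTATION_HIDE}="1"
      # udisks2 (newer releases) uses a different property:
      # KERNEL=="sda3", ENV{UDISKS_IGNORE}="1"

    Reload the rules (sudo udevadm control --reload-rules && sudo udevadm trigger) or reboot afterwards; an /etc/fstab entry with the noauto option is another route people use to keep a single click from triggering a mount.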

    Read the article

  • MSTSC RDP over the public internet

    - by stuart Brand
    My first question, so please be gentle :) I have a client who is insisting that they have to let their third-party vendor's support staff access their server directly from the internet via RDP. Our policy does not allow direct access to the infrastructure from outside of the data centre for administration, except from an approved VPN connection and then a virtual desktop from there on to the servers. I am now in the situation where I must give good reasons why it is dangerous to use RDP over the public internet. Any help would be appreciated. Thanks in advance, Stuart

    Read the article

  • NTFS drivers on Linux

    - by Jack
    Hopefully this is programming related. Many people have reported that the NTFS-3G driver works perfectly for writing to NTFS drives without any problems. If NTFS has been successfully reverse engineered to a useful degree, then why is the kernel driver still only read-only, with write support considered very dangerous... just as it was 5 years ago?
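
    For context, the practical difference usually shows up in how the drive is mounted (device and mount-point paths here are illustrative):

      # in-kernel ntfs driver: typically mounted read-only, since its write support is limited
      sudo mount -t ntfs -o ro /dev/sdb1 /mnt/windows
      # ntfs-3g (FUSE, userspace): the driver generally used for reliable read/write access
      sudo mount -t ntfs-3g /dev/sdb1 /mnt/windows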

    Read the article

  • JS Worm : how to find the entry point

    - by Cédric Girard
    Hi, my site is tagged as dangerous by Google / StopBadware.org, and I found this in several js/html files: <script type="text/javascript" src="http://oployau.fancountblogger.com:8080/Gigahertz.js"></script> <!--a0e2c33acd6c12bdc9e3f3ba50c98197--> I cleaned several files and I restored a backup, but how can I understand how the worm was installed? What can I look for in the log files? This server, a CentOS 5, is only used as an Apache server, with our programs, a TikiWiki and a Drupal installed. Thanks Cédric
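
    Some generic starting points for tracing the injection (a sketch; adjust the document root and log paths to your setup):

      # find every file still carrying the injected script tag
      grep -rl "fancountblogger.com" /var/www
      # list web files modified recently, around the time the infection appeared
      find /var/www -type f \( -name "*.js" -o -name "*.php" -o -name "*.html" \) -mtime -14 -ls
      # look for suspicious POST/PUT requests in the Apache access logs (CentOS default path)
      grep -E '"(POST|PUT) ' /var/log/httpd/access_log* | less
      # stolen FTP/SSH credentials are a common vector, so review recent logins too
      last -a | head -50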

    Read the article

  • Windows Home 7 and virtualization

    - by progtick
    So apparently you can't use VirtualBox etc. with Windows 7 Home, because you would be using two licenses instead of one. So, other users who tried virtualization with Windows 7 Home Premium: did you just end up using another OS like Ubuntu etc.? Did you find a workaround for using Windows? When I say virtualization, I mean a virtual machine of sorts - where dangerous websites can be visited, nasty applications can be tried, etc.

    Read the article

  • Make logwatch reports more interesting?

    - by Alexander Shcheblikin
    Is it possible to improve the quality of reports from logwatch? For example, make it not just report disk usage, which doesn't even change much in daily operation, but report significant changes in usage or warn when capacity approaches critical levels? If I cannot do that with logwatch and instead have to write custom scripts to produce such reports, logwatch appears to be pretty useless, or even dangerous, as many users reportedly grow to ignore emails from it knowing they are so boring.
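
    Out of the box, logwatch only offers fairly coarse knobs: raising the detail level or dropping services you don't care about in a local override file (a sketch; service names can vary by distribution):

      # /etc/logwatch/conf/logwatch.conf  (local overrides of the packaged defaults)
      Detail = High
      # keep everything except the static disk-space report
      Service = All
      Service = "-zz-disk_space"

    Threshold-style reporting (e.g. only mention a filesystem once it crosses 90%) is beyond what the stock services do, so that part would indeed mean a small custom script or a monitoring tool alongside logwatch.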

    Read the article

  • Looking for a Windows 7 compatible Partition Manager!

    - by NoCanDo
    Since Acronis Disk Director Suite 10.0 is no longer compatible with Windows 7, I'm in need of an alternative. I'm not interested in Acronis Disk Director Suite workarounds, particularly not for an application as dangerous as a partition manager. I don't want to mess up my partitions due to incompatibility. Anyone got any ideas? Open source, commercial or freeware, doesn't matter!

    Read the article

  • How to prevent bad queries from breaking replication?

    - by nulll
    From my personal experience, mysql replication is fragile. I know that there are many things not to do because they could break replication, but we are humans and errors will always occur. So I was thinking... is there a way to safeguard mysql replication? Something that prevents queries that are dangerous for the replication from being run? In other words I'm searching for something that saves replication even if I accidentally run evil queries.
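
    There is no single switch that makes replication unbreakable, but a couple of settings make it much harder for a stray statement to diverge a slave (a sketch of common my.cnf settings; availability depends on the MySQL version):

      # on each slave:
      [mysqld]
      read_only = 1          # non-SUPER users can no longer write directly to the slave,
                             # so ad-hoc queries cannot make it drift from the master
      # on the master:
      binlog_format = ROW    # row-based logging sidesteps most non-deterministic "unsafe" statements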

    Read the article

  • Using a 20V power block on a 19V notebook

    - by user4444
    Is that dangerous: for the computer (without the battery)? For the cells? If possible, explain why. Edit: Here are some more assumptions: Without the battery included, there is no risk of overheating the cells or overcharging them. But there is still some DC-to-DC conversion taking place on the motherboard. I assume this DC-to-DC stage to be quite tolerant. What kind of trouble can I run into when using 20V instead of 19V? Overheating?

    Read the article

  • How to destroy a CD/DVD rom safely?

    - by HaLaBi
    I have old CDs/DVDs which hold some backups; these backups contain some work and personal files. I have always had problems when I needed to physically destroy them to make sure no one can reuse them. Breaking them is dangerous; pieces could fly fast and may cause harm. Scratching them badly is what I always do, but it takes a long time and I have managed to read some of the data on scratched CDs/DVDs. What's the way to physically destroy a CD/DVD safely?

    Read the article

  • Amazon EC2, still can't ping or "http" it

    - by DarkFire21
    I am new to Amazon cloud technologies. I've set up an Amazon Linux instance, created my keys and assigned an elastic IP. Also, I opened all TCP, UDP and ICMP ports (ok, it's very dangerous, but I am using it for test purposes). I've also installed the Apache server and enabled it. But I still can't ping or access my instance via its IP. Any ideas? EDIT: Please see a screenshot of the security group settings. All ports are open... Check this out
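
    With the security group wide open, the usual next checks are on the instance itself and on which address is being tested (a sketch; <elastic-ip> is a placeholder):

      # on the instance: is Apache actually running and listening?
      sudo service httpd status
      curl -I http://localhost/
      sudo netstat -tlnp | grep ':80'
      # is a local firewall interfering?
      sudo iptables -L -n
      # from outside: test the Elastic IP / public DNS name, not the private 10.x address
      curl -I http://<elastic-ip>/
      ping <elastic-ip>    # only answers if the security group has an ICMP rule and no OS firewall drops it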

    Read the article

  • Guarding against CSRF Attacks in ASP.NET MVC2

    - by srkirkland
    Alongside XSS (Cross Site Scripting) and SQL Injection, Cross-site Request Forgery (CSRF) attacks round out the three most common and dangerous vulnerabilities in common web applications today. CSRF attacks are probably the least well known, but they are relatively easy to exploit and extremely and increasingly dangerous. For more information on CSRF attacks, see these posts by Phil Haack and Steve Sanderson. The recognized solution for preventing CSRF attacks is to put a user-specific token as a hidden field inside your forms, then check that the right value was submitted. It's best to use a random value which you've stored in the visitor's Session collection or in a Cookie (so an attacker can't guess the value).

    ASP.NET MVC to the rescue: ASP.NET MVC provides an HTMLHelper called AntiForgeryToken(). When you call <%= Html.AntiForgeryToken() %> in a form on your page you will get a hidden input and a Cookie with a random string assigned. Next, on your target Action you need to include [ValidateAntiForgeryToken], which handles the verification that the correct token was supplied.

    Good, but we can do better: Using the AntiForgeryToken is actually quite an elegant solution, but adding [ValidateAntiForgeryToken] on all of your POST methods is not very DRY, and worse, can be easily forgotten. Let's see if we can make this easier on the programmer by moving from an "Opt-In" model of protection to an "Opt-Out" model.

    Using AntiForgeryToken by default: In order to mandate the use of the AntiForgeryToken, we're going to create an ActionFilterAttribute which will do the anti-forgery validation on every POST request. First, we need to create a way to Opt-Out of this behavior, so let's create a quick action filter called BypassAntiForgeryToken: [AttributeUsage(AttributeTargets.Method, AllowMultiple=false)] public class BypassAntiForgeryTokenAttribute : ActionFilterAttribute { } Now we are ready to implement the main action filter which will force anti-forgery validation on all POST actions within any class it is defined on: [AttributeUsage(AttributeTargets.Class, AllowMultiple = false)] public class UseAntiForgeryTokenOnPostByDefault : ActionFilterAttribute { public override void OnActionExecuting(ActionExecutingContext filterContext) { if (ShouldValidateAntiForgeryTokenManually(filterContext)) { var authorizationContext = new AuthorizationContext(filterContext.Controller.ControllerContext);   //Use the authorization of the anti forgery token, //which can't be inherited from because it is sealed new ValidateAntiForgeryTokenAttribute().OnAuthorization(authorizationContext); }   base.OnActionExecuting(filterContext); }   /// <summary> /// We should validate the anti forgery token manually if the following criteria are met: /// 1. The http method must be POST /// 2. There is not an existing [ValidateAntiForgeryToken] attribute on the action /// 3. There is no [BypassAntiForgeryToken] attribute on the action /// </summary> private static bool ShouldValidateAntiForgeryTokenManually(ActionExecutingContext filterContext) { var httpMethod = filterContext.HttpContext.Request.HttpMethod;   //1. The http method must be POST if (httpMethod != "POST") return false;   // 2. There is not an existing anti forgery token attribute on the action var antiForgeryAttributes = filterContext.ActionDescriptor.GetCustomAttributes(typeof(ValidateAntiForgeryTokenAttribute), false);   if (antiForgeryAttributes.Length > 0) return false;   // 3.
    There is no [BypassAntiForgeryToken] attribute on the action var ignoreAntiForgeryAttributes = filterContext.ActionDescriptor.GetCustomAttributes(typeof(BypassAntiForgeryTokenAttribute), false);   if (ignoreAntiForgeryAttributes.Length > 0) return false;   return true; } } The code above is pretty straightforward -- first we check to make sure this is a POST request, then we make sure there aren't any overriding *AntiForgeryTokenAttributes on the action being executed. If we have a candidate then we call the ValidateAntiForgeryTokenAttribute class directly and execute OnAuthorization() on the current authorization context. Now on our base controller, you could use this new attribute to start protecting your site from CSRF vulnerabilities. [UseAntiForgeryTokenOnPostByDefault] public class ApplicationController : System.Web.Mvc.Controller { }   //Then for all of your controllers public class HomeController : ApplicationController {}

    What we accomplished: If your base controller has the new default anti-forgery token attribute on it, when you don't use <%= Html.AntiForgeryToken() %> in a form (or of course when an attacker doesn't supply one), the POST action will throw the descriptive error message "A required anti-forgery token was not supplied or was invalid". Attack foiled! In summary, I think having an anti-CSRF policy by default is an effective way to protect your websites, and it turns out it is pretty easy to accomplish as well. Enjoy!
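
    Putting the pieces together, usage on a derived controller might look something like this (a sketch using the post's attributes; the action names are invented):

      // ApplicationController already carries [UseAntiForgeryTokenOnPostByDefault]
      public class AccountController : ApplicationController
      {
          // protected automatically: this POST fails unless the form rendered <%= Html.AntiForgeryToken() %>
          [HttpPost]
          public ActionResult ChangeEmail(string newEmail)
          {
              // ... update the e-mail address ...
              return RedirectToAction("Index");
          }

          // explicitly opted out, e.g. a callback posted by an external system that cannot send the token
          [HttpPost]
          [BypassAntiForgeryToken]
          public ActionResult PaymentCallback()
          {
              // ... verify the notification by other means ...
              return new EmptyResult();
          }
      }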

    Read the article

  • Ever helpful Windows…

    - by John Breakwell
    I’m doing some troubleshooting for a relative and asked them to send me a zipped copy of their registry, which they dutifully did. When I tried to extract the registry file, though, Windows jumped in the way and said “No”. This made sense, as registry files are dangerous things in the hands of the ignorant. So I clicked the link to see if it would tell me how to get at the reg file but found the result less than helpful. So off to the Internet I went and found an excellent answer on how to get round this. Now on to the much harder part of fixing the original problems.

    Read the article

  • (and a new ray smith equipment webpage)

    - by raysmithequip
    Originally posted on: http://geekswithblogs.net/raysmithequip/archive/2013/10/15/154351.aspx Please bear with me; apparently we lost jabry.com, to what I am not sure. I have yet another webpage coming to http://www.raysmithequip.netai.net/. Right now it is pretty bare, I just spent an hour configuring WebMatrix 2.0 (3 no likey, like Vista!!). I should have the shoppers corner sub page back up in time for Black Friday though, so keep your eyes posted. To keep you busy meantime, be sure to check out InMoov, a really cool open source 3D printed DIY robot. I chanced upon it from the Dangerous Prototypes web site some time ago and consider it the one project that will rock the world in the immediate future. inmoov.blogspot.com/ raysmithn3twu

    Read the article

  • 8 Deadly Commands You Should Never Run on Linux

    - by Chris Hoffman
    Linux’s terminal commands are powerful, and Linux won’t ask you for confirmation if you run a command that won’t break your system. It’s not uncommon to see trolls online recommending new Linux users run these commands as a joke. Learning the commands you shouldn’t run can help protect you from trolls while increasing your understanding of how Linux works. This isn’t an exhaustive guide, and the commands here can be remixed in a variety of ways. Note that many of these commands will only be dangerous if they’re prefixed with sudo on Ubuntu – they won’t work otherwise. On other Linux distributions, most commands must be run as root. Image Credit: Skull and Crossbones remixed from Jason Ford on Twitter

    Read the article

  • The perils of double-dash comments [T-SQL]

    - by jamiet
    I was checking my Twitter feed on my way in to work this morning and was alerted to an interesting blog post by Valentino Vranken that highlights a problem regarding the OLE DB Source in SSIS. In short, using double-dash comments in SQL statements within the OLE DB Source can cause unexpected results. It really is quite an important read if you’re developing SSIS packages so head over to SSIS OLE DB Source, Parameters And Comments: A Dangerous Mix! and be educated. Note that the problem is solved in SSIS2012 and Valentino explains exactly why. If reading Valentino’s post has switched your brain into “learn mode” perhaps also check out my post SSIS: SELECT *... or select from a dropdown in an OLE DB Source component? which highlights another issue to be aware of when using the OLE DB Source. As I was reading Valentino’s post I was reminded of a slidedeck by Chris Adkin entitled T-SQL Coding Guidelines where he recommends never using double-dash comments: That’s good advice! @Jamiet
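
    The gist of the problem Valentino describes: when the OLE DB Source ends up treating a parameterized statement as a single line, a double-dash comment swallows everything after it. A contrived illustration (table and column names invented), not the exact parsing SSIS performs:

      -- risky: if the line breaks are lost, the rest of the statement becomes part of this comment
      SELECT OrderID, OrderDate   -- header columns only
      FROM dbo.Orders
      WHERE OrderDate >= ?

      /* safer: block comments survive being flattened onto one line */
      SELECT OrderID, OrderDate   /* header columns only */
      FROM dbo.Orders
      WHERE OrderDate >= ?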

    Read the article

  • Can HTML injection be a security issue?

    - by tkbx
    I recently came across a website that generates a random adjective, surrounded by a prefix and suffix entered by the user. For example, if the user enters "123" for prefix, and "789" for suffix, it might generate "123Productive789". I've been screwing around with it, and I thought I might try something out: I entered this into the prefix field: <a href="javascript:window.close();">Click</a><hr /> And, sure enough, I was given the link, then an <hr>, then a random adjective. What I'm wondering is, could this be dangerous? There must be many more websites out there that have this issue; are all of them vulnerable to some sort of php injection?
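
    For what it's worth, the underlying issue is reflected markup/script injection (XSS) rather than anything PHP-specific: the site echoes user input into the page unescaped. The standard fix, whatever the server language, is output encoding. A sketch in PHP with hypothetical variable names:

      <?php
      // $prefix and $suffix come straight from the user, $adjective from the site
      echo $prefix . $adjective . $suffix;                   // vulnerable: injected markup/script is rendered
      echo htmlspecialchars($prefix, ENT_QUOTES, 'UTF-8')
         . $adjective
         . htmlspecialchars($suffix, ENT_QUOTES, 'UTF-8');   // safe: the markup is shown as plain text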

    Read the article

  • Does the deprecation of mysql_* functions in PHP carry over to other Databases(MSSQL)?

    - by MobyD
    I'm not talking about MySQL, I'm talking about Microsoft SQL Server. I've been aware of PDO for quite some time now; standard mysql functions are dangerous and should be avoided. http://php.net/manual/en/function.mysql-connect.php But what about the MSSQL functions in PHP? They are, for most purposes, identical sets of functions, but the PHP page describing mssql_* carries no warning of deprecation. http://us.php.net/manual/en/function.mssql-connect.php There are PDO drivers available for MSSQL, but they aren't quite as readily available or used as the MySQL drivers. Ideally, it looks to me like I should get them working and move from mssql_* to PDO like I have with MySQL, but is it as big of a priority? Is there some hidden safety to MSSQL that means it's exempt from all of the mysql_* hatred as of late? Or is its obscurity as a backend the only reason there hasn't been more PDO encouragement?
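
    For reference, the PDO route looks much the same for SQL Server as for MySQL; which DSN prefix applies depends on the driver you install (Microsoft's PDO_SQLSRV on Windows, PDO_DBLIB via FreeTDS on Linux). A sketch with made-up connection details:

      <?php
      // Microsoft's PDO_SQLSRV driver (Windows):
      $db = new PDO("sqlsrv:Server=myserver;Database=mydb", $user, $pass);
      // or PDO_DBLIB via FreeTDS (Linux):
      // $db = new PDO("dblib:host=myserver;dbname=mydb", $user, $pass);

      $stmt = $db->prepare("SELECT name FROM users WHERE id = ?");   // parameterized, unlike mssql_query()
      $stmt->execute(array($id));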

    Read the article

  • Hack a Nintendo Zapper into a Real Life Laser Blaster

    - by Jason Fitzpatrick
    Why settle for zapping ducks on the screen when you could be popping balloons and lighting matches on fire? This awesome (but rather dangerous) hack turns an old Nintendo zapper into a legitimate laser gun. Courtesy of the tinkers over at North Street Labs, we’re treated to a Nintendo zapper overhaul that replaces the guts with a powerful 2W blue laser, a battery pack, and a keyed safety switch. Check out the video below to see the laser blaster in action: For more information on the build and a pile of more-than-merited safety warnings, hit up the link below. Nintendo Zapper 2W+ Laser [via Boing Boing]

    Read the article
