Search Results

Search found 4126 results on 166 pages for 'bitwise operations'.

Page 16/166

  • Is this kind of design - a class for Operations On Object - correct?

    - by Mithir
    In our system we have many complex operations which involve many validations and DB activities. One of the main pieces of business functionality could have been designed better. In short, there was no separation of layers, and the code only worked for the scenario for which it was first designed; now there are more scenarios (like requests from an API or from other devices), so I had to redesign. I found myself moving all the DB code into objects which act as business-to-DB objects, and I've put all the business logic in an Operator kind of class, which I've implemented like this: First, I created an object which holds all the information needed for the operation; let's call it InformationObject. Then I created an OperatorObject which takes the InformationObject as a parameter and acts on it. The OperatorObject activates different objects, validates or checks for existence or any scenario in which the business logic would be compromised, and then performs the operation according to the information in the InformationObject. So my question is: is this kind of implementation correct? PS: this Operator only works on a single business-wise operation.
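
    A minimal C# sketch of the shape described above, just to make the question concrete. All type and member names here are hypothetical, not taken from the poster's system:

        using System;

        // The "InformationObject": carries everything one business operation needs.
        public class TransferInformation
        {
            public int SourceAccountId { get; set; }
            public int TargetAccountId { get; set; }
            public decimal Amount { get; set; }
        }

        // A "business-to-DB" object: the only place that talks to the database.
        public interface IAccountData
        {
            bool Exists(int accountId);
            void Debit(int accountId, decimal amount);
            void Credit(int accountId, decimal amount);
        }

        // The "OperatorObject": validates the InformationObject, then performs the operation.
        public class TransferOperator
        {
            private readonly IAccountData _accounts;

            public TransferOperator(IAccountData accounts) => _accounts = accounts;

            public void Execute(TransferInformation info)
            {
                // Validations that protect the business logic, independent of the caller
                // (web page, API, device, ...).
                if (info.Amount <= 0)
                    throw new ArgumentException("Amount must be positive.", nameof(info));
                if (!_accounts.Exists(info.SourceAccountId) || !_accounts.Exists(info.TargetAccountId))
                    throw new InvalidOperationException("Unknown account.");

                // The operation itself, driven entirely by the InformationObject.
                _accounts.Debit(info.SourceAccountId, info.Amount);
                _accounts.Credit(info.TargetAccountId, info.Amount);
            }
        }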

    Read the article

  • What would make a noise in a PC on graphics operations on a passively-cooled system?

    - by T.J. Crowder
    I have this system based on the Intel D510MO motherboard, which is basically an Atom D510 (dual-core HT Atom w/built-in GPU), an Intel NM10 chipset, and a Realtek Gigabit LAN controller. It's entirely passively cooled. I noticed almost immediately that there was a kind of very, very soft noise that corresponded with graphics operations, sort of the noise you'd get if you had a sheet of flat paper and slid something really light across it — but more electronic than that. I wrote it off as observation error and/or disk activity triggered by the graphics operation (although the latter seemed like a lot of unnecessary disk activity). It isn't. I got curious enough that I finally did a few controlled experiments, and here's what I've determined: It isn't the HDD. For one thing, the sounds the HDD makes (when seeking, when reading or writing, when just sitting there spinning) is different. For another, I used sudo hdparm -y /dev/sda (I'm using Ubuntu 10.04 LTS) to temporarily put the disk on standby while making sure that non-disk graphics op was happening in a loop. The disk spun down, but the other sound continued, corresponding perfectly with the timing of the graphics op. (Then the disk spun up again, but it takes long enough that I could rule out the HDD.) It isn't the monitor; I ensured the two were well physically-separated and the sound was definitely coming from the main box. It isn't something else in the room; the sound is coming from the box. It isn't cross-talk to an audio circuit coming out the speakers. (It doesn't have any speakers.) It isn't my mouse (e.g., when I'm trying to make graphics ops happen); the sound happens if I set up a recurring operation and don't use the mouse at all, or if I lift the mouse off the table slightly (but enough that the laser still registers movement). It isn't the voices in my head; they never whisper like that. Other observations: It doesn't seem to matter what the graphics operation is; anything that changes what's on the screen seems to do it. I get the sound when moving the mouse over the Chromium tab bar (which makes the tab backgrounds change); I get it when a web page has a counter on it that changes the text on the page: I get it when dragging window contents around. The sound is very, very slightly louder if the graphics op is larger, like scrolling a text area when writing a question on superuser.com, than for smaller operations like the tick counter on the web page. But it's very slight. It's fairly loud (and of good duration) when the op involves color changes to substantial surface areas. For instance, when asking a question here on superuser and you move the cursor between the question box and the tag box, and the help to the right fades out, changes, and fades back in. (Yet another example related to the web browser, so let me say: I hear it when operations completely unrelated to the web browser as well.) It doesn't sound like arcing or anything like that (I'd've shut off the machine Right Quick Like if it did). Moving windows does it. Scrolling windows (by and large) doesn't. I have the feeling I've heard this sort of thing before, when all system fans were on low and such, with other systems — but (again) written it off as observational error. For all the world it's like I'm hearing the CPU working (as opposed to the GPU; note the window scroll thing above) or data being transferred somewhere, but that just seems...unlikely. So what am I hearing? 
This may seem like a very localized question, but perhaps other silent PC enthusiasts may be interested as well...

    Read the article

  • What's the most effective way to interpolate between two colors? (pseudocode and bitwise ops expected)

    - by navand
    Making a Blackberry app, want a Gradient class. What's the most effective way (as in, speed and battery life) to interpolate two colors? Please be specific. // Java, of course int c1 = 0xFFAA0055 // color 1, ARGB int c2 = 0xFF00CCFF // color 2, ARGB float st = 0 // the current step in the interpolation, between 0 and 1 /* Help from here on. Should I separate each channel of each color, convert them to decimal and interpolate? Is there a simpler way? interpolatedChannel = red1+((red2-red1)*st) interpolatedChannel = interpolatedChannel.toString(16) ^ Is this the right thing to do? If speed and effectiveness is important in a mobile app, should I use bitwise operations? Help me! */
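
    For reference, a sketch of the usual bitwise approach: split each ARGB channel out with shifts and masks, interpolate each channel with integer math, and pack the result back together. Shown in C# for consistency with the other examples on this page; the bit operations are written identically in Java. All names are illustrative.

        using System;

        static class GradientSketch
        {
            // step runs from 0..steps; everything stays in integer arithmetic.
            static int Interpolate(int c1, int c2, int step, int steps)
            {
                int a1 = (c1 >> 24) & 0xFF, r1 = (c1 >> 16) & 0xFF, g1 = (c1 >> 8) & 0xFF, b1 = c1 & 0xFF;
                int a2 = (c2 >> 24) & 0xFF, r2 = (c2 >> 16) & 0xFF, g2 = (c2 >> 8) & 0xFF, b2 = c2 & 0xFF;

                int a = a1 + (a2 - a1) * step / steps;
                int r = r1 + (r2 - r1) * step / steps;
                int g = g1 + (g2 - g1) * step / steps;
                int b = b1 + (b2 - b1) * step / steps;

                return (a << 24) | (r << 16) | (g << 8) | b;   // pack back into ARGB
            }

            static void Main()
            {
                int c1 = unchecked((int)0xFFAA0055);   // color 1, ARGB
                int c2 = unchecked((int)0xFF00CCFF);   // color 2, ARGB
                for (int step = 0; step <= 4; step++)
                    Console.WriteLine($"0x{Interpolate(c1, c2, step, 4):X8}");
            }
        }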

    Read the article

  • Inside the Concurrent Collections: ConcurrentBag

    - by Simon Cooper
    Unlike the other concurrent collections, ConcurrentBag does not really have a non-concurrent analogy. As stated in the MSDN documentation, ConcurrentBag is optimised for the situation where the same thread is both producing and consuming items from the collection. We'll see how this is the case as we take a closer look. Again, I recommend you have ConcurrentBag open in a decompiler for reference. Thread Statics ConcurrentBag makes heavy use of thread statics - static variables marked with ThreadStaticAttribute. This is a special attribute that instructs the CLR to scope any values assigned to or read from the variable to the executing thread, not globally within the AppDomain. This means that if two different threads assign two different values to the same thread static variable, one value will not overwrite the other, and each thread will see the value they assigned to the variable, separately to any other thread. This is a very useful function that allows for ConcurrentBag's concurrency properties. You can think of a thread static variable: [ThreadStatic] private static int m_Value; as doing the same as: private static Dictionary<Thread, int> m_Values; where the executing thread's identity is used to automatically set and retrieve the corresponding value in the dictionary. In .NET 4, this usage of ThreadStaticAttribute is encapsulated in the ThreadLocal class. Lists of lists ConcurrentBag, at its core, operates as a linked list of linked lists: Each outer list node is an instance of ThreadLocalList, and each inner list node is an instance of Node. Each outer ThreadLocalList is owned by a particular thread, accessible through the thread local m_locals variable: private ThreadLocal<ThreadLocalList<T>> m_locals It is important to note that, although the m_locals variable is thread-local, that only applies to accesses through that variable. The objects referenced by the thread (each instance of the ThreadLocalList object) are normal heap objects that are not specific to any thread. Thinking back to the Dictionary analogy above, if each value stored in the dictionary could be accessed by other means, then any thread could access the value belonging to other threads using that mechanism. Only reads and writes to the variable defined as thread-local are re-routed by the CLR according to the executing thread's identity. So, although m_locals is defined as thread-local, the m_headList, m_nextList and m_tailList variables aren't. This means that any thread can access all the thread local lists in the collection by doing a linear search through the outer linked list defined by these variables. Adding items So, onto the collection operations. First, adding items. This one's pretty simple. If the current thread doesn't already own an instance of ThreadLocalList, then one is created (or, if there are lists owned by threads that have stopped, it takes control of one of those). Then the item is added to the head of that thread's list. That's it. Don't worry, it'll get more complicated when we account for the other operations on the list! Taking & Peeking items This is where it gets tricky. If the current thread's list has items in it, then it peeks or removes the head item (not the tail item) from the local list and returns that. However, if the local list is empty, it has to go and steal another item from another list, belonging to a different thread. 
It iterates through all the thread local lists in the collection using the m_headList and m_nextList variables until it finds one that has items in it, and it steals one item from that list. Up to this point, the two threads had been operating completely independently. To steal an item from another thread's list, the stealing thread has to do it in such a way as to not step on the owning thread's toes. Recall how adding and removing items both operate on the head of the thread's linked list? That gives us an easy way out - a thread trying to steal items from another thread can pop in round the back of another thread's list using the m_tail variable, and steal an item from the back without the owning thread knowing anything about it. The owning thread can carry on completely independently, unaware that one of its items has been nicked. However, this only works when there are at least 3 items in the list, as that guarantees there will be at least one node between the owning thread performing operations on the list head and the thread stealing items from the tail - there's no chance of the two threads operating on the same node at the same time and causing a race condition. If there are fewer than three items in the list, then there does need to be some synchronization between the two threads. In this case, the lock on the ThreadLocalList object is used to mediate access to a thread's list when there's the possibility of contention. Thread synchronization In ConcurrentBag, this is done using several mechanisms: Operations performed by the owner thread only take out the lock when there are fewer than three items in the collection. With three or more items, there won't be any conflict with a stealing thread operating on the tail of the list. If a lock isn't taken out, the owning thread sets the list's m_currentOp variable to a non-zero value for the duration of the operation. This indicates to all other threads that there is a non-locked operation currently occurring on that list. The stealing thread always takes out the lock, to prevent two threads trying to steal from the same list at the same time. After taking out the lock, the stealing thread spinwaits until m_currentOp has been set to zero before actually performing the steal. This ensures there won't be a conflict with the owning thread when the number of items in the list is on the 2-3 item borderline. If any add or remove operations are started in the meantime, and the list is below 3 items, those operations try to take out the list's lock and are blocked until the stealing thread has finished. This allows a thread to steal an item from another thread's list without corrupting it. What about synchronization in the collection as a whole? Collection synchronization Any thread that operates on the collection's global structure (accessing anything outside the thread local lists) has to take out the collection's global lock - m_globalListsLock. This single lock is sufficient when adding a new thread local list, as the items inside each thread's list are unaffected. However, what about operations (such as Count or ToArray) that need to access every item in the collection? In order to ensure a consistent view, all operations on the collection are stopped while the count or ToArray is performed. This is done by freezing the bag at the start, performing the global operation, and unfreezing at the end: The global lock is taken out, to prevent structural alterations to the collection. m_needSync is set to true.
This notifies all the threads that they need to take out their list's lock regardless of what operation they're doing. All the list locks are taken out in order. This blocks all locking operations on the lists. The freezing thread waits for all current lockless operations to finish by spinwaiting on each m_currentOp field. The global operation can then be performed while the bag is frozen, but no other operations can take place at the same time, as all other threads are blocked on a list's lock. Then, once the global operation has finished, the locks are released, m_needSync is unset, and normal concurrent operation resumes. Concurrent principles That's the essence of how ConcurrentBag operates. Each thread operates independently on its own local list, except when it has to steal items from another list. When stealing, only the stealing thread is forced to take out the lock; the owning thread only has to when there is the possibility of contention. And a global lock controls accesses to the structure of the collection outside the thread lists. Operations affecting the entire collection take out all locks in the collection to freeze the contents at a single point in time. So, what principles can we extract here? Threads operate independently Thread-static variables and ThreadLocal make this easy. Threads operate entirely concurrently on their own structures; only when they need to grab data from another thread is there any thread contention. Minimised lock-taking Even when two threads need to operate on the same data structures (one thread stealing from another), they do so in such a way that the probability of actually blocking on a lock is minimised; the owning thread always operates on the head of the list, and the stealing thread always operates on the tail. Management of lockless operations Any operations that don't take out a lock still have a 'hook' to force them to lock when necessary. This allows all operations on the collection to be stopped temporarily while a global snapshot is taken. Hopefully, such operations will be short-lived and infrequent. That's all the concurrent collections covered. I hope you've found it as informative and interesting as I have. Next, I'll be taking a closer look at ThreadLocal, which I came across while analyzing ConcurrentBag. As you'll see, the operation of this class deserves a much closer look.
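
    To tie the above together, a small usage sketch in C# using only the public ConcurrentBag<T> API: each worker adds to and takes from what is, under the covers, its own thread-local list, and only falls back to stealing (and locking) when that list runs dry.

        using System;
        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        class ConcurrentBagDemo
        {
            static void Main()
            {
                var bag = new ConcurrentBag<int>();

                Parallel.For(0, 4, worker =>
                {
                    // Adds go to the head of this worker's own thread-local list,
                    // so no lock is needed while the list stays long enough.
                    for (int i = 0; i < 1000; i++)
                        bag.Add(worker * 1000 + i);

                    // Takes also come from the head of the local list; only when it is
                    // empty does the worker lock and steal from another list's tail.
                    int taken = 0;
                    while (taken < 500 && bag.TryTake(out _))
                        taken++;
                });

                // A global operation such as Count freezes the bag momentarily, as described above.
                Console.WriteLine($"Remaining items: {bag.Count}");   // 4 workers x 500 left each = 2000
            }
        }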

    Read the article

  • C# Enum Flags Comparison

    - by destructo_gold
    Given the following flags, [Flags] public enum Operations { add = 1, subtract = 2, multiply = 4, divide = 8, eval = 16, } How could I implement an IF condition to perform each operation? In my attempt, the first condition is true for add, eval, which is correct. However the first condition is also true for subtract, eval, which is incorrect. public double Evaluate(double input) { if ((operation & (Operations.add & Operations.eval)) == (Operations.add & Operations.eval)) currentResult += input; else if ((operation & (Operations.subtract & Operations.eval)) == (Operations.subtract & Operations.eval)) currentResult -= input; else currentResult = input; operation = null; return currentResult; } I cannot see what the problem is.
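
    For what it's worth, flag masks are normally combined with | (OR) and tested with & (AND): Operations.add & Operations.eval is 1 & 16 = 0, so the posted condition compares against zero and is true for every value. A sketch of the corrected test, reusing the question's names:

        using System;

        [Flags]
        public enum Operations
        {
            add = 1,
            subtract = 2,
            multiply = 4,
            divide = 8,
            eval = 16,
        }

        public class Evaluator
        {
            private Operations? operation = Operations.add | Operations.eval;
            private double currentResult;

            public double Evaluate(double input)
            {
                // Combine the flags of interest with OR, then check that both bits are set.
                const Operations addEval = Operations.add | Operations.eval;
                const Operations subEval = Operations.subtract | Operations.eval;

                if ((operation & addEval) == addEval)
                    currentResult += input;
                else if ((operation & subEval) == subEval)
                    currentResult -= input;
                else
                    currentResult = input;

                operation = null;
                return currentResult;
            }
        }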

    Read the article

  • ASP.NET 4.0 and the Entity Framework 4 - Part 2: Perform CRUD Operations Using the Entity Framework

    In this article, Vince demonstrates how to use Entity Framework 4 to create, read, update, and delete records in the database that was created in Part 1 of this series. After a short introduction, he discusses the various steps involved: modifying the database, creating a web form, selecting records to load a drop-down list, and adding, updating, deleting, and retrieving records from the database, with the help of relevant source code and screenshots.

    Read the article

  • Lightbeam: Mozilla releases an extension that lets you see who is tracking you on the Internet and follow the "tracking" of your operations in real time

    Lightbeam: Mozilla releases an extension that lets you see who is tracking you on the Internet and follow the "tracking" of your operations in real time. Internet users are increasingly worried about the privacy of their activity online. The efforts of browser vendors and the W3C to establish a standard (the Do-Not-Track project) that would let users allow or refuse the "tracking" of their activity on the Web are progressing slowly, with advertisers on one side threatening...

    Read the article

  • Ceská obchodní banka, a.s. Upgrades to Oracle Database 11g On Time, On Budget and without Disrupting Business Operations

    - by jgelhaus
    You want the new features of the latest release, but upgrading a database is one of those things DBAs can "lose sleep" over. Ceská obchodní banka, a.s. ("CSOB") needed to upgrade its production systems in the Czech Republic and Slovakia that supported 90 key applications for its retail, corporate, internet, and ATM services from Oracle Database 9i to Oracle Database 11g with simultaneous migration from Alpha processors/OpenVMS-based hardware to a Power7, AIX system. Oracle Consulting helped to complete the upgrade within schedule and budget, while meeting tight restrictions on downtime. Knowledge transfer by Oracle Consulting to the bank’s IT team has improved self-sufficiency in support and maintenance while the technical and advisory services of Oracle Consulting Expert Services continue to optimize performance and availability while lowering cost of ownership. Read how CSOB maximized the value of its investment in Oracle Database technology with an upgrade to Oracle Database 11g.

    Read the article

  • How to handle operations on character/item states stored as binary flags in C++?

    - by Piperoman
    I have the following problem. An item can have a number of states: NORMAL = 0000000, DRY = 0000001, HOT = 0000010, BURNING = 0000100, WET = 0001000, COLD = 0010000, FROZEN = 0100000, POISONED = 1000000. An item can have several states at the same time, but not all combinations are allowed: it is impossible to be DRY and WET at the same time; if you apply COLD to a WET item, it turns into FROZEN; if you apply HOT to a WET item, it turns into NORMAL; an item can be BURNING and POISONED; etc. I have tried assigning binary flags to the states and using AND operations to combine them, checking beforehand whether the combination is possible or whether it should change to another status. Is there a concrete pattern to solve this efficiently, without an interminable switch that checks every existing state against every new state? It is relatively easy to check two different states, but once a third state is involved it is no longer trivial.
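
    One common pattern that avoids an ever-growing switch is a data-driven rule table: each rule says "when the item has these flags and this effect is applied, clear these bits and set those", and new interactions become new table rows. A sketch below, written in C# to match the other examples on this page (the bit logic carries over to C++ unchanged); the rules shown are only the ones mentioned in the question.

        using System;

        [Flags]
        enum ItemState
        {
            Normal   = 0,
            Dry      = 1 << 0,
            Hot      = 1 << 1,
            Burning  = 1 << 2,
            Wet      = 1 << 3,
            Cold     = 1 << 4,
            Frozen   = 1 << 5,
            Poisoned = 1 << 6,
        }

        // "When the item has all bits in Requires and Effect is applied, clear Clears and set Sets."
        record Rule(ItemState Effect, ItemState Requires, ItemState Clears, ItemState Sets);

        static class ItemStateMachine
        {
            static readonly Rule[] Rules =
            {
                new(ItemState.Cold, ItemState.Wet, ItemState.Wet | ItemState.Cold, ItemState.Frozen), // COLD on WET -> FROZEN
                new(ItemState.Hot,  ItemState.Wet, ItemState.Wet,                  ItemState.Normal), // HOT on WET  -> NORMAL
                new(ItemState.Wet,  ItemState.Dry, ItemState.Dry,                  ItemState.Wet),    // WET replaces DRY
                // ...one row per incompatibility or transition, instead of nested switches
            };

            public static ItemState Apply(ItemState current, ItemState effect)
            {
                foreach (var rule in Rules)
                    if (rule.Effect == effect && (current & rule.Requires) == rule.Requires)
                        return (current & ~rule.Clears) | rule.Sets;

                return current | effect;   // no special rule: just set the flag
            }
        }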

    Read the article

  • Do you gain any operations when you constrain a generic type using where T : struct?

    - by Fiona Holder
    This may be a bit of an abstract question, so apologies in advance. I am looking into generics in .NET, and was wondering about the where T : struct constraint. I understand that this allows you to restrict the type used to be a value type. My question is, without any type constraint, you can do a limited number of operations on T. Do you gain the ability to use any additional operations when you specify where T : struct, or is the only value in restricting the types you can pass in?
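
    One concrete gain, as a small sketch: the struct constraint is what lets T be used as the type argument of Nullable<T> (written T?), and it guarantees default(T) is a zeroed value rather than a null reference.

        using System.Collections.Generic;

        static class StructConstraintDemo
        {
            // Legal only because of "where T : struct": T? here is Nullable<T>,
            // and Nullable<T> requires a value-type argument.
            public static T? FirstOrNull<T>(IEnumerable<T> source) where T : struct
            {
                foreach (var item in source)
                    return item;          // wrapped into a Nullable<T> with a value
                return null;              // "no value", distinguishable from default(T)
            }
        }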

    Read the article

  • How do I perform multi-window operations on a non-combined group of windows in Windows 7?

    - by BACON
    With multiple windows/instances of an application open and the taskbar buttons set to "Always combine, hide labels", I can Shift + right-click the taskbar button for the window group to open a menu allowing me to "Cascade", "Show windows stacked", "Show windows side by side", "Restore all windows", "Minimize all windows", or "Close all windows". With the taskbar buttons set to "Combine when taskbar is full" or "Never combine", when I right-click, Shift + right-click, or Ctrl + right-click either the button or the Aero preview for a window in the group I get a menu allowing me to perform window operations on just that one window rather than each window in the group. When I have a non-combined group of windows in the taskbar, how would I cascade, stack, etc. that group of windows?

    Read the article

  • Multiple Operations with soapAction="" in a WCF Service Contract?

    - by John Saunders
    I need to create a service that will be "called back" by a third party. As a result, I need to conform to their WSDL. Their WSDL has all of the operations defined with soapAction="", so my service needs to do the same. Unfortunately, I'm getting the error: The operations A and B have the same action (). Every operation must have a unique action value. In ASMX web services, there was a mode where the soapAction would not be used, but the name of the request element would be used instead. Is there some way using WCF not only to dispatch on the request element, but also to emit a WSDL with no soapAction?

    Read the article

  • Why use SyncLocks in .NET for simple operations when Interlocked class is available?

    - by rwmnau
    I've been doing simple multi-threading in VB.NET for a while, and have just gotten into my first large multi-threaded project. I've always done everything using the Synclock statement because I didn't think there was a better way. I just learned about the Interlocked Class - it makes it look as though all this: Private SomeInt as Integer Private SomeInt_LockObject as New Object Public Sub IncrementSomeInt Synclock SomeInt_LockObject SomeInt += 1 End Synclock End Sub Can be replaced with a single statement: Interlocked.Increment(SomeInt) This handles all the locking internally and modifies the number. This would be much simpler than writing my own locks for simple operations (longer-running or more complicated operations obviously still need their own locking). Is there a reason why I'd keep rolling my own locking, using dedicated locking objects, when I can accomplish the same thing using the Interlocked methods?
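
    A rough C# rendering of the trade-off (SyncLock compiles down to the same Monitor class, and the Interlocked class is identical in both languages): Interlocked covers a single atomic read-modify-write on one variable; a lock is still the right tool once an invariant spans more than one step or one variable. Names are illustrative.

        using System.Threading;

        class Counters
        {
            private int _count;
            private int _total;
            private int _samples;
            private readonly object _gate = new object();

            // Single-variable increment: Interlocked is enough, no lock object needed.
            public void Increment() => Interlocked.Increment(ref _count);

            // Two variables that must stay consistent with each other: this is where
            // a lock (SyncLock in VB.NET) still earns its keep.
            public void AddSample(int value)
            {
                lock (_gate)
                {
                    _total += value;
                    _samples++;
                }
            }

            public int Count => Volatile.Read(ref _count);
        }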

    Read the article

  • Are all the system's floating-point operations the same?

    - by Jj
    We're making this web app in PHP, and when working on the reports we have Excel files to compare our results against to make sure our code is doing the right operations. Now we're running into some differences due to floating-point arithmetic. We're doing the same divisions and multiplications and getting slightly different numbers that add up to a notable difference. My question is whether Excel delegates its floating-point arithmetic to the CPU and PHP also relies on the CPU for its operations. Or does each application implement its own set of math algorithms?
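
    One concrete reason two programs can disagree even when both ultimately use the same CPU's IEEE 754 hardware: floating-point addition is not associative, so a different order of operations (or different intermediate precision) gives slightly different results. A tiny C# illustration of the ordering effect alone:

        using System;

        class FloatOrderDemo
        {
            static void Main()
            {
                double leftToRight = (0.1 + 0.2) + 0.3;   // 0.6000000000000001
                double rightToLeft = 0.1 + (0.2 + 0.3);   // 0.6
                Console.WriteLine(leftToRight == rightToLeft);   // False
                Console.WriteLine(leftToRight.ToString("R"));
                Console.WriteLine(rightToLeft.ToString("R"));
            }
        }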

    Read the article

  • How would I implement separate databases for reading and writing operations?

    - by Matt
    I am interested in implementing an architecture that has two databases, one for read operations and the other for writes. I have never implemented something like this and have always built single-database, highly normalised systems, so I am not quite sure where to begin. I have a few parts to this question. 1. What would be a good resource to find out more about this architecture? 2. Is it just a question of replicating between two identical schemas, or would your schemas differ depending on the operations, and would normalisation vary too? 3. How do you ensure that data written to one database is immediately available for reading from the second? Any further help, tips, or resources would be appreciated. Thanks.
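
    Not an answer to the replication questions, but a hedged sketch of where the split usually shows up in application code: commands go through a write (primary) connection and queries go through a read (replica) connection, behind one small routing abstraction. Everything here is hypothetical naming, not a specific product's API.

        using System.Data;

        // Hypothetical routing layer: callers ask for a connection by intent, and the
        // factory returns either the primary (writes) or a read replica (reads).
        public interface IDbConnectionFactory
        {
            IDbConnection OpenWriteConnection();   // primary - INSERT/UPDATE/DELETE and reads that must be fresh
            IDbConnection OpenReadConnection();    // replica - queries that can tolerate replication lag
        }

        public class OrderRepository
        {
            private readonly IDbConnectionFactory _connections;

            public OrderRepository(IDbConnectionFactory connections) => _connections = connections;

            public void Save(Order order)
            {
                using var connection = _connections.OpenWriteConnection();
                // ...issue the INSERT/UPDATE against the write database here
            }

            public Order Load(int orderId)
            {
                using var connection = _connections.OpenReadConnection();
                // ...issue the SELECT against the read database here; the data may lag
                // slightly behind the primary until replication catches up
                return new Order();
            }
        }

        public class Order { }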

    Read the article

  • The Incremental Architect's Napkin - #2 - Balancing the forces

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/02/the-incremental-architectacutes-napkin---2---balancing-the-forces.aspx Categorizing requirements is the prerequisite for economic architectural decisions. Not all requirements are created equal. However, to truly understand and describe the requirement forces pulling on software development, I think further examination of the requirements aspects is warranted. Aspects of Functionality There are two sides to Functionality requirements. It's about what a software should do. I call that the Operations it implements. Operations are defined by expressions and control structures or calls to frameworks of some sort, i.e. (business) logic statements. Operations calculate, transform, aggregate, validate, send, receive, load, store etc. Operations are about behavior; they take input and produce output by considering state. I'm not using the term “function” here, because functions - or methods or sub-programs - are not necessary to implement Operations. Functions belong to a different sub-aspect of requirements (see below). Operations alone are not enough, though, to make a customer happy with regard to his/her Functionality requirements. Only correctly implemented Operations provide full value. This should make clear why testing is so important. And not just manual tests during development of some operational feature, but automated tests. Because only automated tests scale when over time the number of operations increases. Without automated tests there is no guarantee formerly correct operations are still correct after more got added. To retest all previous operations manually is infeasible. So whoever relies just on manual tests is not really balancing the two forces Operations and Correctness. With manual tests more weight is put on the side of the scale of Operations. That might be ok for a short period of time - but in the long run it will bite you. You need to plan for Correctness in the long run from the first day of your project on. Aspects of Quality As important as Functionality is, it's not the driver for software development. No software has ever been written to just implement some operation in code. We don't need computers just to do something. All that computers can do with software we can do without them. Well, at least given enough time and resources. We could calculate the most complex formulas without computers. We could do auctions with millions of people without computers. The only reason we want computers to help us with this and a million other Operations is… We don't want to wait for the results very long. Or we want fewer errors. Or we want easier accessibility to complicated solutions. So the main reason for customers to buy/order software is some Quality. They want some Functionality with a higher Quality (e.g. performance, scalability, usability, security…) than without the software. But Qualities come in at least two flavors: Most important are Primary Qualities. That's the Qualities software truly is written for. Take an online auction website for example. Its Primary Qualities are performance, scalability, and usability, I'd say. Auctions should come within reach of millions of people; setting up an auction should be very easy; finding a suitable auction and bidding on it should be as fast as possible. Only if those Qualities have been implemented does security become relevant. A secure auction website is important - but not as important as a fast auction website.
Nobody would want to use the most secure auction website if it was unbearably slow. But there would be people willing to use the fastest auction website even if it was lacking security. That's why security - with regard to online auction software - is not a Primary Quality, but just a Secondary Quality. It's a supporting quality, so to speak. It does not deliver value by itself. With password manager software this might be different. There security might be a Primary Quality. Please don't get me wrong: I don't want to denigrate any Quality. There's a long list of non-functional requirements at Wikipedia. They are all created equal - but that does not mean they are equally important for all software projects. When confronted with Quality requirements, check with the customer which are primary and which are secondary. That will help to make good economic decisions when in a crunch. Resources are always limited - but requirements are a bottomless ocean. Aspects of Security of Investment Functionality and Quality are traditionally the requirement aspects cared for most - by customers and developers alike. Even today, when pressure rises in a project, tunnel vision will focus on them. Any measures to create and hold up Security of Investment (SoI) will be out of the window pretty quickly. Resistance to customers and/or management is futile. As long as SoI is not placed on equal footing with Functionality and Quality it's bound to suffer under pressure. To look closer at what SoI means will help to become more conscious about it and make customers and management aware of the risks of neglecting it. SoI to me has two facets: Production Efficiency (PE) is about speed of delivering value. Customers like short response times. Short response times mean less money spent. So whatever makes software development faster supports this requirement. This must not lead to duct tape programming and banging out features by the dozen, though. Because customers don't just want Operations and Quality, but also Correctness. So if Correctness gets compromised by focussing too much on Production Efficiency it will fire back. Customers want PE not just today, but over the whole course of a software's lifecycle. That means it's not just about coding speed, but equally about code quality. If code quality leads to rework the PE is on an unsatisfactory level. Also if code production leads to waste it's unsatisfactory. Because the effort which went into waste could have been used to produce value. Rework and waste cost money. Rework and waste abound, however, as long as PE is not addressed explicitly with management and customers. Thanks to the Agile and Lean movements that's increasingly the case. Nevertheless more could and should be done in many teams. Each and every developer should keep in mind that Production Efficiency is as important to the customer as Functionality and Quality - whether he/she states it or not. Making software development more efficient is important - but still sooner or later even agile projects are going to hit a glass ceiling. At least as long as they neglect the second SoI facet: Evolvability. Delivering correct high quality functionality in short cycles today is good. But not just any software structure will allow this to happen for an indefinite amount of time.[1] The less explicitly software was designed the sooner it's going to get stuck.
Big ball of mud, monolith, brownfield, legacy code, technical debt… there are many names for software structures that have lost the ability to evolve, to be easily changed to accommodate new requirements. An evolvable code base is the opposite of a brownfield. It's code which can be easily understood (by developers with sufficient domain expertise) and then easily changed to accommodate new requirements. Ideally the costs of adding feature X to an evolvable code base are independent of when it is requested - or at least the costs should only increase linearly, not exponentially.[2] Clean Code, Agile Architecture, and even traditional Software Engineering are concerned with Evolvability. However, it seems no systematic way of achieving it has been laid out yet. TDD + SOLID help - but still… When I look at the design ability reality in teams I see much room for improvement. As stated previously, SoI - or to be more precise: Evolvability - can hardly be measured. Plus the customer rarely states an explicit expectation with regard to it. That's why I think special care must be taken not to neglect it. Postponing it to some large refactorings should not be an option. Rather Evolvability needs to be a core concern for every single developer day. This should not mean Evolvability is more important than any of the other requirement aspects. But neither is it less important. That's why more effort needs to be invested into it, to bring it on par with the other aspects, which usually are much more in focus. In closing As you see, requirements are of quite different kinds. To not take that into account will make it harder to understand the customer, and to make economic decisions. Those sub-aspects of requirements are forces pulling in different directions. To improve performance might have an impact on Evolvability. To increase Production Efficiency might have an impact on security etc. No requirement aspect should go unchecked when deciding how to allocate resources. Balancing should be explicit. And it should be possible to trace back each decision to a requirement. Why is there a null-check on parameters at the start of the method? Why are there 5000 LOC in this method? Why are there interfaces on those classes? Why is this functionality running on the threadpool? Why is this function defined on that class? Why is this class depending on three other classes? These and a thousand more questions are not meant to imply that anything should be different in a code base. But it's important to know the reason behind all of these decisions. Because not knowing the reason possibly means waste and having decided suboptimally. And how do we ensure we balance all requirement aspects? That needs practices and transparency. Practices means doing things a certain way and not another, even though that might be possible. We're dealing with dangerous tools here. Like a knife is a dangerous tool. Harm can be done if we use our tools in just any way at the whim of the moment. Over the centuries rules and practices have been established for how to use knives. You don't put them in people's legs just because you're feeling like it. You hand over a knife with the handle towards the receiver. You might not even be allowed to cut round food like potatoes or eggs with it. The same should be the case for dangerous tools like object-orientation, remote communication, threads etc. We need practices to use them in a way so requirements are balanced almost automatically.
In addition, to be able to work on software as a team we need transparency. We need means to share our thoughts, to work jointly on mental models. So far our tools are focused on working with code. Testing frameworks, build servers, DI containers, intellisense, refactoring support… That's all nice and well. I don't want to miss any of that. But I think it's not enough. We're missing mental tools, tools for making thinking and talking about software (independently of code) easier. You might think enough of such tools already exist, like all those UML diagram types or Flow Charts. But then, isn't it strange that hardly any team is using them to design software? Or is that just due to a lack of education? I don't think so. It's a matter of value/weight ratio: the current mental tools are too heavyweight compared to the value they deliver. So my conclusion is, we need lightweight tools to really be able to balance requirements. Software development is complex. We need guidance not to forget important aspects. That's like with flying an airplane. Pilots don't just jump in and take off for their destination. Yes, there are times when they are “flying by the seats of their pants”, when they are just experts doing things intuitively. But most of the time they are going through honed practices called checklists. See “The Checklist Manifesto” for very enlightening details on this. Maybe then I should say it like this: We need more checklists for the complex business of software development.[3]

[1] But that's what software development mostly is about: changing software over an unknown period of time. It needs to be corrected in order to finally provide promised operations. It needs to be enhanced to provide ever more operations and qualities. All this without knowing when it's going to stop. Probably never - until “maintainability” hits a wall when the technical debt is too large, the brownfield too deep. Software development is not a sprint, is not a marathon, not even an ultra marathon. Because to all this there is a foreseeable end. Software development is like continuously and forever running…

[2] And sometimes I dare to think that costs could even decrease over time. Think of it: With each feature a software becomes richer in functionality. So with each additional feature the chance of there being already functionality helping its implementation increases. That should lead to lower costs of feature X if it's requested later than sooner. X requested later could stand on the shoulders of previous features. Alas, reality seems to be far from this despite 20+ years of admonishing developers to think in terms of reusability.[1]

[3] Please don't get me wrong: I don't want to bog down the “art” of software development with heavyweight practices and heaps of rules to follow. The framework we need should be lightweight. It should not stand in the way of delivering value to the customer. Its purpose is even to make that easier by helping us to focus and decreasing waste and rework.

    Read the article

  • Why does Module::Build's testcover give me "use of uninitialized value" warnings?

    - by Kurt W. Leucht
    I'm kinda new to Module::Build, so maybe I did something wrong. Am I the only one who gets warnings when I change my dispatch from "test" to "testcover"? Is there a bug in Devel::Cover? Is there a bug in Module::Build? I probably just did something wrong. I'm using ActiveState Perl v5.10.0 with Module::Build version 0.31012 and Devel::Cover 0.64 and Eclipse 3.4.1 with EPIC 0.6.34 for my IDE. UPDATE: I upgraded to Module::Build 0.34 and the warnings are still output. *UPDATE: Looks like a bug in B::Deparse. Hope it gets fixed someday.* Here's my unit test build file: use strict; use warnings; use Module::Build; my $build = Module::Build->resume ( properties => { config_dir => '_build', }, ); $build->dispatch('test'); When I run this unit test build file, I get the following output: t\MyLib1.......ok t\MyLib2.......ok t\MyLib3.......ok All tests successful. Files=3, Tests=24, 0 wallclock secs ( 0.00 cusr + 0.00 csys = 0.00 CPU) But when I change the dispatch line to 'testcover' I get the following output which always includes a bunch of "use of uninitialized value in bitwise and" warning messages: Deleting database D:/Documents and Settings/<username>/My Documents/<SNIP>/cover_db t\MyLib1.......ok Use of uninitialized value in bitwise and (&) at D:/Perl/lib/B/Deparse.pm line 4252. Use of uninitialized value in bitwise and (&) at D:/Perl/lib/B/Deparse.pm line 4252. t\MyLib2.......ok Use of uninitialized value in bitwise and (&) at D:/Perl/lib/B/Deparse.pm line 4252. Use of uninitialized value in bitwise and (&) at D:/Perl/lib/B/Deparse.pm line 4252. t\MyLib3.......ok Use of uninitialized value in bitwise and (&) at D:/Perl/lib/B/Deparse.pm line 4252. Use of uninitialized value in bitwise and (&) at D:/Perl/lib/B/Deparse.pm line 4252. All tests successful. Files=3, Tests=24, 0 wallclock secs ( 0.00 cusr + 0.00 csys = 0.00 CPU) Reading database from D:/Documents and Settings/<username>/My Documents/<SNIP>/cover_db ---------------------------- ------ ------ ------ ------ ------ ------ ------ File stmt bran cond sub pod time total ---------------------------- ------ ------ ------ ------ ------ ------ ------ .../lib/ActivePerl/Config.pm 0.0 0.0 0.0 0.0 0.0 n/a 0.0 ...l/lib/ActiveState/Path.pm 0.0 0.0 0.0 0.0 100.0 n/a 4.8 <SNIP> blib/lib/<SNIP>/MyLib2.pm 100.0 90.0 n/a 100.0 100.0 0.0 98.5 blib/lib/<SNIP>/MyLib3.pm 100.0 90.9 100.0 100.0 100.0 0.6 98.0 Total 14.4 6.7 3.8 18.3 20.0 100.0 11.6 ---------------------------- ------ ------ ------ ------ ------ ------ ------ Writing HTML output to D:/Documents and Settings/<username>/My Documents/<SNIP>/cover_db/coverage.html ... done.

    Read the article

  • What determines which Javascript functions are blocking vs non-blocking?

    - by Sean
    I have been doing web-based Javascript (vanilla JS, jQuery, Backbone, etc.) for a few years now, and recently I've been doing some work with Node.js. It took me a while to get the hang of "non-blocking" programming, but I've now gotten used to using callbacks for IO operations and whatnot. I understand that Javascript is single-threaded by nature. I understand the concept of the Node "event queue". What I DON'T understand is what determines whether an individual javascript operation is "blocking" vs. "non-blocking". How do I know which operations I can depend on to produce an output synchronously for me to use in later code, and which ones I'll need to pass callbacks to so I can process the output after the initial operation has completed? Is there a list of Javascript functions somewhere that are asynchronous/non-blocking, and a list of ones that are synchronous/blocking? What is preventing my Javascript app from being one giant race condition? I know that operations that take a long time, like IO operations in Node and AJAX operations on the web, require them to be asynchronous and therefore use callbacks - but who is determining what qualifies as "a long time"? Is there some sort of trigger within these operations that removes them from the normal "event queue"? If not, what makes them different from simple operations like assigning values to variables or looping through arrays, which it seems we can depend on to finish in a synchronous manner? Perhaps I'm not even thinking of this correctly - hoping someone can set me straight. Thanks!

    Read the article

  • How to preserve JPEG image file size after read-rotate-write operations in Java?

    - by zamska
    I'm trying to read a JPEG image as a BufferedImage, rotate it, and save it as another JPEG image on the file system. But there is a problem: after these operations I cannot preserve the same file size. Here is the code //read Image BufferedImage image = ImageIO.read(new File(path)); //rotate Image BufferedImage rotatedImage = new BufferedImage(image.getHeight(), image.getWidth(), BufferedImage.TYPE_3BYTE_BGR); Graphics2D g2d = (Graphics2D) rotatedImage.getGraphics(); g2d.rotate(Math.toRadians(PhotoConstants.ROTATE_LEFT)); int height=-rotatedImage.getHeight(null); g2d.drawImage(image, height, 0, null); g2d.dispose(); //Write Image Iterator iter = ImageIO.getImageWritersByFormatName("jpeg"); ImageWriter writer = (ImageWriter)iter.next(); // instantiate an ImageWriteParam object with default compression options ImageWriteParam iwp = writer.getDefaultWriteParam(); try { FileImageOutputStream output = null; iwp.setCompressionMode(ImageWriteParam.MODE_EXPLICIT); iwp.setCompressionQuality(0.98f); // a float between 0 and 1 // 1 specifies minimum compression and maximum quality File file = new File(path); output = new FileImageOutputStream(file); writer.setOutput(output); IIOImage iioImage = new IIOImage(image, null, null); writer.write(null, iioImage, iwp); output.flush(); output.close(); writer.dispose(); Is it possible to access the compressionQuality parameter of the original JPEG image at the beginning? When I set the compression quality to 1, the image gets bigger; if I set it to 0.9 or less, the image gets smaller. How can I preserve the image size after these operations? Thank you,

    Read the article
