Search Results

Search found 20283 results on 812 pages for 'security context'.

  • Moving My Blog

    - by Hirt
    Oracle has, unfortunately, moved to a new blogging platform. For security reasons, it is no longer possible to use external tools, such as Windows Live Writer. Since this makes blogging too time consuming for me, I've decided to use only my private blog, even for work-related entries. From now on, you can find my blog entries here: http://hirt.se/blog/ Note that it is hosted on the poor server in my garage, hooked up to an ADSL modem. It will probably be dog slow. Sorry for that.

  • Know a little of a lot or a lot of a little? [closed]

    - by Jeff V
    Possible Duplicate: Is it better to specialize in a single field I like, or expand into other fields to broaden my horizons? My buddy and I have been programming for 13 years or so, and a question that came up this morning was: is it better to know a little of a lot (i.e. web, desktop, VB.Net, C#, jQuery, PHP, Java, etc.), or is it better to know a lot of a little (meaning an expert in something)? The context of this question is: what makes someone a senior programmer? Is it someone who has been around the block a few times and has been in many different situations, or someone who is locked into a specific technology and is super knowledgeable in that one technology? I see pros and cons of both scenarios. Just wondering what others thought.

  • HTG Explains: How Antivirus Software Works

    - by Chris Hoffman
    Antivirus programs are powerful pieces of software that are essential on Windows computers. If you’ve ever wondered how antivirus programs detect viruses, what they’re doing on your computer, and whether you need to perform regular system scans yourself, read on. An antivirus program is an essential part of a multi-layered security strategy – even if you’re a smart computer user, the constant stream of vulnerabilities in browsers, plug-ins, and the Windows operating system itself makes antivirus protection important.

  • Developing Schema Compare for Oracle (Part 5): Query Snapshots

    - by Simon Cooper
    If you've emailed us about a bug you've encountered with the EAP or beta versions of Schema Compare for Oracle, we probably asked you to send us a query snapshot of your databases. Here, I explain what a query snapshot is, and how it helps us fix your bug.

    Problem 1: Debugging users' bug reports
    When we started the Schema Compare project, we knew we were going to get problems with users' databases - configurations we hadn't considered, features that weren't installed, Unicode issues, weird dependencies... With SQL Compare, users are generally happy to send us a database backup that we can restore using a single RESTORE DATABASE command on our test servers and immediately reproduce the problem. Oracle, on the other hand, would be a lot more tricky. As Oracle generally has a 1-to-1 mapping between instances and databases, any databases users sent would have to be restored to their own instance. Furthermore, the number of steps required to get a properly working database, and the size of most Oracle databases, made it infeasible to ask every customer who came across a bug during our beta program to send us their databases. We also knew that there would be lots of issues with data security that would make it hard to get backups. So we needed an easier way to debug customers' issues and sort out what strange schema data Oracle was returning.

    Problem 2: Test execution time
    Another issue we knew we would have to solve was the execution time of the tests we would produce for the Schema Compare engine. Our initial prototype showed that querying the data dictionary for schema information was going to be slow (at least 15 seconds per database), and this is generally proportional to the size of the database. If you're running thousands of tests on the same databases, each one registering separate schemas, not only would the tests take hours and hours to run, but the test servers would be hammered senseless.

    The solution
    To solve these, we needed to be able to populate the schema of a database without actually connecting to it. The IDataReader interface is the primary way we read data from an Oracle server. The data dictionary queries we use return their data as simple strings and numbers, which we then process and reconstruct into an object model, and the results of these queries are identical for identical schemas. So, we can record the raw results of the queries once, and then replay those results to construct the same object model as many times as required without needing to connect to the original database. This is what query snapshots do. They are binary files containing the raw, unprocessed data we get back from the Oracle server for all the queries we run on the data dictionary to get schema information. The core of the query snapshot generation takes the results of the IDataReader we get from running queries on Oracle, and passes the row data to a BinaryWriter that writes it straight to a file. The query snapshot can then be replayed to create the same object model; when the results of a specific query are needed by the population code, we simply read the binary data stored in the file on disk and present it through an IDataReader wrapper. This is far faster than querying the server over the network, and allows us to run tests in a reasonable time.
    Query snapshots also allow us to easily debug a customer's problem; using a simple snapshot generation program, users can generate a query snapshot to send along with a bug report, which we can immediately replay on our machines to debug the issue, rather than having to obtain database backups and restore databases to test systems. There are also far fewer problems with data security; query snapshots only contain schema information, which is generally less sensitive than table data.

    Query snapshots implementation
    However, actually implementing such a feature did have a couple of 'gotchas' to it. My second blog post detailed the development of the dependencies algorithm we use to ensure we get all the dependencies in the database. That algorithm uses data from both databases to find all the needed objects - what database you're comparing to affects what objects get populated from both databases. We get information on these additional objects using an appropriate WHERE clause on all the population queries. So, in order to accurately replay the results of querying the live database, the query snapshot needs to be a snapshot of a comparison of two databases, not just of populating a single database. Furthermore, although the population queries for code objects (e.g. querying all_tab_cols to get column information) can simply be passed straight from the IDataReader to the BinaryWriter, we need to hook into and run the live dependencies algorithm while we're creating the snapshot to ensure we get the same WHERE clauses, and the same query results, as if we were populating straight from a live system. We also need to store the results of the dependencies queries themselves, as the resulting dependency graph is stored within the OracleDatabase object that is produced, and is later used to help order actions in synchronization scripts. This is significantly helped by the dependencies algorithm being deterministic - given the same input, it will always return the same output. Therefore, when we're replaying a query snapshot and processing dependency information, we simply have to return the results of the queries in the order we got them from the live database, rather than trying to calculate the contents of all_dependencies on the fly.

    Query snapshots are a significant feature in Schema Compare that really helps us to debug problems with the tool, as well as making our testers happier. Although not really user-visible, they are very useful to the development team, helping us fix bugs in the product much faster than we otherwise would be able to.
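
    To make the record-and-replay idea concrete, here is a minimal sketch of streaming raw IDataReader rows to a BinaryWriter and reading them back. The class name, on-disk layout, and string-only storage are illustrative assumptions for this post, not Schema Compare's actual implementation:

        using System.Collections.Generic;
        using System.Data;
        using System.IO;

        // Hypothetical helper illustrating the record/replay idea.
        static class QuerySnapshot
        {
            // Record: stream every row of a query result straight to a binary file.
            public static void Record(IDataReader reader, BinaryWriter writer)
            {
                writer.Write(reader.FieldCount);
                while (reader.Read())
                {
                    writer.Write(true); // marker: another row follows
                    for (int i = 0; i < reader.FieldCount; i++)
                    {
                        bool isNull = reader.IsDBNull(i);
                        writer.Write(isNull);
                        if (!isNull)
                            writer.Write(reader.GetValue(i).ToString()); // simplified: store everything as strings
                    }
                }
                writer.Write(false); // marker: end of result set
            }

            // Replay: read the rows back without any database connection.
            // (A production version would present this through an IDataReader wrapper.)
            public static IEnumerable<string[]> Replay(BinaryReader reader)
            {
                int fieldCount = reader.ReadInt32();
                while (reader.ReadBoolean())
                {
                    var row = new string[fieldCount];
                    for (int i = 0; i < fieldCount; i++)
                        row[i] = reader.ReadBoolean() ? null : reader.ReadString();
                    yield return row;
                }
            }
        }

    The point of the design is that Replay never touches the network: identical schemas produce identical query results, so the file is a faithful stand-in for the live data dictionary.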

  • Referencing an EDMX File in a Separate VS Project from a T4 Template

    - by Paul Petrov
    In my project I needed to separate the template-generated entities and context into projects separate from the EDMX file. I stumbled across the problem of how to make the template generator find the EDMX file without hardcoding an absolute path into the template. Using a relative path directly (inputFile = @"..\ProjectFolder\DataModel.edmx") generated an error:

    Error 2: Running transformation: System.IO.DirectoryNotFoundException: Could not find a part of the path 'C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\ProjectFolder\DataModel.edmx'

    In other words, the relative path is resolved against the Visual Studio installation directory rather than the project. The code that worked well for me, when placed at the beginning of the .tt file (note that with a verbatim @"..." string the backslashes must not be doubled):

    ...
    string rootPath = Host.ResolvePath(String.Empty);
    string relativePath = @"..\ProjectDir\DataModel.edmx";
    string inputFile = Path.Combine(rootPath, relativePath);
    EdmItemCollection ItemCollection = loader.CreateEdmItemCollection(inputFile);
    ...
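
    As a hedged aside: for Host.ResolvePath to be available at all, the template must be host-specific, and Path and the EDM loader need the usual directives. A typical header might look like the following (EF.Utility.CS.ttinclude is the standard utility include shipped with the EF templates; adjust to your setup):

        <#@ template language="C#" debug="false" hostspecific="true" #>
        <#@ include file="EF.Utility.CS.ttinclude" #>
        <#@ import namespace="System.IO" #>

    It is the hostspecific="true" attribute that exposes the Host property used by ResolvePath.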

  • 5 Tips and Tricks to Get the Most Out of Steam

    - by Chris Hoffman
    If you’re a PC gamer, there’s a good chance you’re familiar with Valve’s Steam and use it regularly. Steam includes a variety of cool features that you might not notice if you’re just using it to install and launch games. These tips will help you take advantage of an SSD for faster game loading times, browse the web from within a game, download games remotely, create backup copies of your games, and use strong security features.

  • An alternative way to request read receipts

    - by lavanyadeepak
    Sometime or other, we use messaging namespaces like System.Net.Mail or System.Web.Mail to send emails from our applications. When we need to include headers to request delivery or read receipts (often called Message Disposition Notifications), we lock ourselves into the limitation that not all email servers and email clients honor them. We can push this boundary a little now, thanks to a technique I discovered from Gawab. It embeds a small invisible image of 1x1 dimension, and the image source reads as recieptimg.php?id=2323425324. When this image is requested by the web browser or email client, the server-side handler does a smart mapping based on the ID to indicate that the message was read. These are known as 'web bugs'. But wait - it is not a foolproof solution, since spammers misuse this technique to confirm that an email address is active, and most email clients suppress inline images for security reasons. I just thought I would share this observation for the benefit of others.
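
    A minimal sketch of the technique with System.Net.Mail, assuming a hypothetical tracking endpoint (the URL, handler name, and message ID are illustrative, not Gawab's actual implementation):

        using System.Net.Mail;

        class WebBugDemo
        {
            // Builds an HTML email whose invisible 1x1 image, when fetched by the
            // recipient's client, tells our server the message was rendered.
            static MailMessage BuildTrackedMessage(string to, string messageId)
            {
                var msg = new MailMessage("sender@example.com", to);
                msg.Subject = "Hello";
                msg.IsBodyHtml = true;
                msg.Body = "<p>Hello!</p>" +
                           "<img src=\"https://example.com/receiptimg?id=" + messageId + "\"" +
                           " width=\"1\" height=\"1\" alt=\"\" />";
                return msg;
            }
        }

    Sending is then the usual SmtpClient.Send(msg); on the server side, the handler simply logs the id from the query string and returns a 1x1 image.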

  • Why "Fork me on github"?

    - by NoBugs
    I understand how GitHub works, but one thing I've been confused about is why almost every OSS project lately has a "Fork me on GitHub" link on its homepage. For example, http://jqtjs.com/, http://www.daviddurman.com/flexi-color-picker/, and others. Why is this so common? Is it that they want/need code validation - checking for security/performance improvements that they may not know how to do? Is it meant to show that this is a collaborative project - you're welcome to add improvements? Do they work for GitHub, or want to promote its service? Oddly enough, I don't think I've seen a "Fork project on Bitbucket" logo recently. My first reaction to that logo was that the project probably needs to be modified (forked) in order to integrate it with anything useful - or that they are encouraging a fragmented codebase, with everyone making their own fork of the project. But I don't think that is the intent.

  • Mount encrypted volumes from command line?

    - by cha
    If I have an encrypted external disk (or an internal disk that is not in fstab), I see an entry for it in Nautilus, labeled something like "X GB Encrypted Volume". I can click on this volume and am prompted for a password to decrypt and mount the device. But how do I do this from the command line? This wiki page, and the other docs I can find, only refer to GUI methods of decrypting the device, which won't do in the context of headless servers or SSH logins. Is there a simple way to get devices to mount to automatic locations in "/media", just like they would with the GUI? (I'm not asking about encrypted home directories -- I'm aware of ecryptfs-mount-private. This question is about additional encrypted volumes.)

  • Public Cloud, co-location and managed services ... what is the cloud?

    - by llaszews
    Recently I have had conversations with a number of people who are selling and implementing 'cloud' solutions. I put cloud in quotes because implementations like co-location (aka co-lo) and managed services (sometimes referred to as 'your mess for less') have become popular options for companies moving to the cloud. These are obviously not pure public cloud offerings, and are probably closer to hybrid cloud implementations, as the infrastructure (PaaS and IaaS) is dedicated to a specific customer. This eliminates the security, multi-tenancy, performance and other concerns that companies have regarding the public cloud. Are co-location and managed services cloud to you? Are they something your company is considering when you think about cloud?

  • Referencing external javascript vs. hosting my own copy

    - by Mr. Jefferson
    Say I have a web app that uses jQuery. Is it better practice to host the necessary JavaScript files on my own servers along with my website files, or to reference them on jQuery's CDN (example: http://code.jquery.com/jquery-1.7.1.min.js)? I can see pros for both sides:

    - If it's on my servers, that's one less external dependency; if jQuery went down or changed their hosting structure or something like that, then my app breaks. But I feel like that won't happen often; there must be lots of small-time sites doing this, and the jQuery team will want to avoid breaking them.
    - If it's on my servers, that's one less external reference that someone could call a security issue.
    - If it's referenced externally, then I don't have to worry about the bandwidth to serve the files (though I know it's not that much).
    - If it's referenced externally and I'm deploying this web site to lots of servers that need to have their own copies of all the files, then it's one less file I have to remember to copy/update.

  • What are the so-called "levels" of understanding multithreading?

    - by Dan Tao
    I seem to remember reading somewhere a list of four "levels" of understanding multithreading. This may have been in a formal publication, or it may have been in an extremely informal context (even in a Stack Overflow question, for example). Unfortunately I don't remember who referred to them or precisely what they were. I seem to recall that they were roughly like:

    1. Total ignorance
    2. Awareness mixed with incompetence
    3. Relative competence mixed with fear
    4. True understanding

    My intention is to refer to these levels in a blog post I'm writing, with a reference; but I can't for the life of me remember where I first encountered this list. Brief Google searches have proved unfruitful.

  • How To Check If Your Account Passwords Have Been Leaked Online and Protect Yourself From Future Leaks

    - by Chris Hoffman
    Security breaches and password leaks happen constantly on today’s Internet. LinkedIn, Yahoo, Last.fm, eHarmony – the list of compromised websites is long. If you want to know whether your account information was leaked, there are some tools you can use. These leaks often lead to many compromised accounts on other websites. However, you can protect yourself by using unique passwords everywhere – if you do, password leaks won’t be a threat to you.

  • forward sudo verification

    - by Timo Kluck
    I often use the following construct for building and installing a tarball: sudo -v && make && sudo make install which allows me to enter my password immediately and have everything done unattended. This works well except in the rare case that building takes longer than the sudo timeout, which may happen on my rather slow machine with large projects (even when using make -j4). But when the build takes a long time, that's exactly when doing things unattended has a great advantage. Can anyone think of a shell construct that lets me input my password immediately, runs make under normal permissions, and runs make install under elevated permissions? For security reasons, I don't want to configure my user to use sudo without a password. A viable option is to set the timeout to something very long, but I'm hoping for something more elegant.

  • Do you use third-party companies to review your company's code?

    - by CodeToGlory
    I am looking to get the following:

    - A basic code review, to make sure the code follows the guidelines imposed.
    - Security code analysis, to make sure there are no loopholes.
    - Confirmation that there are no performance bottlenecks, by doing a load test etc.

    We have a lot of code coming in from third parties, and it is becoming laborious to manage code reviews, hence I am looking to see whether others employ such practices. I understand that it may be a concern for some and would raise the question, "Well, who is going to make sure the agency is doing their job right?" But basically I am just looking for a third party who can hold all vendor code to the same standards.

  • Synonyms for different languages in LibreOffice Writer? [closed]

    - by cipricus
    Possible Duplicate: How do I add English-UK thesaurus in LibreOffice? When setting text to English (US), the context menu contains 'Synonyms'. This is not the case for the other languages for which I have spelling installed (English UK, French, etc.). Can I have the Synonyms option for UK English too, for example? Even that is a problem, although there seem to be solutions around for it - here, for example (which is a link to here) - but after testing I cannot see synonyms for UK English. Also, this, which was reported as a solution, is not working anymore, it seems. What about other languages? (Please notice that this question is not just about English-UK. I initially noticed that 'Synonyms' was missing in relation to British English, but I am asking how to solve the issue in general or, when there is no general solution, how to solve it on a case-by-case basis. For the meta consequence of all this, involving the issue of it being a duplicate, see the comments under here and this question.)

  • Rebuilding CoasterBuzz, Part II: Hot data objects

    - by Jeff
    This is the second post, originally from my personal blog, in a series about rebuilding one of my Web sites, which has been around for 12 years. More: Part I: Evolution, and death to WCF

    After the rush to get moving on stuff, I temporarily lost interest. I went almost two weeks without touching the project, in part because the next thing on my backlog was doing up a bunch of administrative pages. So boring. Unfortunately, because most of the site's content is user-generated, you need some facilities for editing data. CoasterBuzz has a database full of amusement parks and roller coasters. The entities enjoy the relationships that you would expect, though they're further defined by "instances" of a coaster, to define one that has moved between parks as one, with different names and operational dates. And of course, there are pictures and news items, too. It's not horribly complex, except when you have to account for a name change and display just the newest name.

    In all previous versions, data access was straight SQL. As so much of the old code was rooted in 2003, with some changes in 2008, there wasn't much in the way of ORM frameworks going on then. Let me rephrase that: I mostly wasn't interested in ORMs. Since that time, I used a little LINQ to SQL in some projects, and a whole bunch of nHibernate while at Microsoft. Through all of that experience, I have to admit that these frameworks are often a bigger pain in the ass than not. They're great for basic CRUD operations, but when you start having all kinds of exotic relationships, they get difficult, and generate all kinds of weird SQL under the covers. The black box can quickly turn into a black hole. Sometimes you end up having to build all kinds of new expertise to do things "right" with a framework.

    Still, despite my reservations, I used the newer version of Entity Framework, with the "code first" modeling, in a science project and I really liked it. Since it's just a right-click away with NuGet, I figured I'd give it a shot here. My initial effort was spent defining the context class, which requires a bit of work because I deviate quite a bit from the conventions that EF uses, starting with table names. Then throw in some partial querying of certain tables (where you'll find image data), and splitting tables across several objects (navigation properties). I won't go into the details, because these are all things that are well documented around the Internet, but there was a minor learning curve there.

    The basics of reading data using EF are fantastic. For example, a roller coaster object has a park associated with it, as well as a number of instances (if it was ever relocated), and there also might be a big banner image for it. This is stupid easy to use because it takes one line of code in your repository class, and by the time you pass it to the view, you have a rich object graph that has everything you need to display stuff. Likewise, editing simple data is also, well, simple. For this goodness, thank the ASP.NET MVC framework. The UpdateModel() method on the controllers is very elegant. Remember the old days of assigning all kinds of properties to objects in your Webforms code-behind? What a time-consuming mess that used to be. Even if you're not using an ORM tool, having hydrated objects come off the wire is such a time saver. Not everything is easy, though.
    When you have to persist a complex graph of objects, particularly if they were composed in the user interface with all kinds of AJAX elements and list boxes, it's not just a simple matter of submitting the form. There were a few instances where I ended up going back to "old-fashioned" SQL just in the interest of time. It's not that I couldn't do what I needed with EF; it's just that the efficiency, both my own and that of the generated SQL, wasn't good. Since EF context objects expose a database connection object, you can use that to do the old-school ADO.NET stuff you've done for a decade. Using various extension methods from POP Forums' data project, it was a breeze. You just have to stick to your decision, in this case. When you start messing with SQL directly, you can't go back in the same code to messing with entities, because EF doesn't know what you're changing. Not really a big deal.

    There are a number of takeaways from using EF. The first is that you write a lot less code, which has always been a desired outcome of ORMs. The other lesson, and I particularly learned this the hard way working on the MSDN forums back in the day, is that trying to retrofit an ORM framework onto an existing schema isn't fun at all. The CoasterBuzz database isn't bad, but there are design decisions I'd make differently if I were starting from scratch.

    Now that I have some of this stuff done, I feel like I can start to move on to the more interesting things on the backlog. There's a lot to do, but at least it's fun stuff, and not more forms that will be used infrequently.
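
    As a rough illustration of the kind of one-line rich reads described above, here is a minimal EF code-first sketch; the entity and table names are invented for the example, not CoasterBuzz's actual schema:

        using System.Collections.Generic;
        using System.Data.Entity;
        using System.Linq;

        // Invented entities, loosely modeled on the park/coaster/instance graph above.
        public class Park
        {
            public int ParkId { get; set; }
            public string Name { get; set; }
        }

        public class CoasterInstance
        {
            public int CoasterInstanceId { get; set; }
            public string NameAtPark { get; set; }
        }

        public class Coaster
        {
            public int CoasterId { get; set; }
            public string Name { get; set; }
            public virtual Park Park { get; set; }                                // navigation property
            public virtual ICollection<CoasterInstance> Instances { get; set; }   // relocations over time
        }

        public class CoasterContext : DbContext
        {
            public DbSet<Park> Parks { get; set; }
            public DbSet<Coaster> Coasters { get; set; }
            public DbSet<CoasterInstance> CoasterInstances { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                // Deviating from EF's table-name conventions, as the post describes.
                modelBuilder.Entity<Coaster>().ToTable("RollerCoasters");
            }
        }

        public class CoasterRepository
        {
            // The "one line of code" read: eager-load the whole graph for the view.
            public Coaster GetCoaster(CoasterContext db, int id)
            {
                return db.Coasters
                         .Include(c => c.Park)
                         .Include(c => c.Instances)
                         .Single(c => c.CoasterId == id);
            }
        }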

  • Is there a way to use an SSH connection to access SMB or UPnP files without setting up a VPN?

    - by Michael Chapman
    What I'm trying to do is set up an SSH key that only gives access to certain directories; for security reasons I don't want it to have full access to my SSH server. I already have the ability to access the directories I need over my local network (right now using SMB, although I also used UPnP for a while). I need, however, to be able to access those files securely over the internet from both Ubuntu and Windows machines. I'm somewhat new to SSH and not sure what the best approach to solving my problem is. If anyone knows how I can do this, or where I can find a detailed tutorial, I'd be grateful. And as always, if anything is confusing or if there are any comments or corrections, please let me know.

  • Why is my site not ranking for a particular keyword?

    - by user543087
    My site is only 3 days short of being 6 months old. This website is unique in that there is no competitor to this type of site in India, providing comparison of payment gateways in India, apart from the payment gateway companies themselves. I've optimized it for the keyword "payment gateway". I've changed the URLs twice, the latest being 3 months back, in which case Google Webmaster Tools reported plenty of 404s. I corrected the useful 404s and left the meaningless ones as they are. What is the reason it's not ranking well for "payment gateways"? Even sites with a single page about payment gateways seem to rank better than this. Is it due to: 1) a lot of outbound links to in-context companies and information, or 2) the 404s reported in Google Webmaster Tools? Another site of mine successfully gets 1500 unique visitors daily and ranks well in Google. I don't know why this one does not!

  • RabbitVCS suddenly stopped working in Nautilus with Ubuntu 11.04

    - by Sander
    A while ago I installed RabbitVCS on Ubuntu 11.04. It all worked pretty well at first, but a few weeks ago (maybe even more than a month) RabbitVCS suddenly disappeared from the Nautilus context menu. I visited this page: http://wiki.rabbitvcs.org/wiki/support/known-issues and saw some points I could try, but none of them got me back to a working version. Also, the issue "Rabbit VCS stopped working after upgrade to 11.10" does not describe the solution for me, so I think it might be something else. I have also tried to reinstall RabbitVCS from the PPA, which was recently updated according to this topic, but no luck. I am still on 11.04 (as I don't like the way Ubuntu is going in newer versions at all) and my Nautilus version is 2.32.2.1. Is there someone who can help with this one?

  • Rules for Naming

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved

    Naming Documents (or is it "Document, Naming"?)

    'Tis but thy name that is my enemy;
    Thou art thyself, though not a Montague.
    What's Montague? It is nor hand, nor foot,
    Nor arm, nor face, nor any other part
    Belonging to a man. O, be some other name!
    What's in a name? That which we call a rose
    By any other name would smell as sweet;
    So Romeo would, were he not Romeo call'd,
    Retain that dear perfection which he owes
    Without that title. Romeo, doff thy name
    And for that name which is no part of thee
    Take all myself.
    Shakespeare – Romeo and Juliet, Act II, Scene 2

    We normally only use the bold portion of the famous Shakespearean quote above, but it is really out of context. As the play unfolds, we learn that a name is all too powerful. Indeed, it is because of their names that the doomed lovers die. There might be life and death in a name (BTW, when I wrote this monograph, I was in Hatfield, PA. Remember the Hatfields and the McCoys?). This is a bit extreme, but in the field of Knowledge Management (KM), names are of the utmost importance as well. When I write an article about managing SharePoint sites, how should I name it? "Managing a site" or "Site, managing"? Nine times out of ten I'd opt for the latter. Almost everything we do is "managing", so to make life easier for a person looking for meaningful content, we title our articles starting with the differentiator rather than the common factor. As a rule of thumb, we start the name with the noun rather than the verb. It is not what we do that is the primary key; it is what we do it to.

    So, answer this – is it a "rule of thumb" or a "thumb rule"? This is tough. A lot of what we do when naming is a judgment call. Both thumb and rule are nouns, albeit concrete and abstract (more about this later), but to most people "thumb rule" is meaningless while "rule of thumb" is an idiom. The difference between knowledge and information is that knowledge is meaningful information placed in context. Thus I elect "rule of thumb". It is the more meaningful title.

    Abstract and concrete are relative terms. Many nouns (and verbs) that are abstract to a commoner are concrete to a practitioner of one profession or another, and may even have different concrete meanings in different professional jargons. Think about "running". To an executive it means running a business; to a marathoner its meaning is much more literal. Generally speaking, we store and disseminate knowledge within a practice more than we do in general. Even dictionaries and encyclopedias define terms as they apply to different audiences. The rule of thumb is to put the more concrete first, but within the audience's jargon.

    Even the title of this monograph is a question. Do I name it "Naming Documents" or "Documents, Naming"? Well, my own rule of thumb ("Here he goes again!?") states that the latter is better because it starts with a noun, but this is a document about naming more than it is about documents. The rules of naming also apply to graphs and charts, Excel spreadsheets, and so on. Thus, I vote for the former. A better title could have been "Naming Objects", only the word "Object" is a bit too abstract. How about just "Naming", or "Naming, rules of"? You get the drift.

    One of the ways to resolve all of this is to store the documents in knowledge bases, which may become the subject of a future punditry. Knowledge bases use keywords to describe their content. Use a metadata store for the keywords to at least attempt some common ground.
    Here is another general rule (rule of thumb?!!) – put at least one keyword in the title, and use subtitles. Here is an example: "Migrating documents – Screening, cleaning, and organizing our knowledge." The main keyword is "documents"; next is "migrating". The other keywords appear in the subtitle: "screening", "cleaning", and "organizing". Any questions? Send me an amply named document by email: [email protected]

  • Set secondary receiver in PayPal Chained Payment after the initial transaction

    - by CJxD
    I'm running a service whereby customers seek the services of 'freelancers' through our web platform. The customer makes a 'bid', which is immediately taken from their account as security. Once the job is completed, the customer marks it as accepted and the bid gets distributed to the freelancer(s) as a reward. After initially storing these rewards in the accounts of the freelancers and relying on MassPay to sort out paying them later, I realised that your business needs to be turning over at least £5000/month before MassPay is switched on. Instead, I was referred to Delayed Chained Payments in PayPal's Adaptive Payments API. This allows the customer to pay the primary receiver (my business) before the payment is later triggered to be sent to the secondary receivers (the freelancers). However, at the time the customer initiates this transaction, you must understand that nobody yet knows who will receive the reward. So, before I program this whole Adaptive Payments system: is it even possible to change or add the secondary receivers after the customer has paid? If not, what can I do?
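
    For context, here is a hedged sketch of what a delayed chained Pay call looks like against the classic Adaptive Payments NVP interface, as I understand it - note that the receiver list, including the secondaries, appears in the initial request, which is exactly the asker's difficulty. Credentials, amounts, and emails are placeholders (the application ID shown is PayPal's well-known sandbox ID); verify everything against PayPal's documentation before relying on it.

        using System;
        using System.Collections.Specialized;
        using System.Net;
        using System.Text;

        class DelayedChainedPay
        {
            static void Main()
            {
                var fields = new NameValueCollection
                {
                    { "actionType", "PAY_PRIMARY" },   // pay the primary now, secondaries later
                    { "currencyCode", "GBP" },
                    { "receiverList.receiver(0).email", "business@example.com" },
                    { "receiverList.receiver(0).amount", "100.00" },
                    { "receiverList.receiver(0).primary", "true" },
                    { "receiverList.receiver(1).email", "freelancer@example.com" },
                    { "receiverList.receiver(1).amount", "90.00" },
                    { "returnUrl", "https://example.com/ok" },
                    { "cancelUrl", "https://example.com/cancel" },
                    { "requestEnvelope.errorLanguage", "en_US" },
                };

                using (var client = new WebClient())
                {
                    client.Headers.Add("X-PAYPAL-SECURITY-USERID", "api-username");
                    client.Headers.Add("X-PAYPAL-SECURITY-PASSWORD", "api-password");
                    client.Headers.Add("X-PAYPAL-SECURITY-SIGNATURE", "api-signature");
                    client.Headers.Add("X-PAYPAL-APPLICATION-ID", "APP-80W284485P519543T"); // sandbox app ID
                    client.Headers.Add("X-PAYPAL-REQUEST-DATA-FORMAT", "NV");
                    client.Headers.Add("X-PAYPAL-RESPONSE-DATA-FORMAT", "NV");

                    byte[] response = client.UploadValues(
                        "https://svcs.sandbox.paypal.com/AdaptivePayments/Pay", fields);
                    Console.WriteLine(Encoding.UTF8.GetString(response));
                    // The response contains a payKey; a later ExecutePayment call with
                    // that key releases the secondary (freelancer) payments.
                }
            }
        }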

  • Is hierarchical product backlog a good idea in TFS 2012-2013?

    - by Matías Fidemraizer
    I'd like to validate that I'm not going the wrong way. My team project is using Visual Studio Scrum 2.x. Since each area/product has many kinds of requirements (security, user interface, HTTP/REST services...), I tried to manage this by creating "parent backlogs" which are "open forever" and contain generic requirements. Those parent backlogs have other "open forever" backlogs, and/or sprint backlogs. For example:

    HTTP/REST Services (forever)
        Profiles API (forever)
            POST profile (forever)
                We need a basic HTTP/REST profiles API to register new user profiles (sprint backlog)

    Is this the right way of organizing the product backlog? Note: I know there are different points of view, and what is right for some is wrong for others. I'm looking for validation of whether this is a possible good practice on TFS with Visual Studio Scrum.

  • Initial class design: access modifiers and no-arg constructors

    - by yas
    Context: I am a student working through class design in a personal/side project for the summer. I've never written anything implemented by others, nor had to maintain code. I'm trying to maximize encapsulation and imagining what would make code easy to maintain.

    Concept: tight/loose class design, where "tight" and "loose" refer to access modifiers and constructors.

    Tight: initially, everything, including setters, is private, and a no-arg constructor is not provided (only a full constructor).
    Loose: not tight.
    Exceptions: the obvious, like toString.

    Reasoning: if code is tight from the very beginning, then it should be guaranteed that changes, with respect to access/creation, never damage existing implementations. The loosening of code happens incrementally and must be thought through, justified, and safe (validated).

    Benefit: existing implementing code should not break if changes are made later.
    Cost: takes more time to create.

    Since this is my own thinking, I hope to get feedback as to whether I should push to work this way. Good idea or bad idea?
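
    A minimal sketch of what a "tight" class might look like, in C# terms (the Account class is invented for illustration, with ToString playing the role of the question's toString exception):

        // Illustrative "tight" class: no setters, no no-arg constructor,
        // nothing exposed beyond what is strictly needed.
        public sealed class Account
        {
            private readonly string owner;
            private readonly decimal balance;

            // Only a full constructor; callers must supply every field up front.
            public Account(string owner, decimal balance)
            {
                this.owner = owner;
                this.balance = balance;
            }

            // One of the "obvious" exceptions described above.
            public override string ToString()
            {
                return owner + ": " + balance;
            }

            // Loosening happens deliberately, one member at a time, e.g. adding
            // a read-only property later once a caller genuinely needs it:
            // public string Owner { get { return owner; } }
        }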

  • Multiple vulnerabilities in Thunderbird

    - by chandan
    CVE             Description                                                               CVSSv2 Base Score
    CVE-2011-2372   Permissions, Privileges, and Access Controls vulnerability                3.5
    CVE-2011-2995   Denial of Service (DoS) vulnerability                                     10.0
    CVE-2011-2997   Denial of Service (DoS) vulnerability                                     10.0
    CVE-2011-2998   Denial of Service (DoS) vulnerability                                     10.0
    CVE-2011-2999   Permissions, Privileges, and Access Controls vulnerability                4.3
    CVE-2011-3000   Improper Control of Generation of Code ('Code Injection') vulnerability   4.3
    CVE-2011-3001   Permissions, Privileges, and Access Controls vulnerability                4.3
    CVE-2011-3005   Denial of Service (DoS) vulnerability                                     9.3
    CVE-2011-3232   Improper Control of Generation of Code ('Code Injection') vulnerability   9.3

    Component: Thunderbird. Product and resolution: Solaris 11 11/11 SRU 2; Solaris 10 - Contact Support.

    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.
