Search Results

Search found 1627 results on 66 pages for 'scenarios'.

Page 14/66 | < Previous Page | 10 11 12 13 14 15 16 17 18 19 20 21  | Next Page >

  • Behavior-Driven Development / Use case diagram

    - by Mik378
    With the growth of Behavior-Driven Development and the acceptance testing it prescribes, are use case diagrams still useful, or do they lead to "over-documentation"? Acceptance tests are specifications by example, and use cases promote much the same idea in a more generic form (cases rather than scenarios), so aren't the two too similar to justify maintaining both on a newly created project? From this link, one opinion is: "Another realization I had is that if you do UseCases and automated AcceptanceTests you are essentially doubling your work. There is duplication between the UseCases and the AcceptanceTests. I think there is a good case to be made that UserStories + AcceptanceTests are a more efficient way to work when compared to UseCases + AcceptanceTests." What do you think?

    Read the article

  • A deadlock was detected while trying to lock variables in SSIS

    Error: 0xC001405C at SQL Log Status: A deadlock was detected while trying to lock variables "User::RowCount" for read/write access. A lock cannot be acquired after 16 attempts. The locks timed out.

    Have you ever considered variable locking when building your SSIS packages? I expect many people haven't, simply because most of the time you never see an error like the one above. I'll try to explain a few key concepts about variable locking, and hopefully you never will see that error. First of all, what is all this variable locking about? Put simply, SSIS variables have to be locked before they can be accessed, and then unlocked once you have finished with them. This is baked into SSIS, presumably to reduce the risk of race conditions, but with that comes some additional overhead: you need to be careful to avoid lock conflicts in some scenarios.

    The most obvious place you will come across any hint of locking (no pun intended) is the Script Task or Script Component with their ReadOnlyVariables and ReadWriteVariables properties. These two properties allow you to enter lists of variables to be used within the task, or to put it another way, lists of variables to be locked so that they are available within the task. During the task pre-execute phase the variables are locked, you then use them during the execute phase when your code is run, and they are unlocked for you during the post-execute phase. So by entering the variable names in one of the two lists, the locking is taken care of for you, and you just read and write to the Dts.Variables collection that is exposed in the task for that purpose. As you can see in the image above, the variable PackageInt is specified, which means that when I write the code inside that task I don't have to worry about locking at all, as shown below.

        public void Main()
        {
            // Set the variable value to something new
            Dts.Variables["PackageInt"].Value = 199;

            // Raise an event so we can play in the event handler
            bool fireAgain = true;
            Dts.Events.FireInformation(0, "Script Task Code", "This is the script task raising an event.", null, 0, ref fireAgain);

            Dts.TaskResult = (int)ScriptResults.Success;
        }

    As you can see, as well as accessing the variable hassle free, I also raise an event. Now consider a scenario where I have an event handler as well, as shown below. What if my event handler tries to use the same variable too? Well, obviously for the point of this post, it fails with the error quoted previously. The reason why is clearly illustrated if you consider the following sequence of events:

    1. Package execution starts.
    2. Script Task in Control Flow starts.
    3. Script Task in Control Flow locks the PackageInt variable, as specified in the ReadWriteVariables property.
    4. Script Task in Control Flow executes its script, and the On Information event is raised.
    5. The On Information event handler starts.
    6. Script Task in the On Information event handler starts.
    7. Script Task in the On Information event handler attempts to lock the PackageInt variable (for either read or write, it doesn't matter), but fails because the variable is already locked.

    The problem is caused by the event handler task trying to use a variable that is already locked by the task in Control Flow. Events are always raised synchronously, so the task in Control Flow that raised the event will not regain control until the event handler has completed, which means we really do have an unresolvable locking conflict, better known as a deadlock.

    In this scenario we can easily resolve the problem by managing the variable locking explicitly in code, so there is no need to specify anything for the ReadOnlyVariables and ReadWriteVariables properties.

        public void Main()
        {
            // Set the variable value to something new, with explicit lock control
            Variables lockedVariables = null;
            Dts.VariableDispenser.LockOneForWrite("PackageInt", ref lockedVariables);
            lockedVariables["PackageInt"].Value = 199;
            lockedVariables.Unlock();

            // Raise an event so we can play in the event handler
            bool fireAgain = true;
            Dts.Events.FireInformation(0, "Script Task Code", "This is the script task raising an event.", null, 0, ref fireAgain);

            Dts.TaskResult = (int)ScriptResults.Success;
        }

    Now the package will execute successfully, because the variable lock has already been released by the time the event is raised, so no conflict occurs. For those of you with a SQL engine background this should all sound strangely familiar: it boils down to getting in and out as fast as you can to reduce the risk of lock contention, be that SQL pages or SSIS variables.

    Unfortunately we cannot always manage the locking ourselves. The Execute SQL Task is very often used in conjunction with variables, either to pass in parameter values or to get results out. Either way, the task will manage the locking for you, and will fail when it cannot lock the variables it requires. The scenario outlined above is a clear-cut deadlock scenario: both parties are waiting on each other, so it is unresolvable. The mechanism used within SSIS isn't actually that clever, and whilst the message says it is a deadlock, it really just means it tried a few times and then gave up. The last part of the error message is actually the most accurate description of the failure: A lock cannot be acquired after 16 attempts. The locks timed out.

    Now this may come across as a recommendation to always manage locking manually in the Script Task or Script Component yourself, but I think that would be an overreaction. It is more of a reminder to be aware that in high-concurrency scenarios, especially when sharing variables across multiple objects, locking is an important design consideration. Update – Make sure you don't use explicit locking while also leaving the variable names in the ReadOnlyVariables and ReadWriteVariables lock lists, otherwise you'll get the deadlock error: you cannot lock a variable twice!
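    As a footnote to the explicit-locking example, when a script needs several variables at once the same idea can be wrapped in a try/finally so the locks are always released, even if the script throws. A rough sketch using the VariableDispenser is shown below; the variable names are just examples, and remember to leave the ReadOnlyVariables and ReadWriteVariables lists empty when you lock this way.

        public void Main()
        {
            // Lock only what we need, for as short a time as possible.
            Variables vars = null;
            try
            {
                Dts.VariableDispenser.LockForRead("User::BatchId");
                Dts.VariableDispenser.LockForWrite("User::RowCount");
                Dts.VariableDispenser.GetVariables(ref vars);

                int batchId = (int)vars["User::BatchId"].Value;
                vars["User::RowCount"].Value = batchId > 0 ? 199 : 0;
            }
            finally
            {
                // Always release the locks so downstream tasks and event handlers are not blocked.
                if (vars != null)
                {
                    vars.Unlock();
                }
            }

            Dts.TaskResult = (int)ScriptResults.Success;
        }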

    Read the article

  • With a small development team, how do you organize second-level support?

    - by Lenny222
    Say you have a team of 5 developers and your in-house customers demand reasonable support availability of, say, 5 days a week, 9am-6pm. I can imagine the following scenarios: The customers approach the same guy every time. Downside: single point of failure if the guy is unavailable. Each developer is assigned one week of support duty. Downside: how do you distribute the work evenly in times of planned (vacation) and unplanned (sickness) unavailability? Each developer is assigned one day of support duty. Downside: similar to above, but not as bad. A randomly picked developer handles the support request. Downside: maybe not fair, see above. What is your experience?

    Read the article

  • IRM Item Codes – what are they for?

    - by martin.abrahams
    A number of colleagues have been asking about IRM item codes recently – what are they for, when are they useful, how can you control them to meet some customer requirements? This is quite a big topic, but this article provides a few answers.

    An item code is part of the metadata of every sealed document – unless you define a custom metadata model. The item code is defined when a file is sealed, and usually defaults to a timestamp/filename combination. This time/name combo tends to make item codes unique for each new document, but actually item codes are not necessarily unique, as will become clear shortly. In most scenarios, item codes are not relevant to the evaluation of a user's rights – the context name is the critical piece of metadata, as a user typically has a role that grants access to an entire classification of information regardless of item code. This is key to the simplicity and manageability of the Oracle IRM solution. Item codes are occasionally exposed to users in the UI, but most users probably never notice and never care. Nevertheless, here is one example of where you can see an item code – when you hover the mouse pointer over a sealed file. As you see, the item code for this freshly created file combines a timestamp with the file name.

    But what are item codes for? The first benefit of item codes is that they enable you to manage exceptions to the policy defined for a context. Thus, I might have access to all Oracle – Internal files except for 2011_03_11 13:33:29 Board Minutes.sdocx. This simple mechanism enables Oracle IRM to provide file-by-file control where appropriate, whilst offering the scalability and manageability of classification-based control for the majority of users and content. You really don't want to be managing each file individually, but never say never. Item codes can also be used for the opposite effect – to include a file in a user's rights when their role would ordinarily deny access. So, you can assign a role that allows access only to specified item codes. For example, my role might say that I have access to precisely one file – the one shown above.

    So how are item codes set? In the vast majority of scenarios, item codes are set automatically as part of the sealing process. The sealing API uses the timestamp and filename as shown, and the user need not even realise that this has happened. This automatically creates item codes that are for all practical purposes unique – and that are also intelligible to users who might want to refer to them when viewing or assigning rights in the management UI. It is also possible for suitably authorised users and applications to set the item code manually or programmatically if required.

    Setting the item code manually using the IRM Desktop: the manual process is a simple extension of the sealing task. An authorised user can select the Advanced… sealing option, and will see a dialog that offers the option to specify the item code. To see this option, the user's role needs the Set Item Code right – you don't want most users to give any thought at all to item codes, so by default the option is hidden.

    Setting the item code programmatically: a more common scenario is that an application controls the item code programmatically. For example, a document management system that seals documents as part of a workflow might set the item code to match the document's unique identifier in its repository. This offers the option to tie IRM rights evaluation directly to the security model defined in the document management system. Again, the sealing application needs to be authorised to Set Item Code.

    The Payslip Scenario: to give a concrete example of how item codes might be used in a real-world scenario, consider a Human Resources workflow such as payslips. The goal might be to allow the HR team to have access to all payslips, but each employee to have access only to their own payslips. To enable this, you might have an IRM classification called Payslips. The HR team have a role in the normal way that allows access to all payslips. However, each employee would have an Item Reader role that only allows them to access files that have a particular item code – and that item code might match the employee's payroll number. So, employee number 123123123 would have access to items with that code. This shows why item codes are not necessarily unique – you can deliberately set the same code on many files for ease of administration. The employees might have the right to unseal or print their payslip, so the solution acts as a secure delivery mechanism that allows payslips to be distributed via corporate email without any fear that they might be accessed by IT administrators, or forwarded accidentally to anyone other than the intended recipient. All that remains is to ensure that as each user's payslip is sealed, it is assigned the correct item code – something that is easily managed by a simple IRM sealing application. Each month, an employee's payslip is sealed with the same item code, so you do not need to keep amending the list of items that the user has access to – they have access to all documents that carry their employee code.
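    To make the payslip scenario a little more concrete, here is a rough sketch of the kind of sealing loop a payroll application might run each month. The Payslip type and the SealFileWithItemCode helper are purely hypothetical placeholders, not the actual Oracle IRM sealing API; the point is simply that the item code is set to the employee's payroll number.

        using System;
        using System.Collections.Generic;

        // Hypothetical sketch only – SealFileWithItemCode stands in for whatever sealing
        // call the payroll application makes; it is not the real Oracle IRM interface.
        class Payslip
        {
            public string PdfPath;
            public string EmployeePayrollNumber;
        }

        class PayslipSealer
        {
            public void SealMonthlyPayslips(IEnumerable<Payslip> payslips)
            {
                foreach (var payslip in payslips)
                {
                    // Item code = payroll number, so an Item Reader role scoped to that
                    // code grants each employee access only to their own payslips.
                    SealFileWithItemCode(payslip.PdfPath, "Payslips", payslip.EmployeePayrollNumber);
                }
            }

            void SealFileWithItemCode(string sourceFile, string context, string itemCode)
            {
                // Placeholder: the real application would call its sealing API here.
                Console.WriteLine("Sealing {0} into context '{1}' with item code '{2}'",
                                  sourceFile, context, itemCode);
            }
        }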

    Read the article

  • Do real developers use UML and other CASE tools?

    - by Avi
    I'm a CS student, currently a junior, and in one of my classes this semester they have us studying all sorts of UML diagramming methods. Among others, we've touched on Petri nets, DFD diagrams, sequence diagrams, use case diagrams, collaboration diagrams, Jackson System Development diagrams, entity-relation diagrams, and more. I've worked on more than a few professional projects over the years and never encountered anyone who used these systems to any great degree (other than a general class diagram or a diagram of the tables in a database). I was just wondering if I could query the hive mind to see if this is true in your work experience too. Have you used these models at all and found them to be as important as they tell us students they are? Or is all this stuff just academic ivory-tower crap that people in the real world hardly ever touch? Which of these systems have you found to be effective and useful? Are there specific kinds of scenarios that they are more intended to be used in than what the typical software developer encounters?

    Read the article

  • ASP.NET: Using pickup directory for outgoing e-mails

    - by DigiMortal
    Sending e-mails out from web applications is a very common task. When we are working on or testing our systems with real e-mail addresses, we don't want recipients to receive the e-mails (especially if we are using some subset of real data). In this posting I will show you how to make the ASP.NET SMTP client write e-mails to disk instead of sending them out.

    SMTP settings for web application. I have often seen code where all SMTP information is kept in appSettings just to be read in code and handed to the SMTP client. This is not necessary, because we can define all these settings under the system.net => mailSettings node. If you are using web.config to keep SMTP settings, then all you have to do in your code is create the SmtpClient with the empty constructor: var smtpClient = new SmtpClient(); The empty constructor means that all settings are read from the web.config file.

    What is a pickup directory? If you want to drastically raise the e-mail throughput of your SMTP server, then communicating with it over the SMTP protocol is not a very wise plan; it only adds overhead to your network and SMTP server. Clients making connections and sending messages out is also overhead we can avoid. If clients write their e-mails to some folder that the SMTP server can access, then forwarding the e-mails is the only resource-hungry task left for the SMTP server. File operations are much faster than communication over the SMTP protocol. The directory where clients write their e-mails as files is called the pickup directory. Exchange Server, for example, has support for pickup directories. And since there are applications with a lot of users who want e-mail notifications, the .NET SMTP client supports writing e-mails to a pickup directory instead of sending them out.

    How to configure ASP.NET SMTP to use a pickup directory? It is very easy. This is all you need:

        <system.net>
          <mailSettings>
            <smtp deliveryMethod="SpecifiedPickupDirectory">
              <specifiedPickupDirectory pickupDirectoryLocation="c:\temp\maildrop\"/>
            </smtp>
          </mailSettings>
        </system.net>

    Now make sure you don't miss some points: the pickup directory must physically exist, because it is not created automatically; IIS (or Cassini) must have write permissions to the pickup directory; go through your code and look for hardcoded SMTP settings; and check all the places in your code where you send out e-mails to make sure no custom SMTP settings are used there. Also don't forget that your mails will now be written to the pickup directory and are no longer sent out to recipients.

    Advanced scenario: configuring the SMTP client in code. In some advanced scenarios you may need to support multiple SMTP servers. If the configuration is dynamic, or it is not kept in web.config, you need to initialize your SmtpClient in code. This is all you need to do:

        var smtpClient = new SmtpClient();
        smtpClient.DeliveryMethod = SmtpDeliveryMethod.SpecifiedPickupDirectory;
        smtpClient.PickupDirectoryLocation = pickupFolder;

    Easy, isn't it? I like it when advanced scenarios end up with simple and elegant solutions rather than rocket science.

    Note for the IIS SMTP service. The SMTP service of IIS is also able to use a pickup directory. If you have set up IIS with the SMTP service, you can configure your ASP.NET application to use the IIS pickup folder. In this case you have to use the following setting for the delivery method: SmtpDeliveryMethod.PickupDirectoryFromIis You can also set this in the web.config file:

        <system.net>
          <mailSettings>
            <smtp deliveryMethod="PickupDirectoryFromIis" />
          </mailSettings>
        </system.net>

    Conclusion. If you were still using other tricks to avoid sending e-mails out in development or testing environments, you can now remove that workaround code from your application and rely on the mail settings of ASP.NET. It is easy to configure, and you have less code to support when you use the built-in e-mail features wisely.
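    As a quick sanity check that the configuration above is being picked up, a tiny snippet like the following should produce an .eml file in the pickup folder instead of sending anything over the network; the addresses here are placeholders.

        using System.Net.Mail;

        public static class MailSmokeTest
        {
            public static void Send()
            {
                // All SMTP settings, including the pickup directory, come from web.config.
                var smtpClient = new SmtpClient();

                var message = new MailMessage(
                    "noreply@example.com",     // placeholder sender
                    "someone@example.com",     // placeholder recipient
                    "Pickup directory test",
                    "If this shows up as an .eml file in c:\\temp\\maildrop\\, the configuration works.");

                smtpClient.Send(message);
            }
        }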

    Read the article

  • Is GNOME 3 slow, or is it my machine?

    - by user103054
    I have a Dell XPS 15 (i7-2360, Nvidia 525M 1GB, with Optimus) running Ubuntu 12.10 64-bit, and I have Bumblebee installed for Optimus. Recently I installed GNOME Shell to give it a try and found it lagging in a few scenarios, though everything else is butter smooth. When I click on the date in GNOME 3, it looks as if a second elapses before the window is drawn, but other panel items like network or volume look fine. In GNOME 3, when I'm in the overview with the window previews, if I close one of the windows it takes about 2 seconds (clearly noticeable) before the remaining windows reshuffle. Everything else is fine in both Unity and GNOME Shell. What is causing this slowness? Is GNOME Shell really slower?

    Read the article

  • WCF Keep Alive: Whether to disable keepAliveEnabled

    - by Lijo
    I have a WCF web service hosted in a load-balanced environment. I do not need any WCF session-related functionality in the service. Question: in what scenarios will performance be best with keepAliveEnabled = false, and in what scenarios with keepAliveEnabled = true? Reference, from "Load Balancing": By default, the BasicHttpBinding sends a connection HTTP header in messages with a Keep-Alive value, which enables clients to establish persistent connections to the services that support them. This configuration offers enhanced throughput because previously established connections can be reused to send subsequent messages to the same server. However, connection reuse may cause clients to become strongly associated to a specific server within the load-balanced farm, which reduces the effectiveness of round-robin load balancing. If this behavior is undesirable, HTTP Keep-Alive can be disabled on the server using the KeepAliveEnabled property with a CustomBinding or user-defined Binding.
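    To illustrate the last sentence of that reference, here is a rough sketch of building such a binding in code. It assumes a plain HTTP transport with text message encoding; an existing BasicHttpBinding may use different encoding and message-version settings, so treat this as a starting point rather than a drop-in replacement.

        using System.ServiceModel.Channels;

        public static class LoadBalancedBindingFactory
        {
            // Sketch: an HTTP binding with the Keep-Alive header disabled so that
            // clients do not pin themselves to one server behind the load balancer.
            public static Binding CreateNoKeepAliveBinding()
            {
                var encoding = new TextMessageEncodingBindingElement();
                var transport = new HttpTransportBindingElement
                {
                    KeepAliveEnabled = false   // each request may use a new connection
                };
                return new CustomBinding(encoding, transport);
            }
        }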

    Read the article

  • How can I test if my rotated rectangle intersects a corner?

    - by Raven Dreamer
    I have a square, tile-based collision map. To check if one of my (square) entities is colliding, I get the vertices of the 4 corners and test those 4 points against my collision map. If none of those points are intersecting, I know I'm good to move to the new position. I'd like to allow entities to rotate. I can still calculate the 4 corners of the square, but once you factor in rotation, those 4 corners alone don't seem to be enough information to determine if the entity is trying to move to a valid state. For example: in the picture below, black is unwalkable terrain and red is the player's hitbox. The left scenario is allowed because the 4 corners of the red square are not in the black terrain. The right scenario would also be (incorrectly) allowed, because the player, cleverly turned at a 45° angle, has its corners in valid spaces, even though it is (quite literally) cutting the corner. How can I detect scenarios similar to the situation on the right?
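    One cheap way to catch that corner-cutting case is to test points along each edge of the rotated rectangle, not just the four corners. The sketch below assumes a hypothetical IsBlocked(x, y) lookup into your collision map and an arbitrary sample count; it is an approximation, not an exact test.

        using System;

        public static class RotatedRectCollision
        {
            // Hypothetical hook into the tile-based collision map.
            public static Func<float, float, bool> IsBlocked = (x, y) => false;

            // Returns true if any sampled point on the rectangle's edges lies in blocked terrain.
            // cx, cy: centre; halfW, halfH: half extents; angle: rotation in radians.
            public static bool Collides(float cx, float cy, float halfW, float halfH,
                                        float angle, int samplesPerEdge = 8)
            {
                float cos = (float)Math.Cos(angle);
                float sin = (float)Math.Sin(angle);

                // Corners in local space, ordered so consecutive pairs form the four edges.
                var corners = new (float x, float y)[]
                {
                    (-halfW, -halfH), (halfW, -halfH), (halfW, halfH), (-halfW, halfH)
                };

                for (int i = 0; i < 4; i++)
                {
                    var a = corners[i];
                    var b = corners[(i + 1) % 4];

                    // Sample along the edge, including both endpoints.
                    for (int s = 0; s <= samplesPerEdge; s++)
                    {
                        float t = (float)s / samplesPerEdge;
                        float lx = a.x + (b.x - a.x) * t;
                        float ly = a.y + (b.y - a.y) * t;

                        // Rotate into world space and translate to the rectangle's centre.
                        float wx = cx + lx * cos - ly * sin;
                        float wy = cy + lx * sin + ly * cos;

                        if (IsBlocked(wx, wy))
                            return true;
                    }
                }
                return false;
            }
        }

    Keep the sample spacing smaller than a tile so a thin strip of terrain cannot slip between two samples; for an exact answer you would instead clip each overlapped tile against the rectangle (a separating axis test), but edge sampling is usually good enough for tile maps.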

    Read the article

  • OT: March Madness 2011

    - by RickHeiges
    This past fall, I decided to take a break from Fantasy Football. Did I miss it? Yes to some extent. Fantasy Football can really eat up a lot of time. But - I still love March Madness (NCAA Men's Basketball Tourney). It doesn't take much time to pick out teams. Since you can't make any changes after the deadline and the computer keeps track of scoring/scenarios/etc, it is a fun thing that really takes a little time and can help you enjoy the games a bit more. Let's see how good you are at picking...(read more)

    Read the article

  • What is the most effective way to add functionality to unfamiliar, structurally unsound code?

    - by Coder
    This is probably something every developer has to face sooner or later. You have existing code written by someone else, and you have to extend it to work under new requirements. Sometimes it's simple, but sometimes the modules have medium-to-high coupling and medium-to-low cohesion, so the moment you start touching anything, everything breaks. And even when you get the new and old scenarios working again, you don't feel confident that it's been fixed correctly. One approach would be to write tests, but in reality, in all the cases I've seen, that was pretty much impossible (reliance on GUI, missing specifications, threading, complex dependencies and hierarchies, deadlines, etc.). So everything sort of falls back to the good ol' cowboy-coding approach. But I refuse to believe there is no other systematic way that would make everything easier. Does anyone know a better approach, or the name of the methodology that should be used in such cases?

    Read the article

  • Can I run MS Office apps installed under Windows from Ubuntu?

    - by Richard
    I don't have the option of installing the MS Office apps under Wine, mostly because I simply don't have the installation media, but these apps do exist on the workstation I use at work. I have installed Ubuntu on this machine on the same partition as MS Windows via the run-Ubuntu-as-a-Windows-app (not quite verbatim) installation instructions. The MS Windows is XP Professional and the MS Office version is 2007. Perhaps there are two scenarios: one where I can simply use the apps where they sit, and another where I can somehow "install" the existing executables into Ubuntu (Wine?) rather than installing their ISOs (or whatever), which, again, I don't have. Anyway, whatever you can tell me about this is good with me. Thank you so much.

    Read the article

  • Is there really an object-relational impedance mismatch?

    - by user52763
    It is always stated that it is hard to store application objects in relational databases – the object-relational impedance mismatch – and that is why document databases are better. However, is there really an impedance mismatch? An object has a key (albeit it may be hidden away by the runtime as a pointer to memory), a set of values, and foreign keys to other objects. An object maps onto tables about as naturally as it maps onto a document; neither really fits. I can see a use for databases that model the data into specific shapes for scenarios in the application – e.g. to speed up lookups and avoid joins, etc. – but wouldn't it be better to keep the data as normalized as possible at the core, and transform it as required?

    Read the article

  • SPD 2013 Missing design view, is it really such a big deal?

    - by Sahil Malik
    SharePoint 2010 Training: more information. First, please read what my friend Marc Anderson has to say about it: more, and more, and even more. Also, my friend Asif Rehmani has some views on it as well. They bring up some important points, but in short, everything that allows for visual editing of pages is gone! And anything that depended on visual editing (and there are many such scenarios) is gone with it. What is the practical upshot of all this? Read full article ....

    Read the article

  • DCOGS Balance Breakup Diagnostic in OPM Financials

    - by ChristineS-Oracle
    The purpose of this diagnostic (OPMDCOGSDiag.sql) is to identify the sales orders which constitute the Deferred COGS account balance. It will help you get detailed transaction information for sales orders across Order Management, Accounts Receivable, Inventory and the OPM Financials subledger at the organization level. This script is applicable to the various scenarios of standard sales orders and return orders (RMA), coupled with all the applicable OPM costing methods such as Standard, Actual and Lot costing. OBJECTIVE: to collect the information for sales order(s) at different stages of their life cycle in one spreadsheet at one go. This will help in: less time spent on data collection; faster diagnosis of the issue; easy collaboration across different modules like Order Management, Accounts Receivable, Inventory and Cost Management. You can download the script from Doc ID 1617599.1, DCOGS Balance Breakup (SO/RMA) and Diagnostic Analyzer in OPM Financials.

    Read the article

  • What counts as reinventing the wheel?

    - by dsimcha
    Do the following scenarios count as "reinventing the wheel" in your book? A solution exists, but not in the language you want to use, and existing solutions can't be interfaced with the language you want to use in a clean, idiomatic way. In principle you could get an existing library to do what you wanted with heavy modification, but you think it would probably be easier to just start from scratch. What you're writing has the same one-line description as stuff that's already been done, but you're targeting a different niche. For example, maybe your problem has been solved a zillion times before, but in a way that's inefficient for large datasets and your code works well for large datasets.

    Read the article

  • How are a Java ByteBuffer's limit and position variables updated?

    - by Dummy Derp
    There are two scenarios: writing and reading.
    Writing: Whenever I write something to the ByteBuffer by calling its put(byte[]) method, the position variable is incremented by the size of the byte[], and limit stays at the max. If, however, I put the data in a view buffer, then I will have to manually calculate and update the position. Before I call the write(ByteBuffer) method of the channel to write something, I will have to flip() the ByteBuffer so that position points to zero and limit points to the last byte that was written to the ByteBuffer.
    Reading: Whenever I call the read(ByteBuffer) method of a channel to read something, the position variable stays at 0 and the limit variable of the ByteBuffer points to the last byte that was read. So, if the ByteBuffer is smaller than the file being read, the limit variable is pushed to max. This means that the ByteBuffer is already flipped and I can proceed to extracting the values from the ByteBuffer. Please correct me where I am wrong :)

    Read the article

  • Webcast: Introduction To Causal Factors

    - by ChristineS-Oracle
    Webcast: Introduction To Causal Factors Date: June 11, 2014 at 11:00 am ET, 10:00 am CT, 9:00 am MT, 8:00 am PT, 8:30 pm India Time (Mumbai, GMT+05:30) This one-hour advisor webcast will provide an introduction to causal factors for Demand Management and AFDM. Pre-seeded causal factors will be discussed, as well as when they are not appropriate. Scenarios for when to add causal factors will be covered, along with best-practice methods for adding and using them. Topics will include: causal factors in DM and AFDM; pre-seeded causal factors; when to modify causal factor settings; best practice when working with causal factors. Details & Registration: Doc ID 1664606.1

    Read the article

  • I want a trivial example of where MongoDB can scale but a relational database will have trouble

    - by Ryan Weir
    I'm just learning to use MongoDB, and when discussing it with other programmers I would like a quick example of why NoSQL can be a good choice compared to a traditional RDBMS – however, the scenarios I come up with and can find online seem pretty contrived. E.g. a blog with lots of traffic could be represented relationally, but would require some performance tuning and joins across tables (assuming a fully normalized schema), whereas MongoDB would allow direct retrieval from one collection to the same effect. But the response I'm getting from other programmers is "why not just keep it relational and then add some trivial caching later?" Does anybody have a less contrived example where MongoDB will really shine and a relational db will fall over much quicker? The smaller the project/system the better, because it leaves less room for disagreement. Something along the lines of the complexity of the blog example would be really useful. Thanks.

    Read the article

  • Data management in unexpected places

    - by Ashok_Ora
    Data management in unexpected places. When you think of network switches, routers, firewall appliances, etc., it may not be obvious that at the heart of these kinds of solutions is an engine that can manage huge amounts of data at very high throughput with low latencies and high availability.

    Consider a network router that is processing tens (or hundreds) of thousands of network packets per second. So what really happens inside a router? Packets are streaming in at the rate of tens of thousands per second. Each packet has multiple attributes, for example, a destination, associated SLAs, etc. For each packet, the router has to determine the address of the next "hop" to the destination; it has to determine how to prioritize this packet. If it's a high-priority packet, then it has to be sent on its way before lower-priority packets. As a consequence of prioritizing high-priority packets, lower-priority data packets may need to be temporarily stored (held back), but addressed fairly. If there are security or privacy requirements associated with the data packet, those have to be enforced. You probably need to keep track of statistics related to the packets processed (someone's sure to ask). You have to do all this (and more) while preserving high availability, i.e. if one of the processors in the router goes down, you have to have a way to continue processing without interruption (the customer won't be happy with a "choppy" VoIP conversation, right?). And all this has to be achieved without ANY intervention from a human operator – the router is most likely to be in a remote location – it must JUST CONTINUE TO WORK CORRECTLY, even when bad things happen.

    How is this implemented? As soon as a packet arrives, it is interpreted by the receiving software. The software decodes the packet headers in order to determine the destination, the kind of packet (e.g. voice vs. data), the SLAs associated with the "owner" of the packet, etc. It looks up the internal database of "rules" for how to process this packet and handles the packet accordingly. The software might choose to hold on to the packet safely for some period of time, if it's a low-priority packet. Ah – this sounds very much like a database problem. For each packet, you have to minimally:

    · Look up the most efficient next "hop" towards the destination. The "most efficient" next hop can change, depending on latency, availability, etc.
    · Look up the SLA and determine the priority of this packet (e.g. voice calls get priority over data ftp).
    · Look up security information associated with this data packet. It may be necessary to retrieve the context for this network packet, since a network packet is a small "slice" of a session. The context for the "header" packet needs to be stored in the router in order to make this work.
    · If the priority of the packet is low, then "store" the packet temporarily in the router until it is time to forward the packet to the next hop.
    · Update various statistics about the packet.

    In most cases, you have to do all this in the context of a single transaction. For example, you want to look up the forwarding address and perform the "send" in a single transaction so that the forwarding address doesn't change while you're sending the packet. So, how do you do all this? Berkeley DB is a proven, reliable, high-performance, highly available embeddable database, designed for exactly these kinds of usage scenarios.

    Berkeley DB is a robust, reliable, proven solution that is currently being used in these scenarios. First and foremost, Berkeley DB (or BDB for short) is very, very fast. It can process tens or hundreds of thousands of transactions per second. It can be used as a pure in-memory database, or as a disk-persistent database. BDB provides high availability – if one board in the router fails, the system can automatically fail over to another board – no manual intervention required. BDB is self-administering – there's no need for manual intervention in order to maintain a BDB application. No need to send a technician to a remote site in the middle of nowhere on a freezing winter day to perform maintenance operations. BDB has been used in over 200 million deployments worldwide over the past two decades for mission-critical applications such as the one described here. You have a choice: spend valuable resources to implement similar functionality, or simply embed BDB in your application and off you go! I know what I'd do – choose BDB, so I can focus on my business problem. What will you do?

    Read the article

  • Ripping DVDs to ISO accurately

    - by johnnyturbo3
    I have been using Brasero to rip my DVD collection to .iso. However, I've discovered errors in some of the DVDs during playback, e.g. VLC would just stop playing the iso file when it hits a bad section (half-way through a film). The worst thing is that no errors or warnings were thrown during the ripping process – I could have… Is there a method or application that will monitor DVD/file data integrity and avoid such scenarios in the future? Anything equivalent to Exact Audio Copy or cdparanoia for DVDs?

    Read the article

  • RIA Services and Validation

    Earlier today, my SilverlightTV recording on RIA Services and Validation went online. I used validation as the feature area to focus on in this first recording on RIA Services, because I think it illustrates both the RIA Services value proposition and key elements of the vision around the project in a very direct manner. Specifically: focus on end-to-end solutions for data scenarios. It is not sufficient to just address querying data or submitting some changes; it is about providing the infrastructure...

    Read the article

  • Testing Reference Data Mappings

    - by Michael Stephenson
    Background: mapping reference data is one of the common scenarios in BizTalk development, and it's usually a bit of a pain when you need to manage a lot of reference data, whether that be through the BizTalk cross-referencing features or some kind of custom solution. I have seen many cases where only a couple of the mapping conditions are ever tested. Approach: as usual, I like to see these things tested in isolation before you start using them in your BizTalk maps, so you know your mapping functions are working as expected. This approach can be used for almost all of your reference-data-type mapping functions, where you can take advantage of MSTest's data-driven tests to test lots of conditions without having to write millions of tests. Walk-through: rather than go into the details here, I'm going to call out to one of my colleagues who wrote a nice little walk-through about using data-driven tests a while back. Check out Callum's blog: http://callumhibbert.blogspot.com/2009/07/data-driven-tests-with-mstest.html
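    For anyone who wants a starting point before clicking through, a minimal sketch of such a data-driven MSTest is shown below. The CSV file name, the column names, and the ReferenceDataMapper stub are placeholders for your own reference data mapping function; each row in the CSV exercises one mapping condition.

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        // Placeholder for the mapping function under test.
        public static class ReferenceDataMapper
        {
            public static string MapReferenceValue(string source)
            {
                // Replace with the real cross-reference lookup used by your map.
                return source;
            }
        }

        [TestClass]
        public class ReferenceDataMappingTests
        {
            // MSTest injects the data-driven context at run time.
            public TestContext TestContext { get; set; }

            [TestMethod]
            [DeploymentItem("RefDataCases.csv")]
            [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
                        "|DataDirectory|\\RefDataCases.csv", "RefDataCases#csv",
                        DataAccessMethod.Sequential)]
            public void MapsSourceValueToExpectedTargetValue()
            {
                // Each CSV row supplies one mapping condition: an input and its expected output.
                string sourceValue = TestContext.DataRow["SourceValue"].ToString();
                string expected = TestContext.DataRow["ExpectedValue"].ToString();

                string actual = ReferenceDataMapper.MapReferenceValue(sourceValue);

                Assert.AreEqual(expected, actual, "Mapping failed for input '{0}'", sourceValue);
            }
        }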

    Read the article

  • How to make a great functional specification

    - by sfrj
    I am going to start a little side project very soon, but this time I don't want to produce just the little UML domain model and use case diagrams I often make before programming; I am thinking about writing a full functional specification. Is there anybody with experience writing functional specifications who could recommend what I need to add to it? What would be the best way to start preparing it? Here are the topics that I think are most relevant: Purpose, Functional Overview, Context Diagram, Critical Project Success Factors, Scope (In & Out), Assumptions, Actors (Data Sources, System Actors), Use Case Diagram, Process Flow Diagram, Activity Diagram, Security Requirements, Performance Requirements, Special Requirements, Business Rules, Domain Model (Data Model), Flow Scenarios (Success, Alternate…), Time Schedule (Task Management), Goals, System Requirements, Expected Expenses. What do you think about those topics? Should I add something else, or maybe remove something?

    Read the article

  • How do I cross-compile my application for Ubuntu 12.04 armhf architecture on a Ubuntu 12.04 i386 host?

    - by Jonathan Cave
    I have written a large application. I can successfully compile the application in the following scenarios: in a native compilation for the i386 host running Ubuntu 12.04; natively on a PandaBoard running Ubuntu 12.04 (this takes a long time); and using QEMU and a chroot on the host PC for the armhf PandaBoard target (this takes a very long time). I would like to cross-compile the application on the i386 host to run on a target such as the PandaBoard, so that builds complete in a timely fashion. So far, attempts using the arm-linux-gnueabihf toolchain in the repositories have produced binaries that do not run correctly. At this stage, I have no plans to package the software. What is the recommended way to achieve a successful cross-compile?

    Read the article

< Previous Page | 10 11 12 13 14 15 16 17 18 19 20 21  | Next Page >