Search Results

Search found 25440 results on 1018 pages for 'agent based modeling'.


  • BizTalk host throttling – Singleton pattern and High database size

    - by S.E.R.
    Originally posted on: http://geekswithblogs.net/SERivas/archive/2013/06/30/biztalk-host-throttling-ndash-singleton-pattern-and-high-database-size.aspx

    I have worked for some days around the singleton pattern (for those unfamiliar with it, read this post by Victor Fehlberg) and have come across a few very interesting posts, among which one dealt with performance issues (here, also by Victor Fehlberg). Simply put: if you have an orchestration which implements the singleton pattern, then performance will continuously decrease as the orchestration receives and consumes messages, and that behavior is more obvious when the orchestration never ends (i.e. it keeps looping and never terminates or completes). As I experienced the same kind of problem (actually I was alerted by SCOM, which told me that the host was being throttled because of High database size), I thought it would be a good idea to dig a little bit and see what happens deep inside BizTalk, and thus understand the reasons for this behavior.

    NOTE: in this article I will focus on the High database size throttling condition. I will try and work on the other conditions in some not too distant future…

    Test conditions

    The singleton orchestration. For the purpose of this study, I have created the following orchestration, which is a very basic implementation of a singleton that piles up incoming messages, then does something else when a certain timeout has been reached without receiving another message.

    Throttling settings. I have two distinct hosts:
    - one that hosts the receive port (a basic FILE port): Ports_ReceiveHost
    - one that hosts the orchestration: ProcessingHost

    In order to emphasize the throttling mechanism, I have modified the throttling settings for each of these hosts as follows (all other parameters are set to the default value): [Throttling thresholds] Message count in database: 500 (default value: 50000).

    Evolution of performance counters when submitting messages

    Since we are investigating the High database size throttling condition, here are the performance counters that we should take a look at (all of them are in the BizTalk:Message Agent performance object):
    - Database size
    - High database size
    - Message delivery throttling state
    - Message publishing throttling state
    - Message delivery delay (ms)
    - Message publishing delay (ms)
    - Message delivery throttling state duration
    - Message publishing throttling state duration

    (If you are not used to Perfmon, I strongly recommend that you start using it right now: it is a wonderful tool that allows you to open the hood and see what is going on inside BizTalk – and other systems.)

    Database size

    It is quite obvious that we will start by watching the Database size and High database size counters, just to see when the first reaches the configured threshold (500) and when the second rings the alarm. NOTE: during this test I submitted 600 messages, one message every 10 ms, to see the evolution of the counters we have previously selected. It might not show very well on the screenshot, but here is what happened:
    - From 15:46:50 to 15:47:50, the database size for the Ports_ReceiveHost host (blue line) kept growing until it reached a maximum of 504.
    - At 15:47:50, the high database size alert fired.

    At first I was surprised by this result: why is it the database size of the receiving host that keeps growing, since it is the processing host that piles up messages? Actually, it makes total sense: this counter measures the size of the database queue that is being filled by the host, not the queue being consumed.
    Therefore, the high database size alert is raised on the host that fills the queue: Ports_ReceiveHost. More information is available on the Public MPWiki page.

    Now, looking at the Message publishing throttling state for the receiving host (green line), we can see that a throttling condition was reached at 15:47:50. We can also see that the Message publishing delay (ms) (blue line) began growing slowly from that point. All of this explains why performance keeps decreasing when a singleton keeps processing new messages: the database size grows, and once it has exceeded the Message count in database threshold, the host is throttled and the publishing delay keeps increasing.

    Digging further

    So, what happens to the database queue then? Is it flushed some day, or does it keep growing indefinitely? The real question being: will the host be throttled forever because of this singleton? To answer this question, I set the Message count in database threshold to 20 (this value is very low in order not to wait for too long, otherwise I certainly would have fallen asleep in front of my screen) and submitted 30 messages. The test was started at 18:26. At 18:56 (i.e. exactly 30 min later) the throttling stopped and the database size was divided by 2. Thirty minutes later again, the database size had dropped to almost zero. I guess I'll have to find some documentation and do some more testing before I sort this out! My guess is that some maintenance job is at work here, though I cannot tell which one.

    Digging even further

    If we take a look at the Message delivery throttling state counter for the processing host, we can see that this host was also throttled during the submission of the 600 documents. The value for the counter was 1, meaning that the Message delivery incoming rate for the host instance exceeds the Message delivery outgoing rate * the specified Rate overdrive factor (percent) value. We will see this another day… :)

    A last word

    Let's end this article with a warning: DO NOT CHANGE THE THROTTLING SETTINGS LIGHTLY! The temptation can be great to just bypass throttling by setting very high values for each parameter (or zero in some cases, which simply disables throttling). Nevertheless, always keep in mind that this mechanism is here for a very good reason: to prevent your BizTalk infrastructure from exploding! So whatever you do with those settings, do a lot of testing and benchmarking!
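
    (Side note: the counters above can also be sampled from a script rather than the Perfmon UI. A minimal PowerShell sketch, using the two host names from this test setup; adjust the instance names to your own hosts:)

        # Sample the BizTalk:Message Agent counters used in this article every 10s.
        # Instance names are the BizTalk host names (Ports_ReceiveHost, ProcessingHost).
        $counters = @(
            '\BizTalk:Message Agent(Ports_ReceiveHost)\Database size',
            '\BizTalk:Message Agent(Ports_ReceiveHost)\High database size',
            '\BizTalk:Message Agent(Ports_ReceiveHost)\Message publishing throttling state',
            '\BizTalk:Message Agent(ProcessingHost)\Message delivery throttling state'
        )
        Get-Counter -Counter $counters -SampleInterval 10 -Continuous |
            ForEach-Object { $_.CounterSamples | Select-Object Path, CookedValue }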

    Read the article

  • DBA Command line options

    - by Anthony Shorten
    There are a number of database utilities supplied with the installation of the Oracle Utilities Application Framework based products. These are typically run in interactive mode, where the utility prompts you for the values and then executes the required functionality. Did you know that the utilities also have command line options that allow you to run a utility in silent mode as well? You can access the command line options by specifying the -h option on the command line. Here is an example of the oragensec command line options:

        oragensec -d <Owner,OwnerPswd,DbName> -u <Database Users> -r <ReadRole,UserRole> -l <logfile> -h

    where:

        -d <Owner,OwnerPswd,DbName>  Database connect information for the target database, e.g. spladm,spladm,DB200ODB.
        -u <Database Users>          A comma-separated list of database users where synonyms need to be created, e.g. spluser,splread.
        -r <ReadRole,UserRole>       Optional. Names of database roles with read and read-write privileges. Default roles are SPL_READ and SPL_USER, e.g. spl_read,spl_user.
        -l <logfile>                 Optional. Name of the log file.
        -h                           Help.

    The command line options allow the DBA to automate the execution, either via a script or via some utility that can execute other utilities. This option applies to the majority of DBA utilities supplied with the product. Take a look at the others.
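
    (For instance, a silent-mode run could be scripted like this; a sketch reusing the example values from the help text above, not real credentials:)

        #!/bin/sh
        # Run oragensec non-interactively; the values are the samples
        # from the help text above.
        oragensec -d spladm,spladm,DB200ODB \
                  -u spluser,splread \
                  -r spl_read,spl_user \
                  -l /tmp/oragensec.log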

    Read the article

  • Best way to implement user-powered data validation

    - by vegetables
    I run a product recommendation engine and I'm hitting a few snags, so I'm looking to see if anyone has any recommendations on what I should do to minimize these issues. Here's how the site works: users come to the site and are presented with product recommendations based on some criteria. If a user knows of a product that is not in our system, they can add it by providing the product name and manufacturer. We take that information, and:

    1. Hit one API to gather all the product meta-data (and to validate the product spelling, etc.). If the product is not in this first API, we do not allow it into our system.
    2. Use the information from step 1 to hit another API for pricing information (gathered from many places online).

    For the sake of discussion, assume that I am searching both APIs in the most efficient/successful manner possible. For the most part, this works very well. I'd say ~80% of our data is perfectly accurate, but there are a few issues:

    - Sometimes the pricing API (step 2) doesn't have any information for the product. The way the pricing API is built, it will always return something (theoretically, the closest possible match), and there's no guarantee that the product name is spelled exactly the same way in both APIs, so there's no automated way of knowing if it's the right product.
    - When the pricing API finds the right product, it occasionally has outdated, or even invalid, pricing data (e.g. if it screen-scraped the wrong price from a website).

    Since the site was fairly small at first, I was able to manually verify every product that was added to the website. However, the site has grown to the point where this is taking several hours per day and is just not an efficient use of my time. So, my question is: aside from hiring someone (or getting an intern) to validate all the data manually, what would be the best system for letting my user base self-manage the data? Specifically, how can I allow users to edit the data while minimizing the risk of someone ambushing my website, or accidentally setting the data incorrectly?
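
    (To make the pipeline concrete, a hypothetical Python sketch of the intake flow described above; the two API client objects and the 0.9 similarity cut-off are invented for illustration, not part of the real system:)

        from dataclasses import dataclass
        from difflib import SequenceMatcher
        from typing import Optional

        @dataclass
        class Candidate:
            name: str
            price: Optional[float]
            needs_review: bool

        def add_product(name, manufacturer, metadata_api, pricing_api):
            # Step 1: validate and enrich against the metadata API; unknown -> reject.
            meta = metadata_api.lookup(name, manufacturer)
            if meta is None:
                return None
            # Step 2: the pricing API always returns *something*, so score how
            # closely its product name matches before trusting the price.
            quote = pricing_api.lookup(meta["name"])
            score = SequenceMatcher(None, meta["name"].lower(),
                                    quote["name"].lower()).ratio()
            ok = score > 0.9
            # Doubtful matches go to a review queue instead of straight to the site.
            return Candidate(name=meta["name"],
                             price=quote["price"] if ok else None,
                             needs_review=not ok)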

    Read the article

  • VMware ESXi - varying CPU time (CPU reservation)

    - by Tomo
    Hello! I'm running FreeBSD 7.2 under VMware ESXi 3.5. The host has 2 physical CPUs and the BSD box is currently the only running VM, with a single virtual CPU assigned to it. When measuring the CPU time of a specific program, I get very different results from run to run: processor usage is reported differently by VMware based on the system load. Is it possible to assign a constant share of a physical CPU to a specific VM? I would like the CPU time to be more or less constant. I tried setting a CPU reservation when configuring the VM in the VMware Infrastructure Client, but the CPU time still varies a lot. Thanks in advance!

    Read the article

  • Does a software architect/designer require more skill and intellect than a software engineer (implementation)?

    - by Amumu
    So I have heard that the positions for designing software and writing specs for developers to implement are ranked higher and paid more. I think many companies use the Software Engineer title for the person who implements software, meaning someone who uses tools and technologies to write the actual code. I know that in order to be a software architect, one needs to be good at implementation in order to have an architectural overview of a system using a set of specific technologies. This is different from my idea of a Software Engineer. My thinking is similar to the IEEE standard: a software engineer is an engineer who is capable of going from requirement analysis until the software is deployed, based on the SWEBOK (IEEE) – just look at the table of contents. The IEEE even has certificates for Software Engineering, since ABET (the Accreditation Board for Engineering and Technology) seems not to have an official qualification test for Software Engineers (although the IEEE is a member of ABET). The two certificates are the CSDA and CSDP. I intend to take these two examinations in the future to be qualified as a software engineer, although I am already working as one (in a junior position). On a side note, regarding the issues with the Software Engineer title, you can read the discussions "Just a Programmer" and "Just a Software Engineer"; the information that ABET does not accredit Software Engineering is in "Just a Software Engineer". On the other hand, why is the programmer/software engineer who writes code considered the lower-level position? Suppose two people have equal skills after the same years of experience, and one becomes a software architect while the other keeps focusing on the implementation aspect of Software Engineering (of course he also has the design skill to compose a system, since he's a software engineer as well, but perhaps less than the specialized software architect): how come the work of the Software Engineer is considered less complicated than that of the Software Architect? To write great code and turn a design into reality requires far greater skill than just understanding a particular language and a framework. I don't think the work of those who wrote, and contribute to, the Linux OS is a lower-level job or easier than conceptual design and writing specs. Can someone enlighten me?

    Read the article

  • Would you re-design completely under .Net?

    - by dboarman
    A very extensive application began as an Access-based system (for database storage). Forms were written in VB5 and/or VB6. As .Net became a fixture in the development community, certain modules were rewritten. This seems very awkward and potentially costly to maintain, because of the cross-technologies and the extra work to keep the two technologies happy with each other. The application uses a mix of OleDb and MySql. Would you think spending the time and resources to completely re-develop the application under .Net would be more cost effective? In an effort to develop a more stable application, wouldn't it make sense to use .Net? Or continue chasing Access bugs, adding new features in .Net (which may or may not create new bugs between .Net and Access), and rewriting old Access modules into .Net modules under time constraints that prevent proper design and development?

    Update: The application uses OleDb and MySql – I corrected my previous statement. Also, to lend further support to rewriting: I have since found out that when the "porting" to .Net began, the existing VBA/VB6 code was basically translated to the .Net equivalent. From my understanding, nothing was done to improve performance or take advantage of new libraries or technologies. In my opinion, this creates a very fragile and unstable application, and with every new update this becomes more and more visible. As a help desk technician, I have noticed an increase in problems reported. The customers using the software have noticed the increase in problems and are commenting on it.

    Read the article

  • Option Trading: Getting the most out of the event session options

    - by extended_events
    You can control different aspects of how an event session behaves by setting the event session options as part of the CREATE EVENT SESSION DDL. The default settings for the event session options are designed to handle most of the common event collection situations, so I generally recommend that you just use the defaults. Like everything in the real world though, there are going to be a handful of "special cases" that require something different. This post focuses on identifying the special cases and the correct use of the options to accommodate those cases.

    There is a reason it's called Default

    The default session options specify a total event buffer size of 4 MB with a 30 second latency. Translating this into human terms: our default behavior is that the system will start processing events from the event buffer when we reach about 1.3 MB of events or after 30 seconds, whichever comes first.

    Aside: What's up with the 1.3 MB? I thought you said the buffer was 4 MB. The Extended Events engine takes the total buffer size specified by MAX_MEMORY (4 MB by default) and divides it into 3 equally sized buffers. This is done so that a session can be publishing events to one buffer while other buffers are being processed. There are always at least three buffers; how to get more than three is covered later.

    Using this configuration, the Extended Events engine can "keep up" with most event sessions on standard workloads. Why is this? The fact is that most events are small, really small; on the order of a couple hundred bytes. Even when you start considering events that carry dynamically sized data (e.g. binary, text, etc.) or adding actions that collect additional data, the total size of the event is still likely to be pretty small. This means that each buffer can likely hold thousands of events before it has to be processed. When the event buffers are finally processed, there is an economy of scale achieved, since most targets support bulk processing of the events: they are processed at the buffer level rather than the individual event level. When all this is working together, it's more likely that a full buffer will be processed and put back into the ready queue before the remaining buffers (remember, there are at least three) are full.

    I know what you're going to say: "My server is exceptional! My workload is so massive it defies categorization!" OK, maybe you weren't going to say that exactly, but you were probably thinking it. The point is that there are situations that won't be covered by the default, but that's a good place to start, and this post assumes you've started there so that you have something to look at in order to determine if you do have a special case that needs different settings. So let's get to the special cases…

    What event just fired?! How about now?! Now?!

    If you believe the commercial adage from Heinz Ketchup (Heinz Slow Good Ketchup ad on YouTube), some things are worth the wait. This is not a belief held by most DBAs, particularly DBAs who are looking for an answer to a troubleshooting question fast. If you're one of these anxious DBAs, or maybe just a Program Manager doing a demo, then 30 seconds might be longer than you're comfortable waiting. If you find yourself in this situation, consider changing the MAX_DISPATCH_LATENCY option for your event session. This option will force the event buffers to be processed based on your time schedule.
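
    (A sketch of what that looks like; the session name, event and target here are stock examples, not a recommendation:)

        -- Flush event buffers at least every 5 seconds instead of the 30-second default.
        CREATE EVENT SESSION [low_latency_demo] ON SERVER
        ADD EVENT sqlserver.sql_statement_completed
        ADD TARGET package0.ring_buffer
        WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);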
    This option only makes sense for the asynchronous targets, since those are the ones where we allow events to build up in the event buffer – if you're using one of the synchronous targets, this option isn't relevant.

    Avoid forgotten events by increasing your memory

    Have you ever had one of those days where you keep forgetting things? That can happen in Extended Events too; we call it dropped events. To optimize for server performance, and to help ensure that Extended Events doesn't block the server, the engine will drop events that can't be published to a buffer because the buffer is full. You can determine if events are being dropped from a session by querying the dm_xe_sessions DMV and looking at the dropped_event_count field.

    Aside: Should you care if you're dropping events? Maybe not – think about why you're collecting data in the first place and whether you're really going to miss a few dropped events. For example, if you're collecting query duration stats over thousands of executions of a query, it won't make a huge difference to miss a couple of executions. Use your best judgment.

    If you find that your session is dropping events, it means that the event buffer is not large enough to handle the volume of events being published. There are two ways to address this problem. First, you could collect fewer events – examine your session to see if you are over-collecting. Do you need all the actions you've specified? Could you apply a predicate to be more specific about when you fire the event? Assuming the session is defined correctly, the next option is to change the MAX_MEMORY option to a larger number. Picking the right event buffer size might take some trial and error, but a good place to start is the number of dropped events compared to the number you've collected.

    Aside: There are three different behaviors for dropping events that you specify using the EVENT_RETENTION_MODE option. The default is to allow single event loss, and you should stick with this setting, since it is the best choice for keeping the impact on server performance low. You'll be tempted to use the setting that loses no events (NO_EVENT_LOSS) – resist this urge, since it can result in blocking on the server. If you're worried that you're losing events, you should be increasing your event buffer memory as described in this section.

    Some events are too big to fail

    A less common reason for dropping an event is when an event is so large that it can't fit into the event buffer. Even though most events are going to be small, you might find a condition that occasionally generates a very large event. You can determine if your session is dropping large events by looking at the dm_xe_sessions DMV once again, this time checking largest_event_dropped_size. If this value is larger than the size of your event buffer [remember: the size of your event buffer, by default, is MAX_MEMORY / 3], then you need a large event buffer. To specify a large event buffer, set the MAX_EVENT_SIZE option to a value large enough to fit the largest event dropped, based on data from the DMV. When you set this option, the Extended Events engine will create two buffers of this size to accommodate these large events. As an added bonus (no extra charge), the large event buffer will also be used to store normal events in the cases where the normal event buffers are all full and waiting to be processed. (Note: this is just a side-effect, not the intended use. If you're dropping many normal events, then you should increase your normal event buffer size.)
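
    (Both of the checks described above come from the same DMV; a sketch, reusing the example session name from earlier:)

        -- Check a running session for dropped events and oversized events.
        SELECT s.name,
               s.dropped_event_count,            -- events lost because buffers were full
               s.largest_event_dropped_size,     -- compare against MAX_MEMORY / 3
               s.regular_buffer_size,
               s.total_regular_buffers
        FROM sys.dm_xe_sessions AS s
        WHERE s.name = 'low_latency_demo';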
    Partitioning: moving your events to a sub-division

    Earlier I alluded to the fact that you can configure your event session to use more than the standard three event buffers – this is called partitioning, and it is controlled by the MEMORY_PARTITION_MODE option. The result of setting this option is fairly easy to explain, but knowing when to use it is a bit more art than science. First the science…

    You can configure partitioning in three ways: None, Per NUMA Node & Per CPU. This specifies the location where sets of event buffers are created, with fairly obvious implications. There are rules we follow for sub-dividing the total memory (specified by MAX_MEMORY) between all the event buffers that are specific to the mode used:

        None: 3 buffers (fixed)
        Node: 3 * number_of_nodes
        CPU:  2.5 * number_of_cpus

    Here are some examples of what this means for different Node/CPU counts:

        Configuration      | None      | Node       | CPU
        2 CPUs, 1 Node     | 3 buffers | 3 buffers  | 5 buffers
        6 CPUs, 2 Nodes    | 3 buffers | 6 buffers  | 15 buffers
        40 CPUs, 5 Nodes   | 3 buffers | 15 buffers | 100 buffers

    Aside: Buffer size on multi-processor computers. As the number of Nodes or CPUs increases, the size of the event buffer gets smaller, because the total memory is sub-divided into more pieces. The defaults will hold up to this for a while, since each buffer set is holding events only from the Node or CPU that it is associated with, but at some point the buffers will get too small and you'll either see events being dropped or you'll get an error when you create your session because you're below the minimum buffer size. Increase the MAX_MEMORY setting to an appropriate number for the configuration.

    The most likely reason to start partitioning is going to be related to performance. If you notice that running an event session is impacting the performance of your server beyond a reasonably expected level [yes, there is a reasonably expected level of work required to collect events], then partitioning might be an answer. Before you partition, you might want to check a few other things:

    - Is your event retention set to NO_EVENT_LOSS and causing blocking? (I told you not to do this.) Consider changing your event loss mode or increasing memory.
    - Are you over-collecting and causing more work than necessary? Consider adding predicates to events or removing unnecessary events and actions from your session.
    - Are you writing the file target to the same slow disk that you use for TempDB and your other high-activity databases? <kidding> <not really> It's always worth considering the end-to-end picture – if you're writing events to a file, you can be impacted by I/O, network; all the usual stuff.

    Assuming you've ruled out the obvious (and not so obvious) issues, there are performance conditions that will be addressed by partitioning. For example, it's possible to have a successful event session (e.g. no dropped events) but still see a performance impact because you have many CPUs all attempting to write to the same free buffer and having to wait in line to finish their work. This is a case where partitioning would relieve the contention between the different CPUs and likely reduce the performance impact caused by the event session. There is no DMV you can check to find these conditions – sorry – that's where the art comes in. This is largely a matter of experimentation. On the bright side, you probably won't need to worry about this level of detail all that often. The performance impact of Extended Events is significantly lower than what you may be used to with SQL Trace.
    You will likely only care about the impact if you are trying to set up a long-running event session that will be part of your everyday workload – sessions used for short-term troubleshooting will likely fall into the "reasonably expected impact" category.

    Hey buddy – I think you forgot something

    OK, there are two options I didn't cover: STARTUP_STATE & TRACK_CAUSALITY. If you want your event sessions to start automatically when the server starts, set the STARTUP_STATE option to ON. (Now there is only one option I didn't cover.) I'm going to leave causality for another post, since it's not really related to session behavior; it's more about event analysis.

    - Mike

    Read the article

  • What makes a game a game vs something else like a puzzle or a toy?

    - by Shannon John Clark
    Famously, the Sims and similar games have been described by some designers as toys, not "really" games. I'm curious whether there is a good answer to what makes something a game. For example, many companies sell Sudoku games – EA has an iPhone one, IronSudoku offers a great web-based one, and there are countless others on most platforms. Many newspapers publish Sudoku puzzles in their print editions and often online. What differentiates a game from a puzzle? (Or are all Sudoku "games" misnamed?) I'm not convinced there is a simple or easy answer – but I'd love to be proven wrong. I've seen some definitions that emphasize "rules" as core to something being a game (vs. "real life"), but puzzles have rules as well – as do many other things. I'm open to answers that either focus only on computer games (on any platform) or expand to include games and gameplay across many platforms. Here too, I'm not fully convinced the lines are clear: is a "game" of D&D played over a virtual tabletop with computer dice rollers and video & audio chat a computer game, or something else? (I'd lean towards something else – but where do you draw that line?)

    Read the article

  • .Net Framework corrupted

    - by Samsudeen B
    Hi, we are facing a .NET Framework corruption problem for one of our clients, with the following environment. OS: Windows 2008 Server SP2; Framework: .NET Framework 3.5 SP1. Application details: Database: SQL Server 2008; Server: WCF-hosted web service; Client: WPF-based UI. Problem: the config files inside "..\Windows\Microsoft.NET\Framework\v2.0.50727\CONFIG" are suddenly deleted, and our application no longer works. We are not able to repair .NET or run SQL Server. The only option is to restore an earlier image of that machine. Any help is much appreciated. sam

    Read the article

  • More FlipBoard Magazines: Azure, XAML, ASP.NET MVC & Web API

    - by dwahlin
    In a previous post I introduced two new FlipBoard magazines that I put together, The AngularJS Magazine and The JavaScript & HTML5 Magazine. FlipBoard magazines provide a great way to keep content organized using a magazine-style format, as opposed to trudging through multiple unorganized bookmarks or boring pages full of links. I think they're really fun to read through as well. Based on feedback and the surprising popularity of the first two magazines, I've decided to create some additional magazines on topics I like, such as The Azure Magazine, The XAML Magazine and The ASP.NET MVC & Web API Magazine. Click on a cover to get to the magazines using your browser. To subscribe to a given magazine you'll need to create a FlipBoard account (not required to read the magazines, though), which requires an iOS or Android device (the Windows Phone 8 app is coming soon, they say). If you have a post or article that you think would be a good fit for any of the magazines, please tweet the link to @DanWahlin and I'll add it to my queue to review. I plan to be pretty strict about keeping articles "on topic" and focused. [Covers: The Azure Magazine, The XAML Magazine, The ASP.NET MVC & Web API Magazine, The AngularJS Magazine, The JavaScript & HTML5 Magazine]

    Read the article

  • Upgrade OpenSSL 0.9.8k to OpenSSL 1.0.1c on Ubuntu 10.04

    - by Nina
    We're currently using Ubuntu 10.04 and, based on the PCI compliance results, we've been told to upgrade our OpenSSL. I attempted to do this using these references: http://sandilands.info/sgordon/upgrade-latest-version-openssl-on-ubuntu and http://www.lunarforums.com/dedicated_web_hosting_at_lunarpages/upgrading_openssl-t35015.0.html. Unfortunately, they didn't work for me, and when I attempted to remove the old version prior to installation, it looks like it broke a few things in the system. The article from Steven Gordon seemed like it would work for me, but when I ran the openssl version command, it still reported the old version. I was wondering if anyone has any suggestions on what I should do. Fix: after following the steps from Steven Gordon, make sure you restart Apache and/or restart your computer (I did both, but I'm sure a simple restart will fix it right up).
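
    (For reference, a minimal sketch of that last step on a stock Ubuntu 10.04 box; the init script path assumes the standard Apache package:)

        # Restart Apache so it picks up the freshly installed OpenSSL,
        # then confirm the version the system now reports.
        sudo /etc/init.d/apache2 restart
        openssl version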

    Read the article

  • Verbosity Isn’t Always a Bad Thing

    - by PSteele
    There was a message posted to the Rhino.Mocks forums yesterday about verifying a single parameter of a method that accepted 5 parameters. The code looked like this:

        [TestMethod]
        public void ShouldCallTheAvanceServiceWithTheAValidGuid()
        {
            _sut.Send(_sampleInput);
            _avanceInterface.AssertWasCalled(x => x.SendData(
                Arg<Guid>.Is.Equal(Guid.Empty),
                Arg<string>.Is.Anything,
                Arg<string>.Is.Anything,
                Arg<string>.Is.Anything,
                Arg<string>.Is.Anything));
        }

    Not the prettiest code, but it does work. I was going to reply that he could use the "GetArgumentsForCallsMadeOn" method to pull out an array that would contain all of the arguments. A quick check of "args[0]" would be all that he needed. But then Tim Barcz replied with the following: "Just to help allay your fears a bit... this verbosity isn't always a bad thing. When I read the code, based on the syntax you have used, I know that for this particular test no parameters matter except the first... extremely useful in my opinion." An excellent point! We need to make sure our unit tests are as clear as our code.
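
    (For completeness, the GetArgumentsForCallsMadeOn alternative would have looked roughly like this; a sketch against the same hypothetical _avanceInterface mock, assuming the SendData signature shown above:)

        // Requires: using System; using System.Collections.Generic; using Rhino.Mocks;
        // Pull the recorded arguments (one object[] per call) and assert on the first.
        IList<object[]> calls = _avanceInterface.GetArgumentsForCallsMadeOn(
            x => x.SendData(Guid.Empty, null, null, null, null),
            options => options.IgnoreArguments());
        Assert.AreEqual(Guid.Empty, (Guid)calls[0][0]);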

    Read the article

  • Lessons from a SAN Failure

    - by Bill Graziano
    At 1:10 AM Sunday morning, the main SAN at one of my clients suffered a "partial" failure. Partial means that the SAN was still online and functioning, but the LUNs attached to our two main SQL Servers "failed". Failed means that SQL Server wouldn't start and the MDF and LDF files mostly showed a zero file size. But they were online and responding, and most other LUNs were available. I'm not sure how SANs know to fail at 1 AM on a Saturday night, but they seem to. From a personal standpoint this worked out poorly: I was out with friends and after more than a few drinks. From a work standpoint this was about the best time to fail you could imagine. Everything was running well before Monday morning. But it was a long, long Sunday. I started tipsy, got tired, and ended up hung over later in the day. Note to self: try not to go out drinking right before the SAN fails.

    This caught us at an interesting time. We're in the process of migrating to an entirely new set of servers, so some things were partially moved. This made it difficult to follow our procedures as cleanly as we'd like. The benefit was that we had much better documentation of everything on the server. I would encourage everyone to really think through the process of implementing your DR plan and document as much as possible. Following a checklist is much easier than trying to remember at night, under pressure, in a hurry, after a few drinks.

    I had a series of estimates on how long things would take. They were accurate for any single server failure. They weren't accurate for a SAN failure that took two servers down. This wasn't bad, but we should have communicated better.

    Don't forget how many things are outside the database. Logins, linked servers, DTS packages (yikes!), jobs, Service Broker, DTC (especially DTC), database triggers and any objects in the master database are all things you need backed up. We'd done a decent job on this and didn't find significant problems here. That said, this still took a lot of time, and there were many annoyances as a result. Small settings like a login's default database had a big impact on whether an application could run. This is probably the single biggest area of concern when looking to recreate a server. I'd encourage everyone to go through every single node of SSMS and look for user-created objects or settings outside the database.

    Script out your logins with the proper SID and already-encrypted passwords, and keep that script updated. This makes life so much easier. I used an approach based on KB246133 that worked well (a bare-bones sketch of the idea follows at the end of this entry). I'll get my scripts posted over the next few days.

    The disaster can cause your DR process to fail in unexpected ways. We have a job that scripts out all logins and role memberships and writes them to a file. This runs on the DR server and pulls from the production server. Upon opening the file, I found that the contents were a "server not found" error. Fortunately we had other copies and didn't need to try and restore the master database. This now runs on the production server and pushes the script to the DR site. Soon we'll get it pushed to our version control software.

    One of the biggest challenges is keeping your DR resources up to date. Any server change (new linked server, new SQL Server Agent job, etc.) means that your DR plan (and scripts) is out of date. It helps to automate the generation of these resources if possible.

    Take time now to test your database restore process. We test ours quarterly. If you have a large database, I'd also encourage you to invest in a compressed backup solution. Restoring backups was the single largest consumer of time during our recovery. And yes, there's a database mirroring solution planned in our new architecture.

    I didn't have much involvement in things outside SQL Server, but this caused many, many things to change in our environment. Many applications today aren't just executables or web sites. They are a combination of those plus network infrastructure, reports, network ports, IP addresses, DTS and SSIS packages, batch systems and many other things. These all needed a little bit of attention to make sure they were functioning properly.

    Profiler turned out to be a handy tool. I started a trace for failed logins and kept that running. That let me fix a number of problems before people were able to report them. I also ran traces to capture exceptions. This helped identify problems with linked servers.

    Overall, the thing that gave me the most problems was linked servers. In order for a linked server to function properly, you need to be pointed to the right server, have the proper login information, have the network routes available and have MSDTC configured properly. We have a lot of linked servers, and this created many failure points. Some of the older linked servers used IP addresses and not DNS names, which meant we had to go in and touch all those linked servers when the servers moved.
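
    (The bare-bones sketch promised above: KB246133's sp_help_revlogin automates the real work; this just shows where the SID and password hash come from:)

        -- Everything needed to recreate a SQL login with its original SID and
        -- password hash (CREATE LOGIN ... WITH PASSWORD = 0x... HASHED, SID = 0x...).
        SELECT p.name,
               p.sid,
               LOGINPROPERTY(p.name, 'PasswordHash') AS password_hash
        FROM sys.server_principals AS p
        WHERE p.type = 'S'              -- SQL logins only
          AND p.name NOT LIKE '##%';    -- skip internal certificate-based logins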

    Read the article

  • MS Expression Web 4 SuperPreview – Big Disappointment

    - by smehaffie
    I just downloaded Expression 4 and expected to see some improvements in the Web 4 SuperPreview application. The one main function I was expecting in this release is the ability to enter data and click on links, so that all the pages of a site can be assessed. There are many use cases where this functionality is needed, and quite a few people were vocal about it when MS first released the application:

    1) Where you have to log in to a site to access either all of the content or some of the content.
    2) Where you have to enter data in a certain order and cannot go to the next page until the previous page's data is filled out (payment process, storefront, etc.).
    3) Where you just want to make sure things are displayed correctly based on the data entered (validation messages, etc.).
    4) Where you need to make sure the links go to the right page in all the different browsers. I have seen scenarios where links worked fine in all but one browser, or where for some reason the text showed on screen but was not a clickable link.

    IMO this application is a great idea, but until MS fixes the above issue and adds the functionality described, the SuperPreview is totally worthless unless you need to test a totally static site that does not require any user input at all to get access to the content. There is no reason this feature should not have been in this release, and it should have been a priority to make sure it was. Let me know how you feel about the new version of the Web 4 SuperPreview application. Did MS really miss the target by not adding this functionality, or do I think it is a bigger deal than it really is? If you are actively using SuperPreview, please post how you are using it and the type of sites you are using it on.

    Read the article

  • Photoshop Retro Vintage Design Tutorials

    - by Aditi
    Gone are the days when designers only wanted to create glossy, Web 2.0, gradient-rich website designs. Nowadays designers are coming up with rugged, retro and vintage themes for their website designs: colorful or subtle, with that worn-out look, the website seems like a masterpiece. It is not hard to pick up the Photoshop techniques needed to master the art of making themes that are retro and vibrant. We have compiled a list of tutorials you might like to learn from... the rest is in your hands and creativity.

    Photochrom Vintage Postcard effect: turn your high-definition photos into vintage postcards and use them in your website concepts. Learn More

    Add Retro Look to your Images: give that 1970s retro look to your images and web concepts. It's a very easy process using patterns, brushes, colors or gradients, layer modes and variable opacity. Learn More

    Brushed metal effect, just like World War airplane textures: a one-of-a-kind Photoshop tutorial that teaches how to use noise and blur filters to create a brushed metal effect, unlike other gradient-based effects; it also covers a few layer styles to create an airplane graphic. Learn More

    Transform a New Image into an Illustration, Retro Poster Style: with the help of this tutorial you can create brilliant poster-style or illustrative images and concepts for your new website. This tutorial is a superb example of image enhancement and creative use of blending options in Photoshop. Learn More

    Retro Neon Style Text Tutorial: just like the old days, the rainbow neon curvy text format seen on many posters can now be made for use on your website. This tutorial gives you an easy step-by-step procedure. Learn More

    Retro Dotted Photo Tutorial: find out how to make a dotted poster of your image, pure retro feel. Learn More

    Read the article

  • Validating SSL clients using a list of authorised certificates instead of a Certificate Authority

    - by Gavin Brown
    Is it possible to configure Apache (or any other SSL-aware server) to only accept connections from clients presenting a certificate from a pre-defined list? These certificates may be signed by any CA (and may be self-signed). A while back I tried to get client certificate validation working in the EPP system of the domain registry I work for. The EPP protocol spec mandates use of "mutual strong client-server authentication". In practice, this means that both the client and the server must validate the certificate of the other peer in the session. We created a private certificate authority and asked registrars to submit CSRs, which we then signed. This seemed to us to be the simplest solution, but many of our registrars objected: they were used to obtaining a client certificate from a CA, and submitting that certificate to the registry. So we had to scrap the system. I have been trying to find a way of implementing this system in our server, which is based on the mod_epp module for Apache.
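
    (One pattern that is often suggested for exactly this situation, sketched here for mod_ssl as an illustration rather than a vetted solution: accept any certificate at the TLS layer, then have the application match the presented certificate against the pre-registered list:)

        # Sketch: accept a client certificate without requiring a known CA.
        SSLVerifyClient optional_no_ca
        # Export the certificate so the application can compare it (or its
        # fingerprint) against the allow-list and reject non-matching peers.
        SSLOptions +StdEnvVars +ExportCertData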

    Read the article

  • What's the most efficient way to reclaim disk space after deleting lots of data from a database on Sybase ASE 15?

    - by Ernie Longmire
    As I understand it, based on some research but zero real-world experience with Sybase ASE, the only way to reclaim disk space once it's been allocated to a database is to export that database, create a new DB with the same schema, and reload all the exported data to the new database. Is this correct, or is there some other method? Then: assuming the above is correct and a full export-recreate-reload is required, what's the most efficient way to do that? Are there tools that will automate all or part of that process? I'm being told we would have to write separate bcp export and import commands for each and every object in the database, which if true sounds easily scriptable by someone who knows Sybase ASE well enough. (I don't.) This seems to me like a really basic housekeeping task, and it feels like I'm missing something obvious.
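
    (If it does come down to scripting bcp, a sketch of the shape it could take; the table list, server names and the ddlgen step are placeholders, not tested Sybase advice:)

        #!/bin/sh
        # Export each table, recreate the database from the ddlgen-generated
        # schema, then reload the data into the new server.
        TABLES="customers orders order_lines"     # placeholder table list
        for T in $TABLES; do
            bcp mydb..$T out /dump/$T.bcp -c -Usa -P"$PASSWORD" -SOLD_ASE
        done
        # ...create the new database here from the ddlgen output...
        for T in $TABLES; do
            bcp mydb..$T in /dump/$T.bcp -c -Usa -P"$PASSWORD" -SNEW_ASE
        done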

    Read the article

  • .htaccess allow from hostname?

    - by Mikey B
    Ubuntu 9.10, Apache 2. Hi guys, long story short: I need to restrict access to a certain part of my web site based on a dynamic source IP address that changes every now and then. Historically, I've just added the following to .htaccess:

        order deny,allow
        deny from all
        # allow my dynamic IP address
        allow from <dynamic ip>

    But the problem is that I'll have to make this change manually every time the IP changes. Ideally I'd like to specify a hostname instead, something like:

        order deny,allow
        deny from all
        # allow my host
        allow from hostname.whatever.local

    That doesn't seem to have worked, though: I get an error 403, access forbidden. Does .htaccess not support hostnames?

    Read the article

  • Error: "failed to connect to wpa_supplicant - wpa_ctrl_open no such file or directory" using netcfg with wpa_supplicant

    - by user1576628
    I'm trying to set up netcfg so that I can finish installing Arch Linux (using the instructions from the Beginners' Guide and netcfg), and I passed over what was meant to be a short step: open wifi-menu, select network, enter password. After multiple attempts, I decided to edit the profile manually, which yielded no improvement. Eventually I decided to use netcfg with the more familiar wpa_supplicant. My /etc/wpa_supplicant.conf file is as follows:

        network={
            ssid="my_ssid"
            #psk="my_wireless_passcode"
            psk="my_wireless_passcode_hex"
        }

    (Replacing the generic names with my actual ssid and psk.) And my /etc/network.d/wpa_suppl file reads:

        CONNECTION='wireless'
        DESCRIPTION='A wpa_supplicant configuration based wireless connection'
        INTERFACE='wlan0'
        SECURITY='wpa-config'
        WPA_CONF='/etc/wpa_supplicant.conf'
        IP='dhcp'

    My ssid is not hidden, wlan0 is the proper interface, and wpa_supplicant works fine on its own, but using netcfg wpa_suppl, it returns "failed to connect to wpa_supplicant - wpa_ctrl_open no such file or directory" about twelve times before finally telling me the authentication failed. What can I do to fix this?
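
    (One detail worth checking, since wpa_ctrl_open is the call that opens wpa_supplicant's control socket: the conf above defines no ctrl_interface, so front ends have no socket to connect to. A sketch of the missing stanza; the path and group are assumptions based on a typical Arch setup:)

        # Added at the top of /etc/wpa_supplicant.conf: the control socket
        # that front ends like netcfg and wpa_cli connect to via wpa_ctrl_open.
        ctrl_interface=DIR=/run/wpa_supplicant GROUP=wheel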

    Read the article

  • Installing Ubuntu One on Ubuntu 11.10 server

    - by Yar
    I have installed Ubuntu One on an Ubuntu Server 11.10 based on these instructions: How do I configure Ubuntu One on a 11.10 server? Everything went smoothly during installation. However, when I try the command u1sdtool --start to bring the daemon up, I get the following error:

        /usr/lib/python2.7/dist-packages/gtk-2.0/gtk/__init__.py:57: GtkWarning: could not open display
          warnings.warn(str(e), _gtk.Warning)
        Unhandled Error
        Traceback (most recent call last):
        dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NotSupported: Unable to autolaunch a dbus-daemon without a $DISPLAY for X11

    Does anyone have a clue how to solve this issue?
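
    (One avenue to try, since the traceback shows D-Bus failing to autolaunch a session bus without X11: start a session bus by hand first. A sketch, not a verified fix:)

        # Give the process a session D-Bus so it does not try to autolaunch via X11.
        eval "$(dbus-launch --sh-syntax)"
        u1sdtool --start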

    Read the article

  • Friday Fun: Omega Crisis

    - by Mysticgeek
    Friday is here once again and it's time to play a fun flash game on company time. Today we take a look at the space shooter Omega Crisis. At the start of the game you're given the basic story of the game, defending the space outpost, and instructions on how to play. Controls are easy: just target the enemy and use the left mouse button to fire. After each level you're shown the results and how many Tech Points you've earned. The more Tech Points you earn, the better your chance of upgrading your weapons and base defense before the next level. You can also go into Manage Mode by hitting the Space bar, and select gunners and other types of weapons to help defend the outpost. Choose your mission from the timeline after successfully completing a mission. You can also use A, W, D, S to move around the map and see exactly where the enemy ships are coming from. This makes it easier to destroy them before they get too close to your base. This game is a lot of fun and is similar to various "Desktop Defense" type games. If you're looking for a fun way to waste the afternoon, and not look at TPS reports, Omega Crisis can get you through until the whistle blows. Play Omega Crisis

    Read the article

  • STUN-server using AWS

    - by Yrlec
    We are about to hire some consultants to help us set up an AWS-based server environment that will enable us to handle NAT traversal for our P2P application. One important part of the NAT traversal infrastructure is the STUN server (http://en.wikipedia.org/wiki/STUN). They just told us that in order for the STUN server to work, you must have two public static IP addresses pointing to the same server. To be more specific, they said this: "It appears that you need 2 static IPs for each server for the STUN to work. Please note, these IPs have to be put into the configuration file; therefore, each time you restart the instance you have to make sure you either use the same IPs or you have to update the configuration. As you plan to use AWS for your installation, please confirm that you can have 2 static IPs for each server." Our question is whether this is possible using AWS and, if so, how? If not, do you know any other way to set up a STUN server using AWS?

    Read the article

  • Oracle BPM enable BAM by Peter Paul

    - by JuergenKress
    BPMN processes created in the BPM Suite can be monitored by standardized dashboards in the BPM Workspace. Besides that, there are default views to export Oracle BPM metrics to a data warehouse. And there is another option: BAM, Business Activity Monitoring. BAM takes the monitoring of BPMN processes one step further: it allows you to create more advanced dashboards and even real-time alerts, and enables you to make decisions based on real-time information gathered from your running processes. With BPMN processes you can use the standard Business Indicators that the BPM Suite offers and use them with BAM without much extra effort. However, you have to enable BAM in your BPM processes first. Read the full article here. SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Looking for a manual/how-to for installing a Canon iRunner 210s network printer

    - by Ssvarc
    A client has a Canon iRunner 210S and is complaining about slow print speeds. I've searched for an install manual online but came up empty. A call to Canon technical support referred me to their dealer network for service, as well as informing me that there isn't a manual available on their website. Does anyone know how to install these and/or have a link to a manual? The current setup has the printers on a LAN Manager printer port instead of on straight IP ports. Is this normal for this printer? And what exactly are LAN Manager ports (based on my research, they appear to be a way to authenticate a connection)?

    Read the article

  • Need some advice regarding collision detection with the sprite changing its width and height

    - by Frank Scott
    So I'm messing around with collision detection in my tile-based game, and everything works fine and dandy using this method. However, now I am trying to implement sprite sheets so my character can have walking and jumping animations, and I'd like each frame to be able to have a variable size. I want collision detection to be accurate, and during a jumping animation the sprite's height will be shorter (because of the calves meeting the hamstrings). Again, this also works fine at the moment: I can get the character to animate properly each frame and cycle through animations. The problems arise when the width and height of the character change. Often its position will be corrected by the collision detection system and the character will be rubber-banded to random parts of the map, or even go outside the map bounds. For some reason, with the linked collision detection algorithm, when the width or height of the sprite is changed on the fly, the entire algorithm breaks down. The solution I have found so far is to have a single width and height for the sprite that remains constant, and only adjust the source rectangle for drawing (see the sketch below). However, I'm not sure exactly what to set as the sprite's constant bounding box, because it varies so much with the different animations. So now I'm not sure what to do. I'm toying with the idea of pixel-perfect collision detection, but I'm not sure if it would really be worth it. Does anyone know how Braid does its collision detection? My game is also a 2D sidescroller, and I was quite impressed with how it was handled in that game. Thanks for reading.
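
    (An XNA-style sketch of that constant-bounds idea; the class and member names are invented for the example:)

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        class Player
        {
            public Vector2 Position;          // top-left of the collision box
            public Rectangle SourceRect;      // current frame on the sprite sheet

            // One fixed collision box, independent of the animation frame:
            // the tile collision code always sees the same AABB.
            static readonly Point CollisionSize = new Point(24, 60);

            public Rectangle BoundingBox
            {
                get
                {
                    return new Rectangle((int)Position.X, (int)Position.Y,
                                         CollisionSize.X, CollisionSize.Y);
                }
            }

            public void Draw(SpriteBatch batch, Texture2D sheet)
            {
                // Anchor the variable-size frame to the bottom-centre of the
                // fixed box so the feet stay planted when a jump frame is shorter.
                Vector2 dest = new Vector2(
                    Position.X + (CollisionSize.X - SourceRect.Width) / 2f,
                    Position.Y + CollisionSize.Y - SourceRect.Height);
                batch.Draw(sheet, dest, SourceRect, Color.White);
            }
        }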

    Read the article
