Search Results

Search found 5152 results on 207 pages for 'scheduled tasks'.


  • RSAT and double accounts

    - by Ryaner
    Since we are looking at migrating our domain admins to non-domain-admin accounts, using runas for admin tasks, a discussion has begun: how do others use RSAT with runas? I know you can Shift+Right-click and "Run as different user" to launch a tool with admin rights, but it loses its icon on the taskbar. The question has also been put: why does Microsoft release the RSAT tools if it recommends admins work from non-domain-admin accounts? Edit: Further to this, some of the initial testing with RSAT via "Run as different user" hasn't worked out well. A few of the options don't function in Hyper-V Manager and Failover Cluster Manager.
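    For reference, the same thing can be scripted from a command prompt; a sketch (DOMAIN\admin-jdoe and the console names are placeholders for your own):

        :: Prompts for the admin account's password, then launches the console
        runas /user:DOMAIN\admin-jdoe "mmc dsa.msc"

        :: /netonly applies the credentials to network access only, which can
        :: behave better for purely remote consoles
        runas /netonly /user:DOMAIN\admin-jdoe "mmc virtmgmt.msc"

    Note this launches fine but has the same taskbar-icon limitation the asker describes.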


  • Ubuntu's garbage collection cron job for PHP sessions takes 25 minutes to run, why?

    - by Lamah
    Ubuntu has a cron job set up which looks for and deletes old PHP sessions:

        # Look for and purge old sessions every 30 minutes
        09,39 * * * *   root  [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] \
            && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f \
               -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2> /dev/null \; -delete

    My problem is that this process takes a very long time to run, with lots of disk I/O. On my CPU usage graph, the cleanup running is represented by teal spikes. At the beginning of the period, PHP's cleanup jobs were scheduled at the default times of 09 and 39 minutes past the hour. At 15:00 I removed the 39-minute entry from cron, so a cleanup job twice the size now runs half as often (the peaks become twice as wide and half as frequent); the graphs for I/O time and disk operations show the same pattern. At the peak, when there were about 14,000 sessions active, the cleanup runs for a full 25 minutes, apparently using 100% of one CPU core and what seems to be 100% of the disk I/O for the entire period. Why is it so resource-intensive? An ls of the session directory /var/lib/php5 takes just a fraction of a second, so why does it take a full 25 minutes to trim old sessions, and is there anything I can do to speed this up? The filesystem for this device is currently ext4, running on Ubuntu Precise 12.04, 64-bit. EDIT: I suspect the load is due to the unusual fuser step (I'd expect a simple rm to be a damn sight faster than the performance I'm seeing). I'm going to remove the use of fuser and see what happens.
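    For comparison, here is the same job with the fuser step dropped (an untested variant of the cron line above; fuser is there to avoid deleting sessions some process still has open, so removing it trades that safety for speed):

        09,39 * * * *   root  [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] \
            && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f \
               -cmin +$(/usr/lib/php5/maxlifetime) -delete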


  • Linux QoS (Skype / BitTorrent / SIP / HTTP priority)

    - by Andre
    We are configuring a Linux box that will act as the internet gateway for an office of 30-50 computers, using iptables/HTB for traffic shaping. Is there a way to match traffic at layer 7? It's easy to identify traffic by TCP/UDP ports (like SIP and HTTP), but what about Skype and BitTorrent? It was a surprise to me that there is no powerful, mature solution for tasks like this. All I found was the l7-filter kernel patch (http://l7-filter.clearfoundation.com/), but it appears to be no longer supported, and it won't compile against modern Linux kernels. The only other option I found was a Cisco router. Are there other ways to identify and shape Skype and BitTorrent traffic?
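    For the port-matchable protocols, the usual pattern is iptables marks feeding HTB classes; a minimal sketch (eth0, the rates, and the ports are assumptions):

        # Root HTB qdisc; unmatched traffic falls into class 1:30
        tc qdisc add dev eth0 root handle 1: htb default 30
        tc class add dev eth0 parent 1:  classid 1:1  htb rate 10mbit
        tc class add dev eth0 parent 1:1 classid 1:10 htb rate 2mbit ceil 10mbit prio 0   # SIP
        tc class add dev eth0 parent 1:1 classid 1:20 htb rate 6mbit ceil 10mbit prio 1   # HTTP
        tc class add dev eth0 parent 1:1 classid 1:30 htb rate 1mbit ceil 10mbit prio 2   # everything else

        # Mark by port, then map marks to classes
        iptables -t mangle -A POSTROUTING -o eth0 -p udp --dport 5060 -j MARK --set-mark 10
        iptables -t mangle -A POSTROUTING -o eth0 -p tcp --dport 80   -j MARK --set-mark 20
        tc filter add dev eth0 parent 1: protocol ip handle 10 fw flowid 1:10
        tc filter add dev eth0 parent 1: protocol ip handle 20 fw flowid 1:20

    The hard part remains Skype and BitTorrent, which deliberately avoid fixed ports; identifying those is exactly what l7-filter (or DPI hardware) was built for.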


  • How do I back up Hyper-V VMs with Windows Server backup on Windows Server 2008 R2?

    - by Chris
    I've searched this site and Google, and I CAN find information about how to back up Hyper-V virtual machines using Windows Server Backup from the Hyper-V host on Windows Server 2008: you set up a registry key to enable the Hyper-V VSS writer, and then you can take online backups of your VMs. However, all the information I have found is about a year old, and none of it has been updated for Windows Server 2008 R2. I tried to run the "FixIt" .msi found at http://support.microsoft.com/kb/958662, but it said it was not applicable to my operating system. So I am thinking either Windows Server 2008 R2 already has its VSS support for Hyper-V enabled, or it still needs to be enabled but the FixIt package doesn't feel comfortable operating on an OS that wasn't RTM at the time. I went ahead and scheduled a Windows Server Backup job for 9pm tomorrow. It said it would take 86 GB, which means it MUST be counting those VMs. But will this backup fail? Can anyone confirm whether you have to apply the same registry changes on R2?
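    For reference, the registry change that KB958662 automates on 2008 is, from memory, the one below; treat the GUID and path as unverified and check them against the KB before relying on this:

        :: Register the Hyper-V VSS writer with Windows Server Backup (per KB958662)
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\WindowsServerBackup\Application Support\{66841CD4-6DED-4F4B-8F17-FD23F8DDC3DE}" /v "Application Identifier" /t REG_SZ /d "Hyper-V"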


  • Cannot use the internet in VMware

    - by user66247
    I am using VMware Workstation 7 on Ubuntu 10.10, with Windows XP Service Pack 3 installed as the guest OS. Within VMware I am using a bridged connection with a static IP address: I can ping the host's IP address, but I cannot ping the default gateway. I also tried running /etc/init.d/vmware start in a terminal; every service starts successfully except "VM communication interface socket family". I am not sure how to set up VMware networking over a wireless adapter. Thanks in advance.


  • Cursor and selection invisible if focus is lost

    - by Alois Mahdal
    "Latest" versions of Excel (I think it's since 2007) have a new added "feature" that if Excel windows loses focus, the cursor becomes invisible. Also coloring of headers is default, so it's impossible to locate cursor and/or selection as soon as I switch to other window. This annoys the hell out of me as it makes Excel almost unusable for most of tasks I need it for: keeping track of test cases while performing testing in another window. obtaining data somewhere else and porting it to Excel (I have never seen such behavior in other applications and can't even think of a justification for it.) Is is possible to turn this behavior off?


  • OpenOffice Vs Microsoft Office 2007/2010

    - by Moody Tech
    I have been asked to summarise the pros and cons of choosing between Microsoft Office and OpenOffice. I have a broad idea of what needs to be said, but I would like to open a discussion here and have a single place to go to when the time comes to give the summary to management. There are obvious points of contention: for me, the lack of compliance with Group Policy is a major concern [default save location, visibility of C:, visibility of files and folders on the HDD]. However, I am sure that functionality and compatibility will be the prime movers. We are looking at making major savings by reducing our commitment to Microsoft licensing. So what are your experiences? What happens when there are no direct equivalents? Word has a close match in OpenOffice, but the database solution's match is not as close, and neither is there an Outlook equivalent [connecting to Exchange Server and downloading all calendars, shared calendars, and scheduled events; Exchange will still exist after the move to open-source solutions]. In summary, then: what do you see as the benefits of this plan, and how do you see the problems manifesting? Discuss... Many thanks.


  • Windows Server 2008 Alerting to Low memory

    - by t1nt1n
    I have a file and print server running Windows 2008 R2, fully patched, in a vSphere environment (ESXi 5.1, fully updated). Every evening between 19:20 and 19:30 our monitoring software reports that available memory is down to 1% and performance is dire, yet there is nothing in the event logs to point to an issue. At that point in the evening I am generally the only user on the system, checking to see why these alerts are going off. Things I have done:
    - Checked whether any backups are running: none at all.
    - Checked scheduled tasks: none before or during this time period.
    - Moved the VM to another host.
    - Disabled AV to rule it out as the issue.
    The server has no memory problems during the day when fully loaded with about 50 users. It did have 4 GB of RAM provisioned, but I have increased this to 5 GB. Running PerfMon at the time (I will save the graphs tonight) shows very little CPU usage, but RAM usage climbs.
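    One way to capture what is consuming the RAM in that window is counter logging with the built-in typeperf; a sketch (counter paths are the stock English ones, and the output folder must already exist):

        :: Sample memory, cache, and per-process working sets once a minute for 30 minutes
        typeperf "\Memory\Available MBytes" "\Memory\System Cache Resident Bytes" ^
                 "\Process(*)\Working Set" -si 60 -sc 30 -o C:\PerfLogs\mem-1920.csv

    Scheduling that a few minutes before 19:20 should show whether a process working set or the file cache is responsible.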


  • Modifying Exchange 2003 accounts in Exchange 2010 management console?

    - by MartinC
    You can look at Exchange 2003 accounts via the 2010 Management Console, but is modifying them supported? There are no warnings that it is not, and everything is held in Active Directory. Adding an additional email address works... but results in Error 4, keywords "Classic":

        Task Get-MailboxStatistics writing error when processing record of index 0.
        Error: Microsoft.Exchange.Management.Tasks.MdbAdminTaskException: Mailbox
        'domain/OU/account name' doesn't exist in an Exchange 2007 or later mailbox database.

    The Management Console shows the updated change, as does ADUC in 2003.


  • What is the easiest way to apply database functionality into my daily life?

    - by Daddy Warbox
    Let me try to explain by listing some of the things I want to do:
    - Submit random thoughts, notes, facts, and to-do tasks of any sort, at any time.
    - Tag each of these submissions freely, and manage the tags centrally.
    - Associate metadata with submissions and tags.
    - Search, filter, and sort submissions. I want lots of power here.
    - Display views of submissions (including within searches) in a hierarchy, creating the hierarchies easily by ordering the relevant tags.
    I'm thinking of some kind of desktop program that allows me to do all of these things quickly, though a web service could work too if it has offline capabilities. I don't want to have to pay for this, if that's possible. Also, as I know regex and SQL, I wouldn't mind solutions involving the use of either.
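    Since the asker mentions SQL, most of that list maps onto a small three-table tag schema; a purely illustrative sketch using SQLite:

        sqlite3 notes.db <<'SQL'
        CREATE TABLE IF NOT EXISTS notes (
            id      INTEGER PRIMARY KEY,
            body    TEXT NOT NULL,
            created DATETIME DEFAULT CURRENT_TIMESTAMP);
        CREATE TABLE IF NOT EXISTS tags      (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
        CREATE TABLE IF NOT EXISTS note_tags (note_id INTEGER REFERENCES notes(id),
                                              tag_id  INTEGER REFERENCES tags(id));

        -- submit a thought and tag it
        INSERT INTO notes (body) VALUES ('buy milk');
        INSERT OR IGNORE INTO tags (name) VALUES ('todo');
        INSERT INTO note_tags (note_id, tag_id)
            SELECT n.id, t.id FROM notes n, tags t
            WHERE n.body = 'buy milk' AND t.name = 'todo';

        -- search: everything tagged 'todo', newest first
        SELECT n.created, n.body FROM notes n
            JOIN note_tags nt ON nt.note_id = n.id
            JOIN tags t       ON t.id = nt.tag_id
        WHERE t.name = 'todo' ORDER BY n.created DESC;
        SQL

    The hierarchy views would still need an application on top, but the storage side really is this small.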


  • Remote Desktop into Server 2008 R2 with Firewall On

    - by Eternal21
    I've got a fresh install of Windows Server 2008 R2, 64-bit. The problem is I can't Remote Desktop into it. I clicked 'Enable Remote Desktop' inside 'Initial Configuration Tasks' and set it to: "Allow connections only from computers running Remote Desktop with Network Level Authentication (more secure)". The thing is, this used to work just fine, and then it stopped. The only way I can get it to work now is to turn Windows Firewall completely off (under the Public network location settings). Obviously I don't want to run the server with the firewall off, so which specific firewall rules do I need to adjust, or am I doing something wrong?
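    If the rules themselves got disabled, the built-in "Remote Desktop" rule group can be re-enabled from an elevated prompt (stock English group name assumed):

        netsh advfirewall firewall set rule group="remote desktop" new enable=Yes

        :: Check which profiles the rules apply to
        netsh advfirewall firewall show rule name=all | findstr /C:"Remote Desktop"

    A common cause of "worked, then stopped" is the NIC's network location flipping to Public while the stock RDP rules are enabled only for the Domain/Private profiles; fixing the network location (or widening the rule's profiles) addresses that without disabling the firewall.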


  • Log off as local "administrator" user, get blank login screen

    - by Force Flow
    I have an imaged lab environment running Windows 7 and attached to a domain. The local Administrator account is enabled for certain maintenance and prep tasks. Every time I log off from the local Administrator account, it brings me back to the standard Ctrl+Alt+Del login screen, but when I press that combination, all the user controls vanish except for the accessibility button down in the left-hand corner. The only way I can seem to escape from this is to tap the power button to initiate a shutdown. Windows is up to date, and logging off as any other user works normally. The "hide last user" local security policy option is enabled. Has anyone seen this phenomenon before, and how can I stop it from happening?


  • Non-volatile cache RAID controllers: what kind of protection is there against NVCACHE failure?

    - by astrostl
    The battery back-up (BBU) model:
    - The admin enables write-back cache with a BBU.
    - Writes are cached to the RAID controller's RAM (major performance benefit).
    - The battery saves uncommitted cached data in the event of a power loss (reliability).
    If I lose power and come back within a day or so, my data should be both complete and uncorrupted. The downside is that if the battery is dead or low, OR EVEN IF IT IS IN A RELEARN CYCLE (drain/charge loops to ensure the battery's health), the controller reverts to write-through mode and performance suffers. What's more, relearn cycles are usually automated on a schedule which may or may not fall in the middle of heavy traffic, so they have to be manually disabled and manually scheduled for off-hours if that's a concern. Annoying either way. NV caches instead have capacitors with a sufficient charge to commit any uncommitted-to-disk data to flash. Not only is that more survivable in longer loss situations, but you don't have to concern yourself with battery death, wear-out, or relearning. All of that sounds great to me. What doesn't sound great is the prospect of that flash module having an issue. What if it's completely hosed? What if it's only partially hosed? A bit corrupted at the edges? Relearn cycles can tell when something like a simple battery is failing, but is there a similar process to verify that the flash is functional? I'm just far more trusting of a battery, warts and all. I know the card's RAM can fail and the card itself can fail, but that's common territory. In case you didn't guess: yeah, I've experienced a shocking-to-me amount of flash/SSD/etc. failure :)


  • Prevent email to root@domain

    - by kml
    I'm running Ubuntu Server 12.04 as a web server and use Exim4 for sending confirmation emails and such. Is there a way to set a system-wide email address for the root user? In other words, I'd like ALL email to go to a different address rather than [email protected]. For example, this command...

        echo "test" | mail -v -s test root

    ...would go to a different address, as well as all the cron tasks that root executes:

        # m h dom mon dow user  command
        17 *    * * *   root    cd / && run-parts --report /etc/cron.hourly
        25 6    * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6    * * 7   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6    1 * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
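    On a stock Ubuntu Exim4 setup, the usual answer is a root entry in /etc/aliases, which Exim's system_aliases router consults at delivery time (so no database rebuild step should be needed); a sketch with a placeholder destination address:

        echo 'root: admin@example.com' | sudo tee -a /etc/aliases

        # verify with the same test as above
        echo "test" | mail -v -s test root

    Check first whether /etc/aliases already contains a root: line to edit, rather than appending a second one.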


  • Is there a way to redirect certain URLs to specific web browsers in Linux?

    - by jraxxo
    I'm using Chrome as my default browser in Ubuntu 12.10, but I need Firefox for business purposes (certain websites pertaining to my work only work with Firefox). Is there a way to force Ubuntu to use Firefox for certain types of URLs (maybe as defined by a regular expression) while keeping Chrome as my default browser for everything else? Perhaps a shell script running in the background? I'd like this to work system-wide, covering links from Chrome itself as well as from PDFs/ODTs, etc. I have searched for solutions but couldn't find anything besides OpenWith, a Firefox extension that adds a button to open certain links in other browsers; that would still require me to open Firefox first, which does not help me at all. Does anyone have any ideas? Something like Choosy, but for Linux?
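    One approach is a small dispatcher script registered as the system's default browser; a sketch (the URL patterns and browser commands are placeholders, and it needs a wrapping .desktop entry to be selectable via xdg-settings set default-web-browser):

        #!/bin/sh
        # ~/bin/browser-dispatch: route work URLs to Firefox, the rest to Chrome
        case "$1" in
            *://*.corp.example.com/*|*://intranet.example.com/*)
                exec firefox "$1" ;;
            *)
                exec google-chrome "$1" ;;
        esac

    One caveat: links clicked inside Chrome open internally and never reach the system handler, so this covers PDFs, ODTs, terminals, and other apps, but not Chrome-to-Chrome navigation.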


  • Wait for a Linux machine to be rebooted

    - by Theo
    I have a small script that installs an update on my remote machine. I would like to reboot the machine remotely and, once it is back up, continue with some more commands. What I currently do is:

        ssh root@myMachine <<COMMANDS_ISSUED
        ###... tasks
        init 6
        COMMANDS_ISSUED

        sleep 180s

        ssh root@myMachine <<POST_REBOOT_COMMANDS
        ###... more stuff
        POST_REBOOT_COMMANDS

    Is there a more elegant way to do it, like pinging the machine every 5 seconds up to a maximum of 4 minutes? I play with a few Linux machines which have different boot-up times, and if my script continued immediately after reboot it could save me quite some time. (Note: I don't want to parallelize execution over all machines, as I want to see for each machine whether everything worked fine.)
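    The polling the asker describes is straightforward to express; a sketch (48 attempts x 5 seconds = 4 minutes):

        ssh root@myMachine 'init 6'
        sleep 15   # give sshd a moment to actually go down
        for i in $(seq 1 48); do
            ssh -o ConnectTimeout=5 -o BatchMode=yes root@myMachine true && break
            sleep 5
        done
        ssh root@myMachine <<'POST_REBOOT_COMMANDS'
        # ... more stuff
        POST_REBOOT_COMMANDS

    BatchMode stops a broken key setup from hanging on a password prompt, and the initial sleep avoids "succeeding" against the machine before it has actually gone down.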


  • How to enable/disable authentication without password when executing commands as superuser?

    - by 44taka
    On a Fedora 19 system which I set up for somebody a while ago, I noticed that no authentication is required when commands are executed as the superuser. So, for example, when running Yum Extender, configuring the firewall, or running some command with sudo in the terminal, I am not asked to provide a password. (With graphical applications the authentication dialog pops up for a few milliseconds.) For better security I would like to disable this automatic, authentication-less assumption of superuser privileges. I do not remember if or how I enabled it. I might have enabled it for the convenience of this machine's non-pro user, but I did not do any "fancy" things (like editing config files). I did not edit the sudoers file; I just checked that. I might have ticked a "Do not ask for password again" checkbox or something similar. Whatever I did, I would like to undo it and enforce authentication for superuser tasks again.
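    Two places worth checking on Fedora (stock paths; the rule described at the end is a guess at what such a convenience setting would look like):

        # passwordless sudo is usually a NOPASSWD line somewhere in sudoers
        sudo grep -rn NOPASSWD /etc/sudoers /etc/sudoers.d/

        # graphical tools go through polkit; look for local rules overriding the defaults
        ls /etc/polkit-1/rules.d/ /usr/share/polkit-1/rules.d/

    A polkit rule returning polkit.Result.YES for the wheel group, for example, would explain an authentication dialog that flashes past without asking for anything.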


  • Where's my free space gone? (on my mac) [closed]

    - by Cawas
    Possible duplicate: "Something's slowly eating my HD space". Somehow part of the files from my USB disk, 40 GB of them (exactly the space I was missing), were copied to /Volumes/, and when I mounted the disk it was called "600GB Disk 2", while "600GB Disk" was filled with the duplicated data. All that happened slowly this morning when I turned on my MacBook; I noticed it thanks to Disk Inventory X. I could actually see the copies in GrandPerspective too, but I thought it was just scanning my USB disk limited to that folder for whatever reason. In Disk Inventory X I could see /Volumes/ listing one folder as a folder and the second one as a link, like it should be. Looking at that folder, I quickly associated its contents with my scheduled backup in Carbon Copy Cloner. I'm still not sure why the USB disk was mounted under the wrong name, but what happened is that CCC stores the full path information of the source and destination, so when it ran the scheduled backup it created the destination path that didn't exist and copied everything there, when it should have been copying into the mounted volume. That's solved for now, but what else could I have done to diagnose this kind of issue next time?
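    On the "what else could I have done" question, a quick first check is comparing what is mounted against what is physically sitting under /Volumes (a sketch):

        mount                    # what is actually mounted, and where
        sudo du -sh /Volumes/*   # a plain directory here, rather than a mount point,
                                 # shows a real on-disk size even with the USB disk unplugged

    A large directory under /Volumes while the external disk is disconnected is exactly the backup-into-a-dead-mount-point failure mode described above.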


  • Nagios service check

    - by DRH
    I am new to Nagios and we have a small issue I need assistance with. Many of the machines we monitor can go unresponsive for a while when very CPU-intensive tasks run. This makes Nagios send warnings and alerts while these hosts are busy: things like 'ping timeout', 'zombie processes', and even swap space warnings, when in actuality there is no problem. Is there a way to configure Nagios not to send such alerts, but instead to check x number of times over a period of time, and only send an alert at the end of that period if the server in question has not recovered? Looking at the commands.cfg file, I see entries like this:

        define command{
            command_name    check_local_swap
            command_line    $USER1$/check_swap -w $ARG1$ -c $ARG2$
        }

    How could I modify this example to accomplish what I want? Thanks
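    The knobs for this live on the service definition rather than in commands.cfg: max_check_attempts and retry_interval keep a failing service in a SOFT state, and notifications only go out once the state goes HARD. A hedged sketch (Nagios 3 syntax; the names and values are illustrative):

        define service {
            use                  generic-service
            host_name            busybox01
            service_description  Swap
            check_command        check_local_swap!20%!10%
            max_check_attempts   5    ; 5 consecutive failures before the state goes HARD
            retry_interval       3    ; minutes between re-checks while the state is SOFT
        }

    With those values, a host that recovers within roughly 15 minutes of going unresponsive never triggers a notification.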


  • If I use a facade class with generic methods to access the JPA API, how should I provide additional processing for specific types?

    - by Shaun
    Let's say I'm making a fairly simple web application using Java EE specs (I've heard this is possible). In this app I only have about 10 domain/data objects, represented by JPA entities. Architecturally, I would consider the JPA API to perform the role of a DAO. Of course, I don't want to use the EntityManager directly in my UI (JSF), and I need to manage transactions, so I delegate these tasks to the so-called service layer. More specifically, I would like to handle them in a single DataService class (often also called CrudService) with generic methods. See this article by Adam Bien for an example interface: http://www.adam-bien.com/roller/abien/entry/generic_crud_service_aka_dao. My project differs from that article in that I can't use EJBs, so my service classes are essentially just named beans, and I handle transactions manually. Regardless, what I want is a single interface for simple CRUD operations on my data objects, because having a different class for each data type would lead to a lot of duplicate and/or unnecessary code. Ideally, my views would be able to use a method such as

        public <T> List<T> findAll(Class<T> type) { ... }

    to retrieve data. Using JSF, it might look something like this:

        <h:dataTable value="#{dataService.findAll(data.class)}" var="d"> ... </h:dataTable>

    Similarly, after validating forms, my controller could submit the data with a method such as

        public <T> void add(T entity) { ... }

    Granted, you'd probably want to return something useful to the caller. In any case, this works well as long as your data can be treated as homogeneous in this manner. Alas, it breaks down when you need to perform additional processing on certain objects before passing them on to JPA. For example, say I'm dealing with Books and Authors, which have a many-to-many relationship: each Book has a set of IDs referring to its authors, and each Author has a set of IDs referring to their books. Normally JPA can manage this kind of relationship for you, but in some cases it can't (the Google App Engine JPA provider doesn't support it, for example). Thus, when I persist a new Book, I may need to update the corresponding Author entities. My question, then, is whether there's an elegant way to handle this, or whether I should reconsider the sanity of my whole design. Here are a couple of ways I see of dealing with it:
    1. The instanceof operator. I could use this to target certain classes when special processing is needed. Maintainability suffers and it isn't beautiful code, but if there are only 10 or so domain objects it can't be all that bad... could it?
    2. Make a different service for each entity type (i.e. BookService and AuthorService). All services would inherit from a generic DataService base class and override methods where special processing is needed. At that point, you could probably just call them DAOs instead.
    As always, I appreciate the help. Let me know if any clarifications are needed, as I left out many smaller details.


  • database design help for game / user levels / progress

    - by sprugman
    Sorry this got long and all prose-y. I'm creating my first truly gamified web app and could use some help thinking about how to structure the data.

    The set-up: Users need to accomplish tasks in each of several categories before they can move up a level. I've got my Users, Tasks, and Categories tables, and a UserTasks table which joins the three ("User 3 has added Task 42 in Category 8. Now they've completed it."). That's all fine and working wonderfully.

    The challenge: I'm not sure of the best way to track progress toward each level in the individual categories. The "business" rules are: you have to achieve a certain number of points in each category to move up; if you reach the points needed in Cat 8 but still have other work to do to complete the level, any new Cat 8 points count toward your overall score but don't "roll over" into the next level; the number of categories is small (five currently) and unlikely to change often, but by no means absolutely fixed; and the number of points needed to level up will vary per level, probably by a formula, or perhaps a lookup table. So the challenge is to track each user's progress toward the next level in each category. I've thought of a few potential approaches:
    1. Add a column to the Users table for each category and reset them all to zero each time a user levels up.
    2. Have a separate UserProgress table with a row for each category for each user and the number of points they have (basically a many-to-many version of #1).
    3. Add a userLevel column to the UserTasks table and derive progress from it with some kind of SUM; the user's current level stays a simple int in the Users table.

    Pros and cons: (1) seems by far the most straightforward, but it's also the least flexible. Perhaps I could use a naming convention based on the category IDs to help overcome some of that (with code like "select cats; for each cat, get the value from Users.progress_{cat.id}"). It's also the one where I lose the most data: I won't know which points counted toward leveling up. I don't have a need in mind for that, so maybe I don't care. (2) seems complicated: every time I add or remove a user or a category, I have to maintain the other table, and I foresee synchronization challenges. (3) is somewhere in between: cleaner than #2, but less intuitive than #1. To find out where a user is, I'd need mildly complex SQL like:

        SELECT categoryId, SUM(points)
        FROM UserTasks
        WHERE userId = {user.id} AND countsTowardLevel = {user.level}
        GROUP BY categoryId

    Hmm... that doesn't seem so bad. I think I'm talking myself into #3 here, but would love any input, advice or other ideas.


  • SQL SERVER – Concat Strings in SQL Server using T-SQL – SQL in Sixty Seconds #035 – Video

    - by pinaldave
    Concatenating strings is one of the most common tasks in SQL Server, and every developer comes across it: we concatenate strings whenever we display a person's full name from a first name and a last name. In this video we will see various methods of concatenating strings. SQL Server 2012 introduces the new CONCAT function, which concatenates strings much more efficiently. When we concatenate values with '+' in SQL Server, we have to make sure the values are in string format; if we attempt to concatenate an integer, we have to convert it to a string or else it will throw an error. With the newly introduced CONCAT function in SQL Server 2012 we do not have to worry about this kind of issue: it concatenates strings and integers without casting or converting them. You can specify various values as parameters to CONCAT and it concatenates them together. Let us see how to concatenate values in sixty seconds. Here is the script used in the video:

        -- Method 1: Concatenating two strings
        SELECT 'FirstName' + ' ' + 'LastName' AS FullName

        -- Method 2: Concatenating two numbers
        SELECT CAST(1 AS VARCHAR(10)) + ' ' + CAST(2 AS VARCHAR(10))

        -- Method 3: Concatenating values of table columns
        SELECT FirstName + ' ' + LastName AS FullName
        FROM AdventureWorks2012.Person.Person

        -- Method 4: SQL Server 2012 CONCAT function
        SELECT CONCAT('FirstName', ' ', 'LastName') AS FullName

        -- Method 5: SQL Server 2012 CONCAT function
        SELECT CONCAT('FirstName', ' ', 1) AS FullName

    Related tips in SQL in Sixty Seconds:
    - SQL SERVER – Concat Function in SQL Server – SQL Concatenation
    - String Function – CONCAT() – A Quick Introduction
    - 2012 Functions – FORMAT() and CONCAT() – An Interesting Usage
    - A Quick Trick about SQL Server 2012 CONCAT Function – PRINT
    - A Quick Trick about SQL Server 2012 CONCAT function
    What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com)


  • Export data to Excel from Silverlight/WPF DataGrid

    - by outcoldman
    Exporting data from a DataGrid to Excel is a very common task, and it can be solved in different ways; the right choice depends on the kind of app you are designing. If you are developing an app for the enterprise that will be installed on several computers, you can state system requirements the client must meet for your app to work (or the customer will hand you the system requirements your app should run under). In that case you can use COM for the export, driving the infrastructure of Excel or OpenOffice. This approach gives you much more flexibility and lets you use all the features of the Excel application; I'll discuss it below. The other case: your app is for personal use and can be installed on any home computer, and it is not reasonable to ask the user to install MS Office or OpenOffice just to use your app. Here you can use third-party tools for export, or export to an XML/HTML format that MS Office can read (the approach used by JIRA). But in this case it is harder to satisfy user requirements, like creating a document with landscape orientation and defined fields for printing. In this article I'll show you how to work with the Excel object from .NET 4 and Silverlight 4 using dynamic objects, and give you an approach for exporting data from the Silverlight and WPF DataGrid controls.


  • New SSIS tool on Codeplex – SSIS Log Analyzer

    I stumbled across a new SSIS tool on CodePlex today, the SSIS Log Analyzer, which was only released a few days ago. Whilst it is a beta release and currently only supports 2005 (2008 is promised), it looks quite interesting. It seems to be a fancy log viewer, but with some clever features and a nice-looking front-end. I've only read the documentation so far, but it has graphs and a debug view that shows your package with the colour animations similar to when debugging in BIDS, and everyone knows the way the pretty colours and numbers change is the best bit! I'll quote some of the features for you here and then let you make your own mind up: is it useful in the real world?
    - Option to analyze the logs manually by applying row and column filters over the log data, or by using queries to specify more complex criteria.
    - Automated performance analysis, which provides a quick graphical look at which tasks spent most time during package execution.
    - Rerun (debug) the entire sequence of events which happened during package execution, showing the flow of control in graphical form and changes in runtime values for each task, like execution duration.
    - Support for auto analyzers to automatically find issues and provide suggestions for problems which can be figured out with the help of SSIS logs and/or the package.
    - Option to analyze just the log file, or log and package together.
    - Provides a lightweight environment for a quick look at the package; opening it in BIDS takes some time since, being an authoring environment, it does all sorts of validations, resulting in some delay.
    See http://ssisloganalyzer.codeplex.com/ for more details.


